It is now widely recognized that increased levels of parallelism are a necessary condition for improved application performance on multicore computers. However, as the number of cores increases, the memory-per-core ratio is expected to decrease further, making per-core memory efficiency of parallel programs an even more important concern in future systems. For many parallel applications, the memory requirements can be significantly larger than for their sequential counterparts and, more importantly, their memory utilization depends critically on the schedule used when running them. To address this problem, we propose bounded memory scheduling (BMS) for parallel programs expressed as dynamic task graphs, in which an upper bound is imposed on t...
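The following is a minimal, hedged sketch of the general idea behind memory-bounded task-graph scheduling, not the BMS algorithm proposed in the cited work: a greedy list scheduler that starts a ready task only if its memory footprint still fits under a global bound. All names (bounded_memory_schedule, mem_bound, workers) and the greedy start policy are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: greedy list scheduling of a task DAG under a
# global memory bound. This is NOT the BMS algorithm of the cited work;
# the data layout and the policy are assumptions made for this example.

def bounded_memory_schedule(deps, mem, dur, mem_bound, workers):
    """deps: dict task -> set of predecessor tasks (the DAG);
    mem:  dict task -> memory held while the task runs;
    dur:  dict task -> execution time;
    mem_bound: cap on total resident memory; workers: parallel worker count.
    Returns a list of (start_time, task) pairs."""
    tasks = list(deps)
    indeg = {t: len(deps[t]) for t in tasks}
    succ = {t: [] for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    running, schedule = [], []          # running holds (finish_time, task)
    time, used = 0, 0
    while ready or running:
        # Start ready tasks while a worker is free and the bound is respected.
        for t in list(ready):
            if len(running) < workers and used + mem[t] <= mem_bound:
                ready.remove(t)
                running.append((time + dur[t], t))
                used += mem[t]
                schedule.append((time, t))
        if not running:
            raise ValueError("a single task exceeds the memory bound")
        # Advance to the next completion, release its memory, unlock successors.
        running.sort()
        finish, done = running.pop(0)
        time, used = finish, used - mem[done]
        for s in succ[done]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return schedule
```

For example, with deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}, unit durations and unit footprints, and two workers, this sketch runs "b" and "c" concurrently when mem_bound is at least 2 but serializes them when mem_bound is 1, illustrating how a memory cap can change the schedule.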
A model for parallel and distributed programs, the dynamic process graph (DPG), is investiga...
This work focuses on dynamic DAG scheduling under memory constraints. We targe...
The memory usage of sparse direct solvers can be the bottleneck when solving large-scale problems....
The era of manycore computing will bring new fundamental challenges that the techniques designed for...
Scientific workflows are frequently modeled as Directed Acyclic Graphs (DAGs) of tasks, which represe...
Scheduling large task graphs is an important ...
Task graphs are used for scheduling tasks on parallel processors when the tasks have dependencies. I...
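To make the task-graph model mentioned in several of these works concrete, here is a tiny, purely illustrative sketch (the graph, task names, and the level-by-level grouping are invented for this example) that groups a dependency graph into waves of tasks that may run in parallel:

```python
# Illustrative only: group a task DAG into levels of tasks whose
# predecessors have all completed, i.e. tasks that may run in parallel.

def parallel_levels(deps):
    """deps: dict task -> set of predecessor tasks.
    Returns a list of levels; each task appears after all its predecessors."""
    level = {}
    remaining = dict(deps)
    depth = 0
    while remaining:
        # Tasks whose predecessors are all already placed form the next level.
        batch = [t for t, preds in remaining.items()
                 if all(p in level for p in preds)]
        if not batch:
            raise ValueError("dependency cycle detected")
        for t in batch:
            level[t] = depth
            del remaining[t]
        depth += 1
    levels = [[] for _ in range(depth)]
    for t, d in level.items():
        levels[d].append(t)
    return levels

# Hypothetical workflow: "prep" precedes two independent solves, then a merge.
print(parallel_levels({"prep": set(),
                       "solve1": {"prep"}, "solve2": {"prep"},
                       "merge": {"solve1", "solve2"}}))
# -> [['prep'], ['solve1', 'solve2'], ['merge']]
```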
In this paper, we present an efficient algorithm for compile-time scheduling ...
Scheduling problems are essential for decision making in many academic disciplines, including operat...
In this paper we study a scheduling problem arising from executing numerical simulations on HPC arch...
Many of today's high-level parallel languages support dynamic, fine-grained parallelism. These ...
In this paper, we survey algorithms that allocate a parallel program represented by an edge-weighted...
This paper addresses the problem of scheduling iterative task graphs on distributed memory architect...