Asynchronous task-based programming models are gaining popularity to address the programmability and performance challenges in high performance computing. One of the main attractions of these models and runtimes is their potential to automatically expose and exploit overlap of computation with communication. However, we find that inefficient interactions between these programming models and the underlying messaging layer (in most cases, MPI) limit the achievable computation-communication overlap and negatively impact the performance of parallel programs. We address this challenge by exposing and exploiting information about MPI internals in a task-based runtime system to make better task-creation and scheduling decisions. In particular, we ...
In the exascale computing era, applications are executed at larger scale than ever before, which results ...
Hybrid programming combining task-based and message-passing models is an increasingly popular techni...
While task-based programming, such as OpenMP, is a promising solution to explo...
A previous version of this document was submitted for publication in October 2008. Communication over...
This talk discusses optimized collective algorithms and the benefits of leveraging independent hardw...
Editors: Michael Klemm; Bronis R. de Supinski et al. Heterogeneous supercompute...
In High Performance Computing (HPC), minimizing communication overhead is one of the most important ...
Even today, supercomputing systems have already reached millions of cores in a single machine, which...
In recent years, more and more applications have been using irregular computation models in ...
Hiding communication latency is an important optimization for parallel programs. Programmers or com...
In modern MPI applications, communication between separate computational nodes quickly adds up to a s...
Task-based programming models are increasingly being adopted due to their ability to express paralle...
Parallel runtime systems such as MPI or task-based libraries provide models to...