The Message Passing Interface (MPI) has become a de facto standard for parallel programming. The ultimate goal of parallel processing is high performance, and this motivates a highly optimized MPI implementation. When an application calls an MPI communication routine, data is copied between user memory and the memory areas managed by the MPI library. The speed of this transfer depends on a multitude of factors, including the architecture, the amount of data, the data layout, and whether the data is referenced right before or after a transfer. There are numerous ways to copy data from one location to another, and their characteristics combined with the data properties yield different efficiencies. The information needed to select t...
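As an illustrative sketch (not code from the work above), the copy cost this abstract alludes to can be seen in the packing step an MPI implementation performs before sending noncontiguous data: strided elements must first be gathered into a contiguous buffer. All names below are hypothetical.

```python
# Illustrative sketch: packing a strided (noncontiguous) layout into a
# contiguous buffer, the kind of copy an MPI library performs between
# user memory and its own buffers. Not code from any MPI implementation.

def pack_strided(buf, offset, blocklen, stride, count):
    """Gather `count` blocks of `blocklen` elements, `stride` apart,
    starting at `offset`, into one contiguous list."""
    packed = []
    for i in range(count):
        start = offset + i * stride
        packed.extend(buf[start:start + blocklen])
    return packed

# Example: a 4x4 row-major matrix stored flat; extract column 1
# (blocklen=1, stride=4) so it can be sent as one contiguous message.
matrix = list(range(16))
column = pack_strided(matrix, offset=1, blocklen=1, stride=4, count=4)
print(column)  # → [1, 5, 9, 13]
```

How this copy is implemented (element-by-element loop, memcpy per block, vectorized gather) is exactly the kind of architecture- and layout-dependent choice the abstract describes.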
With the increased failure rate expected in future extreme scale supercomputer...
Mapping parallel applications to multi-processor architectures requires information about...
The Message Passing Interface (MPI) has been widely used in the area of parallel computing due to it...
The availability of cheap computers with outstanding single-processor performance coupled with Ether...
This work presents an optimization of MPI communications, called Dynamic-CoMPI, which uses two techn...
MPI is the de facto standard for portable parallel programming on high-end sy...
MPI derived datatypes allow users to describe noncontiguous memory layout and communicate ...
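To make the derived-datatype idea concrete, the following sketch expands a vector-style description (count, blocklength, stride) into the element displacements it denotes, which is what lets MPI communicate noncontiguous data without manual packing. The helper is hypothetical and illustrates only the semantics, not the MPI API or any implementation.

```python
# Illustrative sketch of vector-style derived-datatype semantics:
# a (count, blocklength, stride) description expands to a list of
# byte displacements into the user's buffer. Hypothetical helper,
# not the MPI API.

def vector_displacements(count, blocklength, stride, extent):
    """Byte offsets described by an MPI_Type_vector-like layout.
    `extent` is the size in bytes of one base element."""
    return [
        (i * stride + j) * extent
        for i in range(count)
        for j in range(blocklength)
    ]

# A column of a 4x4 matrix of 8-byte doubles:
# count=4, blocklength=1, stride=4 elements.
print(vector_displacements(4, 1, 4, 8))  # → [0, 32, 64, 96]
```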
MPI is widely used for programming large HPC clusters. MPI also includes persistent operations, whic...
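A minimal sketch of the persistent-operation pattern this abstract refers to: argument processing happens once at initialization, and each start/completion cycle reuses the prepared plan, amortizing setup cost over repeated transfers. The class below is hypothetical, mimicking only the MPI_Send_init/MPI_Start usage shape, not any implementation's internals.

```python
# Illustrative sketch of persistent communication: set up once,
# start many times. Hypothetical class, not the MPI API.

class PersistentSend:
    def __init__(self, buf, dest):
        # One-time setup, as MPI_Send_init would do: record and
        # validate the transfer parameters.
        self.buf = buf
        self.dest = dest
        self.started = 0

    def start(self):
        # Cheap restart, as MPI_Start: no re-validation, the
        # prepared plan is reused.
        self.started += 1
        return list(self.buf)  # the data that would go on the wire

req = PersistentSend([1, 2, 3], dest=0)
for _ in range(3):           # e.g. one send per solver iteration
    payload = req.start()
print(req.started, payload)  # → 3 [1, 2, 3]
```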
The Message-Passing Interface (MPI) is a widely-used standard library for programming parallel appli...
The new generation of parallel applications is complex, involving simulation of dynamically varying s...
This paper addresses performance portability of MPI code on multiprogrammed shared memory machines. ...
Moving data between processes has often been discussed as one of the major bottlenecks in parallel c...
We have developed a new MPI benchmark package called MPIBench that uses a very precise and portable ...
In order for collective communication routines to achieve high performance on different platforms, t...
The emergence of multicore processors raises the need to efficiently transfer large amounts of data ...