Parallel computing is now popular and mainstream, but performance and ease of use remain elusive to many end-users. There is a need for performance improvements that can be easily retrofitted to existing parallel applications. In this paper we present MPI process swapping, a simple performance-enhancing add-on to the MPI programming paradigm. MPI process swapping improves performance by dynamically choosing the best available resources throughout application execution, using MPI process over-allocation and real-time performance measurement. Swapping provides fully automated performance monitoring and process management, and a rich set of primitives to control execution behavior manually or through an external tool. Swapping, as defi...
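The core decision behind process swapping, as the abstract describes it, is to over-allocate processes, measure their performance at runtime, and keep only the fastest ones active. A minimal sketch of that selection step (not the paper's implementation; `select_fastest` and the timing dictionary are illustrative assumptions):

```python
def select_fastest(timings, k):
    """Given per-process iteration timings (rank -> seconds), return the k
    fastest ranks; the remaining over-allocated ranks stay idle as spares
    until the next measurement round, when a slow rank may be swapped out."""
    return sorted(timings, key=timings.get)[:k]

# Example: 5 over-allocated processes, 3 needed; rank 2 has slowed down
# (e.g. its node became loaded), so it is swapped out for a faster spare.
timings = {0: 1.0, 1: 1.1, 2: 3.5, 3: 0.9, 4: 1.2}
active = select_fastest(timings, 3)  # -> [3, 0, 1]
```

In the real system this decision would drive MPI-level process management; the sketch only captures the resource-selection policy.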
In the quest for extreme-scale supercomputers, the High Performance Computing (HPC) community has de...
Abstract: Mapping parallel applications to multi-processor architectures requires information about...
The desire for high performance on scalable parallel systems is increasing the complexity and the...
Parallel computing is now popular and mainstream, but performance and ease of use remain elusive to ...
Despite the enormous amount of research and development work in the area of parallel computing, it ...
This report describes an implementation of MPI-1 on the GENESIS cluster operating system and compare...
The work in this paper focuses on providing malleability to MPI applications by using a novel perfor...
Computation–communication overlap and good load balance are features central to high performance of ...
The new generation of parallel applications is complex, involves simulation of dynamically varying s...
We have developed a new MPI benchmark package called MPIBench that uses a very precise and portable ...
High-Performance Computing (HPC) is currently facing significant challenges. T...
The Message Passing Interface (MPI) is widely used to write sophisticated parallel applications rang...
We propose extensions to the Message-Passing Interface (MPI) Standard that provide for dynamic proce...
MPI is the de facto standard for portable parallel programming on high-end sy...