The Data-Flow Graph (DFG) of a parallel application is frequently used to make scheduling decisions, based on the information it models (dependencies among the tasks and the volume of data exchanged). In the case of MPI-based programs, the DFG may be built at run-time by overloading the data-exchange primitives. This article presents a library that enables the generation of the DFG of an MPI program, and its use to analyze network contention on a test application: the Linpack benchmark. It is the first step towards automatic mapping of an MPI program onto a distributed architecture.
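The run-time construction described above relies on intercepting the data-exchange primitives and recording one edge per message. As a language-neutral sketch only (the library's actual interface is not given in this abstract; `build_dfg` and the record format below are hypothetical), the wrapped primitives could emit (source rank, destination rank, byte count) records that are then aggregated into a volume-weighted DFG:

```python
from collections import defaultdict

def build_dfg(records):
    """Aggregate intercepted message records into a weighted DFG.

    Each record is (src_rank, dst_rank, nbytes), as a wrapped
    send/receive pair might log it at run-time. The result maps each
    (src, dst) edge to the total volume exchanged, which is the
    information a contention-aware scheduler needs.
    """
    dfg = defaultdict(int)
    for src, dst, nbytes in records:
        dfg[(src, dst)] += nbytes
    return dict(dfg)

# Hypothetical trace: rank 0 sends twice to rank 1, once to rank 2.
trace = [(0, 1, 4096), (0, 1, 1024), (0, 2, 512)]
print(build_dfg(trace))  # {(0, 1): 5120, (0, 2): 512}
```

In a real MPI implementation of this idea, the interception would typically use the standard PMPI profiling interface (each `MPI_Send`-family wrapper logging its arguments before calling the `PMPI_`-prefixed entry point), so the application itself needs no modification.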
This paper describes a method of analysis for detecting and minimizing memory latency using a direct...
Moving data between processes has often been discussed as one of the major bottlenecks in parallel c...
Computational grids promise to deliver vast computing power as transparently as the electric power ...
The Message Passing Interface (MPI) standard defines virtual topologies that can be applied to syste...
The need for intuitive parallel programming designs has grown with the rise of modern many-core proc...
Many parallel and distributed applications have well defined structure which can be described by few...
The critical path is one of the fundamental runtime characteristics of a parallel program. It identi...
This paper presents a novel method for the analysis and representation of parallel programs with MPI....
Message Passing Interface (MPI) is the most commonly used paradigm in writing parallel programs sinc...
Network contention has an increasingly adverse effect on the performance of parallel applications wi...
Most scheduling algorithms for computational grids rely on an application model represent...
Finely tuning MPI applications (number of processes, granularity, collective op...
Currently, most scientific applications based on MPI adopt a compute-centric architecture. Needed da...
In this contribution we present an optimised method for mapping of data-flow graphs onto parallel pr...
A considerable fraction of scientific discovery nowadays relies on computer simulations. High Per...