Collective MPI communications have to be executed in the same order, and the same number of times, by all processes in their communicator; otherwise a deadlock occurs. As soon as the control flow involving these collective operations becomes more complex, in particular when it includes conditionals on process ranks, ensuring the correctness of such code is error-prone. In this paper we propose a static analysis to detect when such a situation occurs, combined with a code transformation that prevents deadlocks. We show on several benchmarks the small impact on performance and the ease of integration of our techniques into the development process.
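The deadlock pattern this abstract describes can be illustrated with a minimal, hypothetical C/MPI sketch (not taken from the paper): a collective call guarded by a conditional on the process rank, so that only some processes reach it.

```c
/* Hypothetical sketch of a rank-dependent collective, the error
 * pattern the static analysis above targets. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* BUG (shown commented out so this sketch runs): only rank 0
         * would call the collective; all other ranks never reach a
         * matching call, so rank 0 would block forever.
         *
         * MPI_Barrier(MPI_COMM_WORLD);
         */
        printf("rank 0 doing rank-specific work\n");
    }

    /* Correct placement: outside the rank-dependent branch, so every
     * process in the communicator executes the collective the same
     * number of times, in the same order. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

A code transformation of the kind the paper proposes would, in effect, move the collective out of the conditional (or insert matching calls on the other branches) so that all control-flow paths agree on the sequence of collectives.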
Checkpointing is a classical technique to mitigate the overhead of adjoint Al-...
Increasing computational demand of simulations motivates the use of parallel computing systems. At t...
By allowing computation/communication overlap, MPI nonblocking collectives (NB...
Nowadays most scientific applications are parallelized based on MPI communicat...
Scientific applications mainly rely on the MPI parallel programming model to r...
MPI-3 provides functions for non-blocking collectives. To help programmers intr...
Determining if a parallel program behaves as expected on any execution is chal...
The Message Passing Interface (MPI) is the standard API for parallelization in high-performance and ...
Communications are a critical part of HPC simulations, and one of the main foc...
The Message Passing Interface (MPI) is a parallel programming model used to ex...
Distributed systems are often developed using the message passing paradigm, where the only way to...
Formal dynamic analysis of MPI programs is critically important since conventional...
MPI is the most widely used parallel programming model. But the reducing amoun...
Formal dynamic analysis of Message Passing Interface (MPI) programs is crucially importan...
The advent of exascale requires more scalable and efficient techniques to help...