Scientific applications mainly rely on the MPI parallel programming model to reach high performance on supercomputers. The advent of manycore architectures (a larger number of cores and a lower amount of memory per core) leads to mixing MPI with a thread-based model like OpenMP. But integrating two different programming models inside the same application can be tricky and can generate complex bugs. Thus, the correctness of hybrid programs requires special care regarding the location of MPI calls. For example, identical MPI collective operations cannot be performed by multiple non-synchronized threads. To tackle this issue, this paper proposes a static analysis and a reduced dynamic instrumentation to detect bugs related to misuse of M...
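The collective-ordering rule stated in the abstract above can be illustrated with a minimal hybrid MPI+OpenMP sketch. This is a hypothetical example, not the paper's detection tool: only the MPI and OpenMP constructs themselves are standard; the program around them is illustrative. Each MPI process must post a given collective exactly once, so inside a threaded region the call has to be funneled through a single thread.

```c
/* Sketch of the MPI-calls-location rule, assuming an MPI_THREAD_FUNNELED
 * build (compile with: mpicc -fopenmp example.c). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank, sum = 0;

    #pragma omp parallel
    {
        /* WRONG: if every thread issued the same collective here, a single
         * MPI process would post it several times and the matching across
         * ranks would break:
         *   MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
         */

        /* RIGHT: funnel the collective through one thread per process. */
        #pragma omp master
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        #pragma omp barrier  /* no thread reads `sum` before it is ready */
    }

    if (rank == 0) printf("sum = %d\n", sum);
    MPI_Finalize();
    return 0;
}
```

The `omp master` + `omp barrier` pair is one common idiom for the FUNNELED level; the commented-out call shows the multi-threaded misuse the paper's analysis targets.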
As parallel systems are commonly being built out of increasingly large multi-core chips, application...
High-performance computing codes often combine the Message-Passing Interface (MPI) with a shared-mem...
The use of the parallelism of current architectures in the field of high-performance compu...
MPI is the most widely used parallel programming model. But the decreasing amoun...
Nowadays most scientific applications are parallelized based on MPI communicat...
Collective MPI communications have to be executed in the same order by all pro...
Supercomputers are rapidly evolving, now with millions of processing units, pos...
The advent of exascale requires more scalable and efficient techniques to help...
Determining if a parallel program behaves as expected on any execution is chal...
MPI-3 provides functions for non-blocking collectives. To help programmers intr...
The Message Passing Interface (MPI) is a parallel programming model used to ex...
Abstract: We propose an approach integrating static and dynamic program analyses to detect threads...
Hybrid MPI+Threads programming has emerged as an alternative model to the "MPI everywhere" model to...
The increasing computational demand of simulations motivates the use of parallel computing systems. At t...
By allowing computation/communication overlap, MPI nonblocking collectives (NB...