MPI is the most widely used parallel programming model. But the decreasing amount of memory per compute core tends to push MPI to be mixed with shared-memory approaches such as OpenMP. In such cases, the interoperability of these two models is challenging. The MPI 2.0 standard defines the so-called thread level to indicate how MPI will interact with threads. But even though hybrid programs are becoming more common, debugging tools are still lacking, particularly for checking thread-level compliance. To fill this gap, we propose a static analysis to verify the thread level required by an application. This work extends PARCOACH, a GCC plugin focused on the detection of MPI collective errors in MPI and MPI+OpenMP programs. We validated o...
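As a minimal illustration of the thread-level notion discussed above (not code from PARCOACH itself), the sketch below requests a thread level with MPI_Init_thread and checks the level actually granted; the choice of MPI_THREAD_MULTIPLE as the required level is an illustrative assumption.

    /* Sketch: requesting and checking an MPI thread level. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Ask for full multi-threaded support; the library may grant less. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            fprintf(stderr, "provided thread level (%d) is lower than requested\n", provided);
        MPI_Finalize();
        return 0;
    }

The thread-level constants are ordered (MPI_THREAD_SINGLE < MPI_THREAD_FUNNELED < MPI_THREAD_SERIALIZED < MPI_THREAD_MULTIPLE), which is what makes the comparison above meaningful and what a thread-level compliance check has to reason about.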
With a large variety and complexity of existing HPC machines and uncertainty regarding exact future ...
MPI-3 provides functions for non-blocking collectives. To help programmers intr...
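As a minimal sketch of what an MPI-3 non-blocking collective looks like (illustrative only, not code from the cited work): MPI_Ibcast starts the broadcast, independent computation can overlap it, and MPI_Wait completes it.

    /* Sketch: overlapping computation with a non-blocking broadcast. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int data = 0;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);
        /* ... work that does not touch 'data' may overlap here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Finalize();
        return 0;
    }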
Collective MPI communications have to be executed in the same order by all pro...
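The sketch below shows the kind of error this ordering requirement rules out (the branch on rank parity is an illustrative assumption): even and odd ranks reach the same two collectives in different orders, which can deadlock.

    /* Sketch: mismatched collective order across ranks. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, x = 1, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank % 2 == 0) {
            MPI_Barrier(MPI_COMM_WORLD);                                   /* even ranks: barrier first */
            MPI_Allreduce(&x, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        } else {
            MPI_Allreduce(&x, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);  /* odd ranks: allreduce first */
            MPI_Barrier(MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }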
Scientific applications mainly rely on the MPI parallel programming model to r...
Supercomputers are rapidly evolving, now with millions of processing units, pos...
We propose an approach that integrates static and dynamic program analyses to detect threads...
Nowadays, most scientific applications are parallelized based on MPI communicat...
High-performance computing codes often combine the Message-Passing Interface (MPI) with a shared-mem...
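A minimal sketch of such a hybrid MPI+OpenMP structure (illustrative, not taken from the cited work): MPI is initialized with MPI_THREAD_FUNNELED, OpenMP threads share the node-local work, and only the master thread issues MPI calls.

    /* Sketch: hybrid MPI+OpenMP with the FUNNELED thread level; compile with -fopenmp. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        #pragma omp parallel
        {
            /* ... threaded, node-local computation ... */
            #pragma omp master
            MPI_Barrier(MPI_COMM_WORLD);  /* only the master thread calls MPI */
        }
        MPI_Finalize();
        return 0;
    }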
The advent of exascale requires more scalable and efficient techniques to help...
Many-core architectures, such as the Intel Xeon Phi, provide dozens of cores and hundreds of hardwar...
Proceedings of: First International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2014...
To provide increasing computational power for numerical simulations, supercomputers evolved and aren...
The demand for ever-growing computing capabilities in scientific computing and simulation has led to...
Holistic tuning and optimization of hybrid MPI and OpenMP applications is becoming a focus for paralle...
Almost all high-performance computing applications are written in MPI, which will continue to be the...