This study focuses on gaining insights into the usage of the Message-Passing Interface (MPI) in a large set of High-Performance Computing (HPC) codes by analyzing MPI function calls and their argument usage patterns. Previous work has focused on analyzing MPI feature usage by statically matching function calls. However, this approach does not reveal common argument-specific call patterns or cross-interactions between MPI functions. In particular, MPI exposes its internal data structures using handles, and users pass these handles to MPI constructor functions, e.g., to create custom communicators. Tracking the relevant MPI arguments of these constructors and cross-referencing them with other MPI calls in a target code can reveal common user ...
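The handle-tracking idea described above can be sketched as a tiny static analysis. The snippet below is a minimal, hypothetical illustration (the regex, the `CONSTRUCTORS` table, and the `analyze` function are invented for this sketch, not taken from the study): it scans C source text for MPI constructor calls that produce a handle, then cross-references later MPI calls that consume that handle.

```python
import re

# Hypothetical sketch: constructors whose output handle is the last argument.
# A real analysis would cover many more constructors (types, groups, windows, ...).
CONSTRUCTORS = {
    "MPI_Comm_split": -1,
    "MPI_Comm_dup": -1,
}

# Match a call like MPI_Xxx(arg1, arg2, ...). Deliberately naive: no nested
# parentheses, no multi-line calls -- enough to illustrate the idea.
CALL_RE = re.compile(r"\b(MPI_\w+)\s*\(([^)]*)\)")

def analyze(source: str) -> dict:
    """Map each constructed handle variable to the MPI calls that use it."""
    handles = {}
    for name, args in CALL_RE.findall(source):
        arglist = [a.strip().lstrip("&") for a in args.split(",") if a.strip()]
        if name in CONSTRUCTORS:
            # Constructor call: record the output handle variable.
            handle = arglist[CONSTRUCTORS[name]]
            handles[handle] = {"constructor": name, "uses": []}
        else:
            # Any other MPI call that receives a known handle is a use site.
            for h in handles:
                if h in arglist:
                    handles[h]["uses"].append(name)
    return handles

code = """
MPI_Comm row;
MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &row);
MPI_Allreduce(&x, &y, 1, MPI_INT, MPI_SUM, row);
MPI_Comm_free(&row);
"""

print(analyze(code))
# → {'row': {'constructor': 'MPI_Comm_split', 'uses': ['MPI_Allreduce', 'MPI_Comm_free']}}
```

Here the constructor's arguments (e.g., the `rank % 2` color passed to `MPI_Comm_split`) can be linked to the collectives that later operate on the derived communicator, which is exactly the kind of cross-interaction a call-matching-only analysis misses.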