As the number of cores per node increases in modern clusters, intra-node communication efficiency becomes critical to application performance. We present a study of the traditional double-copy model in MPICH2 and a kernel-assisted single-copy strategy with KNEM on different shared-memory hosts with up to 96 cores. We show that KNEM suffers less from process placement on these complex architectures. It improves throughput up to a factor of 2 for large messages for both point-to-point and collective operations, and significantly improves NPB execution time. We detail when to switch from one strategy to the other depending on the communication pattern and we show that I/OAT copy offload only appears to be an interesting solution for o...
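As a concrete illustration of the kind of measurement behind such a comparison, the following is a minimal intra-node ping-pong sketch in C (not the paper's own benchmark harness). Run it with two ranks on the same node and compare an MPI build using the default double-copy path against one configured for kernel-assisted copy (e.g. an MPICH build with KNEM enabled) to locate the message size where single-copy starts to win; the message sizes and iteration count are illustrative assumptions.

    /* Minimal intra-node ping-pong sketch; run e.g. mpiexec -n 2 ./pingpong
       with both ranks placed on the same node. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 100;
        for (size_t size = 1; size <= ((size_t)16 << 20); size *= 2) {
            char *buf = malloc(size);
            memset(buf, rank, size);
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (int i = 0; i < iters; i++) {
                if (rank == 0) {
                    MPI_Send(buf, (int)size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, (int)size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(buf, (int)size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, (int)size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double t = (MPI_Wtime() - t0) / (2.0 * iters);  /* one-way time */
            if (rank == 0)
                printf("%10zu bytes  %12.2f MB/s\n", size, size / t / 1e6);
            free(buf);
        }
        MPI_Finalize();
        return 0;
    }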
Two-phase I/O is a well-known strategy for implementing collective MPI-IO functions. It redistribute...
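For context, applications typically reach this two-phase machinery through collective file calls such as MPI_File_write_at_all; the sketch below shows one such call, with the redistribution happening inside the I/O library. The file name, per-rank block size, and offsets are illustrative assumptions, not details from the paper.

    /* Each rank writes one contiguous block at a rank-dependent offset;
       the collective call lets the library aggregate and redistribute the
       requests (the two-phase strategy) before touching the file system. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const MPI_Offset block = 1 << 20;               /* 1 MiB per rank */
        const int count = (int)(block / sizeof(int));
        int *data = malloc(block);
        for (int i = 0; i < count; i++)
            data[i] = rank;

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, rank * block, data, count, MPI_INT,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        free(data);
        MPI_Finalize();
        return 0;
    }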
This work presents and evaluates algorithms for MPI collective communication operations on high perf...
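As a minimal usage example of the kind of operation whose algorithms such work studies, the sketch below performs an MPI_Allreduce; it only illustrates the call site, not any particular algorithm (tree, ring, recursive doubling, ...) evaluated in the paper.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Global sum of one value per rank; the MPI library chooses the
           underlying collective algorithm. */
        double local = (double)rank, global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %g\n", size, global);
        MPI_Finalize();
        return 0;
    }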
MPI is widely used for programming large HPC clusters. MPI also includes persistent operations, whic...
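The persistent point-to-point pattern itself is small: a request is built once with MPI_Send_init/MPI_Recv_init and then restarted every iteration with MPI_Start/MPI_Wait. The sketch below is a generic illustration of that pattern (buffer size and iteration count are arbitrary), not an example taken from the paper.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double buf[1024];
        for (int i = 0; i < 1024; i++)
            buf[i] = (double)i;

        /* Pay the setup cost once, reuse the request in every iteration. */
        MPI_Request req = MPI_REQUEST_NULL;
        if (rank == 0)
            MPI_Send_init(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        else if (rank == 1)
            MPI_Recv_init(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

        for (int iter = 0; iter < 100; iter++) {
            if (rank < 2) {
                MPI_Start(&req);
                MPI_Wait(&req, MPI_STATUS_IGNORE);
            }
        }
        if (rank < 2)
            MPI_Request_free(&req);
        MPI_Finalize();
        return 0;
    }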
The emergence of multicore processors raises the need to efficiently transfer large amounts of data ...
The multiplication of cores in today's architectures raises the importance of ...
This paper presents a method to efficiently place MPI processes on multicore machines. Si...
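As a hedged illustration of one placement-related building block (not the placement method that paper proposes), the sketch below uses MPI_Comm_split_type with MPI_COMM_TYPE_SHARED to let each rank discover which other ranks share its node, information a placement- or topology-aware scheme can build on.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, node_rank, node_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Split COMM_WORLD into one communicator per shared-memory node. */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        printf("global rank %d is local rank %d of %d on its node\n",
               rank, node_rank, node_size);

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }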
More complex memory hierarchies, NUMA architectures, and network-style interconnects are widely used in mod...
Modern processors have multiple cores on a chip to overcome power consumption and heat di...
The increasing number of cores per node in high-performance computing requires...
Multicore or many-core clusters have become the most prominent form of High Performance Computing (H...
To amortize the cost of MPI collective operations, non-blocking collectives ha...
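The basic pattern these non-blocking collectives enable is overlapping a collective with computation that does not depend on its result; the sketch below shows it with MPI_Iallreduce. The placeholder work loop is an illustrative assumption.

    #include <mpi.h>
    #include <stdio.h>

    /* Placeholder for computation that does not need the reduction result. */
    static double independent_work(void)
    {
        double s = 0.0;
        for (int i = 0; i < 1000000; i++)
            s += 1e-6 * i;
        return s;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = (double)rank, global = 0.0;
        MPI_Request req;
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        double other = independent_work();   /* overlap window */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        if (rank == 0)
            printf("reduction = %g, overlapped work = %g\n", global, other);
        MPI_Finalize();
        return 0;
    }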
Many parallel applications from scientific computing use MPI collective communication operations to ...
In the exascale computing era, applications are executed at a larger scale than ever before, which results ...