One of the main hurdles for partitioned global address space (PGAS) approaches is the dominance of MPI, which as a de facto standard appears in the code base of many applications. To take advantage of PGAS APIs such as GASPI without a major change to the code base, interoperability between MPI and PGAS approaches needs to be ensured. In this article we consider an interoperable GASPI/MPI implementation for the communication- and performance-critical parts of the Ludwig and iPIC3D applications. To address the performance limitations we uncovered, we develop a novel strategy that significantly improves performance and interoperability between both APIs by leveraging GASPI shared windows and shared notifications. First results with a corresponding implementation in the MiniGhost proxy...
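The basic interoperability pattern referenced in this abstract can be illustrated with a minimal sketch. Note that this is not the article's shared-window/shared-notification scheme, only the plain side-by-side use of both APIs; it assumes a GPI-2 installation built with MPI support so that MPI_Init is called before gaspi_proc_init, and the segment id, segment size, transfer size and ring-style neighbour exchange are purely illustrative, with error checking omitted.

/* Minimal sketch: GASPI (GPI-2) used alongside MPI in one application.
   Not the shared-window / shared-notification scheme of the article,
   only the basic side-by-side pattern; ids, sizes and the ring exchange
   are illustrative and error checking is omitted. */
#include <mpi.h>
#include <GASPI.h>

#define SEG_ID   0          /* illustrative segment id   */
#define SEG_SIZE (1 << 20)  /* illustrative segment size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);            /* MPI bootstraps the processes     */
    gaspi_proc_init(GASPI_BLOCK);      /* GASPI attaches to the same ranks */

    gaspi_rank_t rank, nranks;
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&nranks);

    /* One RDMA-capable segment per process; MPI buffers stay untouched. */
    gaspi_segment_create(SEG_ID, SEG_SIZE, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    /* Notified one-sided write to the right neighbour (halo-style). */
    gaspi_rank_t right = (rank + 1) % nranks;
    gaspi_write_notify(SEG_ID, 0, right,   /* local seg/offset, target rank */
                       SEG_ID, 0, 4096,    /* remote seg/offset, bytes      */
                       rank, 1,            /* notification id and value     */
                       0, GASPI_BLOCK);    /* queue 0, blocking timeout     */

    /* Wait for (and reset) the notification from the left neighbour. */
    gaspi_rank_t left = (rank + nranks - 1) % nranks;
    gaspi_notification_id_t got;
    gaspi_notification_t    val;
    gaspi_notify_waitsome(SEG_ID, left, 1, &got, GASPI_BLOCK);
    gaspi_notify_reset(SEG_ID, got, &val);

    gaspi_wait(0, GASPI_BLOCK);            /* flush queue 0                 */

    MPI_Barrier(MPI_COMM_WORLD);           /* MPI collectives still work    */

    gaspi_proc_term(GASPI_BLOCK);
    MPI_Finalize();
    return 0;
}

The point of this pattern is that existing MPI collectives and I/O remain unchanged while only the communication-critical exchange is moved to GASPI's notified one-sided writes, which is the kind of incremental porting the abstract argues for.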
The Message Passing Interface (MPI) can be used as a portable, high-performance programming model fo...
For nearly two decades, the Message Passing Interface (MPI) has been an essential part of the Hig...
This work details the opportunities and challenges of porting a petascale-capable, MPI-based applica...
One of the main hurdles of partitioned global address space (PGAS) approaches is the dominance of me...
Current scientific workflows generally consist of several components, either integrated in...
EPiGRAM is a European Commission funded project to improve existing parallel programming models to r...
The Global Address Space Programming Interface (GPI) is the PGAS API developed at the Fraunhofer ITW...
The current middleware stacks provide varying support for the Message Passing Interface (MPI) progra...
In this project we studied the practical use of the MPI message-passing interface in advanced distri...
Hybrid programming combining task-based and message-passing models is an increasingly popular techni...
MPI is the de-facto standard for inter-node communication on HPC systems, and has been for the past ...
With a large variety and complexity of existing HPC machines and uncertainty regarding exact future ...
At the threshold to exascale computing, limitations of the MPI programming model become more and mor...
Modern high performance computing (HPC) applications, for example adaptive mesh refinement and mul...
This paper presents a portable optimization for MPI communications, called PRAcTICaL-MPI (Portable A...