This paper describes the capabilities, evolution, performance, and applications of the Global Arrays (GA) toolkit. GA was created to provide application programmers with an interface that allows them to distribute data while maintaining a global index space and a programming syntax similar to what is available when programming on a single processor. The goal of GA is to free the programmer from low-level management of communication and allow them to deal with their problems at the level at which they were originally formulated. At the same time, compatibility of GA with MPI enables the programmer to take advantage of existing MPI software/libraries when available and appropriate. The variety of applications that have been implem...
Multidimensional arrays are an important data structure in many scientific applications. Unfortunate...
Large scale parallel simulations are fundamental tools for engineers and scientists. Consequently, i...
Proceedings of: Third International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2016...
Portability, efficiency, and ease of coding are all important considerations in choosing the program...
The Message Passing Interface (MPI) is the library-based programming model employed by most scalable...
The NAS Conjugate Gradient (CG) benchmark is an important scientific kernel used to evaluate machine...
The Message Passing Interface (MPI) is the library-based programming model employed by most scalable...
Abstract. The shared memory paradigm provides many benefits to the parallel programmer, particularly w...
This paper discusses a strategy for implementing OpenMP on distributed memory systems that relies on...
In the realm of High Performance Computing (HPC), message passing has been the programming paradigm ...
LAPI is a low-level, high-performance communication interface available on the IBM RS/6000 SP system...
The Partitioned Global Address Space (PGAS) model is a parallel programming model that aims to im-pr...
The Global Address Space Programming Interface (GPI) is the PGAS-API developed at the Fraunhofer ITW...
Scientific programmers must optimize the total time-to-solution, the combination of software develop...
The global address space (GAS) programming model provides important potential productivity advantage...