This paper presents an implementation of the Message Passing Interface called PACX-MPI. The major goal of the library is to support heterogeneous metacomputing for MPI applications by clustering MPPs and PVPs. The key concept of the library is the daemon concept: dedicated daemon nodes handle all external communication between the coupled machines. In this paper we focus on two aspects of the library. First, we show the importance of using optimized algorithms for the global operations in such a metacomputing environment. Second, we discuss whether the daemon nodes used for external communication introduce a bottleneck.

Keywords--- MPI, Metacomputing, Global Operations

I. Why another MPI implementation?

In the last couple of years a large number of tools and libraries have been ...
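To make the first of these two aspects concrete before the discussion begins, the sketch below shows one standard way to optimize a global operation across coupled machines: a hierarchical all-reduce that crosses the slow external link only once per machine instead of once per process. This is an illustrative example only, not PACX-MPI's implementation; the function name, the site_id parameter, and the communicator setup are assumptions.

#include <mpi.h>

/* Illustrative two-level all-reduce (NOT PACX-MPI's actual code).
 * site_id identifies the machine a process runs on and is assumed
 * to be known to the caller. */
void hierarchical_allreduce(const double *sendbuf, double *recvbuf,
                            int count, int site_id, MPI_Comm world)
{
    int rank, local_rank;
    MPI_Comm local, leaders;

    MPI_Comm_rank(world, &rank);

    /* Group the processes of each machine into one communicator. */
    MPI_Comm_split(world, site_id, rank, &local);
    MPI_Comm_rank(local, &local_rank);

    /* Rank 0 of every machine becomes a "leader"; only leaders
     * communicate across the slow external network. */
    MPI_Comm_split(world, local_rank == 0 ? 0 : MPI_UNDEFINED,
                   rank, &leaders);

    /* Step 1: reduce inside each machine over the fast local network. */
    MPI_Reduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM, 0, local);

    /* Step 2: combine the per-machine partial results; exactly one
     * message per machine crosses the external link. */
    if (leaders != MPI_COMM_NULL) {
        MPI_Allreduce(MPI_IN_PLACE, recvbuf, count, MPI_DOUBLE,
                      MPI_SUM, leaders);
        MPI_Comm_free(&leaders);
    }

    /* Step 3: broadcast the global result inside each machine. */
    MPI_Bcast(recvbuf, count, MPI_DOUBLE, 0, local);
    MPI_Comm_free(&local);
}

With p processes per machine and a wide-area latency that dominates the local one, such a scheme replaces p wide-area messages by a single one per machine, which is why the choice of algorithm matters far more in a metacomputing environment than inside a single MPP.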