Writing parallel applications for computational grids is a challenging task. To achieve good performance, algorithms designed for local area networks must be adapted to the differences in link speeds. An important class of algorithms is collective operations, such as broadcast and reduce. We have developed MAGPIE, a library of collective communication operations optimized for wide area systems. MAGPIE's algorithms send the minimal amount of data over the slow wide area links, and incur only a single wide area latency. Using our system, existing MPI applications can be run unmodified on geographically distributed systems. On moderate cluster sizes, using a wide area latency of 10 milliseconds and a bandwidth of 1 MByte/s, MAGPIE execut...
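The key idea in the abstract above (cross the slow wide-area link only once, then finish each step locally) can be illustrated with a small cost model. This is a hedged sketch, not MAGPIE's actual implementation: the latency values besides the 10 ms WAN figure, the cluster sizes, and the function names are illustrative assumptions.

```python
# Illustrative cost model contrasting a flat binomial-tree broadcast with a
# wide-area-aware broadcast that crosses the WAN only once (MAGPIE-style).
# Only the 10 ms WAN latency comes from the abstract; everything else is assumed.
import math

WAN_LATENCY = 10e-3   # 10 ms wide-area latency, as in the abstract
LAN_LATENCY = 0.1e-3  # 0.1 ms local-area latency (assumed)

def flat_broadcast_time(total_nodes: int) -> float:
    """Binomial tree over all nodes, ignoring cluster boundaries.

    The tree takes ceil(log2 p) communication steps; in the worst case
    every step crosses the wide-area link.
    """
    steps = math.ceil(math.log2(total_nodes))
    return steps * WAN_LATENCY

def wide_area_aware_time(clusters: int, nodes_per_cluster: int) -> float:
    """One coordinator per cluster: a single WAN hop reaches all clusters,
    then each cluster completes the broadcast with a local binomial tree."""
    local_steps = math.ceil(math.log2(nodes_per_cluster))
    return WAN_LATENCY + local_steps * LAN_LATENCY

# Example: 4 clusters of 16 nodes each (64 nodes total).
flat = flat_broadcast_time(4 * 16)
aware = wide_area_aware_time(4, 16)
print(f"flat: {flat * 1e3:.1f} ms, wide-area-aware: {aware * 1e3:.1f} ms")
```

Under these assumptions the flat tree pays six WAN latencies (60 ms) while the cluster-aware scheme pays one plus a few cheap local steps (about 10.4 ms), which is the effect the abstract attributes to MAGPIE's algorithms.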
Many parallel applications from scientific computing use MPI collective communication operations to ...
The Message Passing Interface (MPI) is a standard in parallel computing, and can also be used as a h...
In order for collective communication routines to achieve high performance on different platforms, t...
The emergence of meta computers and computational grids makes it feasible to run parallel programs o...
Several MPI systems for Grid environments, in which clusters are connected by wide-area networks, hav...
The authors design and implement a dynamic and effective communication MPI (Message-Passing Interfac...
Collective Communication Operations are widely used in MPI applications and play an important role i...
Metacomputing infrastructures couple multiple clusters (or MPPs) via wide-area networks. A major pro...
Collective communications occupy 20-90% of total execution times in many MPI applications. In this p...
This work presents and evaluates algorithms for MPI collective communication operations on high perf...
Collective communication is an important subset of the Message Passing Interface. Improving the perform...
Parallel computing on clusters of workstations and personal computers has very high potential, since...
Previous studies of application usage show that the performance of collective communications is cr...