Many parallel applications from scientific computing use MPI collective communication operations to collect or distribute data. Since the execution times of these communication operations increase with the number of participating processors, scalability problems can occur. In this article, we show for different MPI implementations how the execution time of collective communication operations can be significantly improved by a restructuring based on orthogonal processor structures with two or more levels. As platforms, we consider a dual Xeon cluster, a Beowulf cluster, and a Cray T3E with different MPI implementations. We show that the execution time of operations like MPI_Bcast or MPI_Allgather can be reduced by 40% and 70% on the dual Xeon ...
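As a rough illustration of the two-level idea, the sketch below arranges the p processes as a ROWS x COLS grid and performs a broadcast in two phases over MPI sub-communicators: the root first broadcasts along its row, then every row leader broadcasts down its column. The grid dimensions, the helper name ortho_bcast, and the exact two-phase schedule are assumptions made for this sketch, not the restructuring used in the article.

#include <mpi.h>
#include <stdio.h>

#define ROWS 4   /* assumed grid height; the sketch requires p == ROWS * COLS */
#define COLS 4   /* assumed grid width */

static void ortho_bcast(void *buf, int count, MPI_Datatype type, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    if (size != ROWS * COLS)
        MPI_Abort(comm, 1);      /* this sketch assumes an exact grid */

    int row = rank / COLS;       /* grid row of this process */
    int col = rank % COLS;       /* grid column of this process */

    MPI_Comm row_comm, col_comm;
    MPI_Comm_split(comm, row, col, &row_comm);   /* processes sharing a row */
    MPI_Comm_split(comm, col, row, &col_comm);   /* processes sharing a column */

    /* Phase 1: the root (grid position (0,0)) broadcasts along its row. */
    if (row == 0)
        MPI_Bcast(buf, count, type, 0, row_comm);

    /* Phase 2: each row leader (the row-0 process of every column)
       broadcasts down its column. */
    MPI_Bcast(buf, count, type, 0, col_comm);

    MPI_Comm_free(&row_comm);
    MPI_Comm_free(&col_comm);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) value = 42;   /* data originates at the root */
    ortho_bcast(&value, 1, MPI_INT, MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, value);
    MPI_Finalize();
    return 0;
}

The intended benefit of such a decomposition is that each phase involves far fewer processes than a single flat broadcast over all p processors, which is the kind of saving the article reports for operations like MPI_Bcast and MPI_Allgather.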
Proceedings of: First International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2014...
We have implemented eight of the MPI collective routines using MPI point-to-point communication routines ... (see the point-to-point broadcast sketch after these excerpts)
We describe a methodology for developing high performance programs running on clusters of SMP no...
This talk discusses optimized collective algorithms and the benefits of leveraging independent hardw...
Further performance improvements of parallel simulation applications will not be reached by simply s...
MPI provides a portable message passing interface for many parallel execution platforms but may lead...
In the exascale computing era, applications are executed at a larger scale than ever before, which results ...
Collective communications occupy 20-90% of total execution times in many MPI applications. In this p...
This work presents and evaluates algorithms for MPI collective communication operations on high perf...
Previous studies of application usage show that the performance of collective communications is cr...
Parallel computing on clusters of workstations and personal computers has very high potential, since...
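One of the excerpts above describes implementing MPI collective routines on top of MPI point-to-point communication. A minimal sketch of that approach for a broadcast is given below, using a binomial-tree schedule over MPI_Send and MPI_Recv; the schedule and the helper name bintree_bcast are illustrative assumptions, not the implementation evaluated in that work.

#include <mpi.h>
#include <stdio.h>

static void bintree_bcast(void *buf, int count, MPI_Datatype type,
                          int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Renumber ranks so that the root becomes virtual rank 0. */
    int vrank = (rank - root + size) % size;

    /* Receive from the parent; the root (vrank 0) has no parent. */
    int mask = 1;
    while (mask < size) {
        if (vrank & mask) {
            int parent = (vrank - mask + root) % size;
            MPI_Recv(buf, count, type, parent, 0, comm, MPI_STATUS_IGNORE);
            break;
        }
        mask <<= 1;
    }

    /* Forward to the children, largest subtree first. */
    mask >>= 1;
    while (mask > 0) {
        if (vrank + mask < size) {
            int child = (vrank + mask + root) % size;
            MPI_Send(buf, count, type, child, 0, comm);
        }
        mask >>= 1;
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) value = 7;    /* data originates at the root */
    bintree_bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, value);
    MPI_Finalize();
    return 0;
}

The binomial tree is a common choice for such point-to-point reimplementations because every process takes part in at most about log2(p) communication steps.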