We use the PSTSWM compact application benchmark code to characterize the performance behavior of interprocessor communication on the SGI/Cray Research Origin 2000 and T3E-900. We measure (1) single-processor performance, (2) point-to-point communication performance, (3) performance variation as a function of communication protocol and transport layer for collective communication routines, and (4) performance sensitivity of the full application code to the choice of parallel implementation. We also compare and contrast these results with similar results for the previous generation of parallel platforms, evaluating how the relative importance of communication performance has changed.
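Point-to-point measurements of the kind described above are commonly summarized with the standard latency/bandwidth (alpha-beta) cost model, T(n) = alpha + beta * n. A minimal sketch of that model follows; the parameter values are illustrative assumptions, not numbers measured on the Origin 2000 or T3E-900:

```python
# Standard alpha-beta model of point-to-point message time:
#   T(n) = alpha + beta * n
# alpha: per-message startup latency (seconds)
# beta:  per-byte transfer time (seconds/byte), i.e. 1 / asymptotic bandwidth.
# The default values below are illustrative only.

def message_time(n_bytes, alpha=10e-6, beta=1 / 300e6):
    """Predicted one-way time for an n-byte message."""
    return alpha + beta * n_bytes

def effective_bandwidth(n_bytes, alpha=10e-6, beta=1 / 300e6):
    """Achieved bandwidth in bytes/s; approaches 1/beta for large messages."""
    return n_bytes / message_time(n_bytes, alpha, beta)

# Short messages are latency-dominated, long messages bandwidth-dominated:
print(effective_bandwidth(8))        # tiny message: far below peak
print(effective_bandwidth(1 << 20))  # 1 MiB message: close to 1/beta
```

Fitting alpha and beta from a ping-pong benchmark at several message sizes is the usual way such machine comparisons are reduced to two numbers per platform.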
The performance of collective communication is critical to the overall system performance. In gene...
Most parallel and sequential applications achieve a low percentage of the theoretical peak performan...
In this paper, we describe experiments comparing the communication times for a number of different...
In this paper the parallel benchmark code PSTSWM is used to evaluate the performance of the vendor-s...
In this paper we investigate some of the important factors which affect the message-passing performa...
The objective of this work is to compare the performance of three common environments for supporting...
In this paper we investigate some of the important factors which affect the message-passing performa...
Clusters of workstations are a popular platform for high-performance computing. For many parallel ap...
We have implemented eight of the MPI collective routines using MPI point-to-point communication rou...
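A common way to build a collective such as broadcast from point-to-point operations is a binomial tree, which completes in ceil(log2(P)) rounds. The sketch below only simulates the send/receive pattern in plain Python; the rank arithmetic stands in for MPI_Send/MPI_Recv calls and is an illustrative assumption, not the implementation from the paper above:

```python
# Binomial-tree broadcast built from point-to-point sends (simulated).
# In round k, every rank that already holds the data forwards it to the
# rank 2**k positions away (in rank order relative to the root).

def binomial_broadcast(nprocs, root=0):
    """Return the (sender, receiver, round) point-to-point messages needed
    to broadcast from `root` to all `nprocs` ranks."""
    has_data = {root}
    messages = []
    k = 0
    while len(has_data) < nprocs:
        for src in sorted(has_data):
            rel = (src - root) % nprocs      # rank relative to the root
            dst_rel = rel + (1 << k)
            if dst_rel < nprocs:
                dst = (dst_rel + root) % nprocs
                messages.append((src, dst, k))
        has_data.update(d for (_, d, r) in messages if r == k)
        k += 1
    return messages

msgs = binomial_broadcast(8)
print(msgs)  # 7 messages spread over 3 rounds
```

The same pattern generalizes to other collectives (reduce runs the tree in reverse); comparing such point-to-point constructions against the vendor-supplied collectives is exactly the kind of experiment the abstract describes.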
Interprocessor communication overhead is a crucial measure of the power of parallel computing system...
A benchmark test using the Message Passing Interface (MPI, an emerging standard for writing message ...
The goal of high performance computing is executing very large problems in the least amount of time,...
This paper presents scalability and communication performance results for a cluster of PCs running ...
This work provides a systematic study of the impact of communication performance on parallel applic...
The authors report on an experimental investigation of a parallel implementation of OSI protocol sof...