Understanding the message-passing behavior and network resource usage of distributed-memory message-passing parallel applications is critical to achieving high performance and scalability. While much research has focused on how applications use critical compute-related resources, relatively little attention has been devoted to characterizing the usage of network resources, specifically those needed by the network interface. This paper discusses the importance of understanding network interface resource usage requirements for parallel applications and describes an initial attempt to gather network resource usage data for several real-world codes. The results show widely varying usage patterns between processes in the same parallel job and ind...
Clusters of workstations are a popular platform for high-performance computing. For many parallel ap...
With processor speeds no longer doubling every 18-24 months owing to the exponential increase in pow...
In modern MPI applications, communication between separate computational nodes quickly adds up to a s...
High performance computing can be associated with a method to improve the performance of an applica...
Modern cluster interconnection networks rely on processing on the network interface to deliver highe...
As commodity components continue to dominate the realm of high-end computing, two hardware trends ha...
Most parallel and sequential applications achieve a low percentage of the theoretical peak performan...
A benchmark test using the Message Passing Interface (MPI, an emerging standard for writing message ...
The study of the performance of parallel applications may have different reaso...
This paper gives an overview of two rel...
In this paper, we describe experiments comparing the communication times for a number of different...
A case study was conducted to examine the performance and portability of parallel applications, with...
A commonly seen behavior of parallel applications is that their runtime is influenced by network com...
Over the last few decades, Message Passing Interface (MPI) has become the parallel-communication sta...
Moving data between processes has often been discussed as one of the major bottlenecks in parallel c...
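Several of the abstracts above center on measuring the cost of moving data between processes. As a minimal, self-contained sketch of that kind of measurement, the following uses Python's standard `multiprocessing` module (standing in for MPI, which these papers actually study) to time a ping-pong exchange at several message sizes. The function names and sizes here are illustrative choices, not taken from any of the cited works.

```python
import time
from multiprocessing import Process, Pipe


def echo(conn):
    # Child process: receive each message and send it straight back
    # (the classic ping-pong pattern used in communication benchmarks).
    while True:
        msg = conn.recv()
        if msg is None:
            break
        conn.send(msg)


def round_trip_time(payload_bytes, iters=100):
    # Measure the mean round-trip time for a payload of the given size.
    parent, child = Pipe()
    p = Process(target=echo, args=(child,))
    p.start()
    payload = b"x" * payload_bytes
    start = time.perf_counter()
    for _ in range(iters):
        parent.send(payload)
        parent.recv()
    elapsed = time.perf_counter() - start
    parent.send(None)  # signal the child to exit
    p.join()
    return elapsed / iters  # seconds per round trip


if __name__ == "__main__":
    for size in (1, 1024, 1024 * 1024):
        print(f"{size:>8} B: {round_trip_time(size):.6f} s per round trip")
```

Plotting round-trip time against message size separates per-message latency (the flat region for small payloads) from bandwidth-limited transfer cost (the linear region for large ones), which is the same decomposition the MPI benchmarking papers above rely on.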