As many scientific applications must process large volumes of data, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the key features of parallel I/O, enabling application programmers to handle their large data volumes easily. In this paper we measured and analyzed the performance of the original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. The experimental results showed that the subgroup method performs well for small data sizes.
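To make the two approaches concrete, below is a minimal sketch in C of both patterns, under the assumption that the subgroup method means partitioning MPI_COMM_WORLD into smaller communicators with MPI_Comm_split and issuing the collective write inside each subgroup; the group size, buffer size, and file names are illustrative choices, not values from the paper.

```c
/* Sketch (not the paper's code): baseline MPI collective I/O vs. a
 * subgroup variant that splits the world communicator first. */
#include <mpi.h>
#include <stdio.h>

#define COUNT 1024    /* ints written per process (assumed size)   */
#define GROUP_SIZE 4  /* processes per subgroup (assumed grouping) */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int buf[COUNT];
    for (int i = 0; i < COUNT; i++) buf[i] = rank;

    /* Baseline: one collective write across all processes. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "all.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset off = (MPI_Offset)rank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, off, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* Subgroup variant: split into groups of GROUP_SIZE ranks and
     * run the collective write inside each smaller communicator. */
    MPI_Comm sub;
    MPI_Comm_split(MPI_COMM_WORLD, rank / GROUP_SIZE, rank, &sub);
    int srank;
    MPI_Comm_rank(sub, &srank);

    char name[64];
    snprintf(name, sizeof(name), "group%d.dat", rank / GROUP_SIZE);
    MPI_File_open(sub, name, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    off = (MPI_Offset)srank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, off, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Comm_free(&sub);
    MPI_Finalize();
    return 0;
}
```

Splitting the communicator reduces the number of processes that must synchronize and exchange data in each collective call, which is one plausible reason the subgroup method could help at small data sizes, where coordination overhead dominates transfer time.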