We present a new design pattern for high-performance parallel scientific software, named coalesced communication. This pattern provides a structured way to improve communication performance by coalescing multiple communication needs through two communication management components. We apply the design pattern to several simulations of a lattice-Boltzmann blood flow solver with streaming visualisation, yielding a reduction in communication overhead of approximately 40%.
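The core idea of coalescing is that many small logical messages bound for the same destination are buffered by a management component and flushed as one physical message. A minimal illustrative sketch of that idea follows; the class and method names are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of communication coalescing: rather than issuing
# one send per data item, a manager buffers items destined for the same
# rank and flushes them as a single combined message. All names here
# are hypothetical, not taken from the paper.

class CoalescingManager:
    def __init__(self, transport):
        # transport: callable(dest_rank, payload_bytes) that performs
        # the actual send (e.g. an MPI send in a real solver).
        self.transport = transport
        self.pending = {}  # dest rank -> list of queued payloads

    def post(self, dest, payload):
        """Queue a payload instead of sending it immediately."""
        self.pending.setdefault(dest, []).append(payload)

    def flush(self):
        """Send one combined message per destination rank."""
        for dest, payloads in self.pending.items():
            self.transport(dest, b"".join(payloads))
        self.pending.clear()


# Usage: three logical messages to rank 1 become one physical send.
sent = []
mgr = CoalescingManager(lambda dest, buf: sent.append((dest, buf)))
for chunk in (b"aa", b"bb", b"cc"):
    mgr.post(dest=1, payload=chunk)
mgr.flush()
```

In a real solver the receiver would need framing metadata (offsets or lengths) to split the combined buffer back into logical messages; that bookkeeping is omitted here for brevity.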
Buffered Co-Scheduled (BCS) MPI proposes a new approach to design the communication libraries for la...
We present the design and implementation of InterComm, a framework to couple parallel components tha...
Parallel applications commonly face the problem of sitting idle while waiting for remote data to bec...
Multicomputers (distributed-memory MIMD machines) have emerged as inexpensive, yet powerful parallel...
Communication coalescing is a static optimization that can reduce both communication frequency and r...
BCS MPI proposes a new approach to design the communication libraries for large scale parallel machi...
Parallelizing large-sized problems on parallel systems has always been a challenge for programmers. Th...
Many parallel algorithms exhibit a hypercube communication topology. Such algorithms can easily be e...
A workable approach for modernizing existing software into parallel/distributed applicat...
In this book chapter, the authors discuss some important communication issues to obtain a highly sca...
The current trends in high performance computing show that large machines with tens of thousands of ...
languages, models of communication, irregular communication patterns, unstructured process composit...
This paper describes a number of optimizations that can be used to support the efficient execution o...
In this paper we describe one experiment in which a new coordination language, called Manifold, is u...
We present MATE, a new model for developing communication-tolerant scientific applications. MATE emp...