In this paper we propose an API to pause and resume task execution depending on external events. We leverage this generic API to improve the interoperability between MPI synchronous communication primitives and tasks. When an MPI operation blocks, the running task is paused so that the runtime system can schedule a new task on the core that became idle. Once the MPI operation completes, the paused task is placed back on the runtime system's ready queue. We expose our proposal through a new MPI threading level, which we implement through two approaches. The first approach is an MPI wrapper library that works with any MPI implementation by intercepting MPI synchronous calls and implementing them on top of their asynchronous counterparts. In thi...
The current MPI model defines a one-to-one relationship between MPI processes and MPI ranks. This mo...
Dynamic verification methods are the natural choice for formally verifying real world programs when m...
Understanding the behavior of parallel applications that use the Message Passing Interface (MPI) is ...
In this paper we present the Task-Aware MPI library (TAMPI) that integrates both blocking and non-bl...
Hybrid programming combining task-based and message-passing models is an increasingly popular techni...
When aiming for large scale parallel computing, waiting time due to network la...
Many-core and heterogeneous architectures now require programmers to compose m...
Editors: Michael Klemm; Bronis R. de Supinski et al. Heterogeneous supercompute...
In this report we describe how to improve communication time of MPI parallel applications with the u...
We propose extensions to the Message-Passing Interface (MPI) Standard that provide for dynamic proce...
The new generation of parallel applications is complex, involves simulation of dynamically varying s...
The Message Passing Interface is one of the most well known parallel programming libraries...
Fine-Grain MPI (FG-MPI) extends the execution model of MPI to allow for interleaved executi...