In order to further explore the capabilities of parallel computing architectures such as grids, clusters, multi-processors and, more recently, clouds and multi-cores, an easy-to-use parallel language remains an important challenge. From the programmer's point of view, OpenMP is very easy to use, with its support for incremental parallelization and its features for dynamically setting the number of threads and the scheduling strategy. However, as it was initially designed for shared-memory systems, OpenMP is usually limited to intra-node computations on distributed-memory systems. Many attempts have been made to port OpenMP to distributed systems. The most successful approaches mainly focus on exploiting the capabilities of a special network architecture...
To provide increasing computational power for numerical simulations, supercomputers evolved and aren...
The solution of sparse systems of linear equations is at the heart of numerous application fields. Wh...
Following the loss of Dennard scaling, computing systems have become increasingly heterogeneous by t...
OpenMP and MPI have become the standard tools to develop parallel programs on shared-memory and dist...
Checkpointing Aided Parallel Execution (CAPE) is the paradigm we developed to ...
Checkpointing-Aided Parallel Execution (CAPE) is a framework that is based on ...
Current parallel architectures integrate many-core processors with growing shared memory and...
MUMPS is a parallel sparse direct solver, using message passing (MPI) for parallelism. In this repor...
With the advent of multicore and manycore processors as building blocks of HPC supercomputers, many ...
Hardware performance has been increasing through the addition of computing cores rather than through...
Jury composition: Frédéric Desprez, Member/President; Jean-François Méhaut, Member/...
The continuous evolution of computer architectures has been an important driver of research in code ...