The solution of sparse systems of linear equations is at the heart of numerous application fields. While the amount of computational resources in modern architectures increases and offers new perspectives, the size of the problems arising in today's numerical simulation applications also grows substantially. Exploiting modern architectures to solve very large problems efficiently is thus a challenge, from both a theoretical and an algorithmic point of view. The aim of this thesis is to address the scalability of sparse direct solvers based on multifrontal methods in parallel asynchronous environments. In the first part of this thesis, we focus on exploiting multi-threaded parallelism on shared-memory architectures. A variant of the Geist-Ng algorithm is...
Following the loss of Dennard scaling, computing systems have become increasingly heterogeneous by t...
MUMPS is a parallel sparse direct solver, using message passing (MPI) for parallelism. In this repor...
In order to achieve performance gains, computers have evolved to multi-core and many-core platforms ...
We consider the solution of very large sparse systems of linear equations on parallel architectures....
We consider the solution of very large sparse systems of linear equations on parallel architectures....
We introduce shared-memory parallelism in a parallel distributed-memory solver...
We study the adaptation of a parallel distributed-memory solver towards a shared-memory code, target...
We describe how to enhance parallelism in an asynchronous distributed-memory e...
To solve sparse systems of linear equations, multifrontal methods rely on dense partial LU decomposi...
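The role of dense partial LU decomposition in a multifrontal method can be illustrated with a minimal sketch: the first k variables of a dense frontal matrix are eliminated, and the trailing block becomes the Schur complement (the contribution block passed up to the parent front). This is a self-contained, unpivoted toy in plain Python, not the solver's actual kernel, and the function name `partial_lu` is chosen here for illustration.

```python
# Sketch: partial LU factorization of a dense "frontal" matrix, as used in
# multifrontal methods. The leading k variables are eliminated; the trailing
# block is overwritten with the Schur complement S = A22 - A21 * A11^{-1} * A12,
# which is the contribution block assembled into the parent front.
# No pivoting, plain Python lists -- for illustration only.

def partial_lu(F, k):
    """Factor the leading k x k block of the dense square matrix F in place.

    After the call, F holds the L (unit lower) and U factors in its
    leading k rows/columns, and the Schur complement in the trailing block.
    """
    n = len(F)
    for p in range(k):                      # eliminate pivot variable p
        piv = F[p][p]
        for i in range(p + 1, n):
            F[i][p] /= piv                  # column of L
            for j in range(p + 1, n):       # rank-1 update of trailing block
                F[i][j] -= F[i][p] * F[p][j]
    return F

# Tiny example: a 3x3 front, one variable eliminated, 2x2 Schur complement.
F = [[2.0, 1.0, 1.0],
     [4.0, 5.0, 3.0],
     [2.0, 2.0, 4.0]]
partial_lu(F, 1)
# The trailing 2x2 block of F now holds the Schur complement [[3, 1], [1, 3]].
```

In an actual multifrontal solver the Schur complement is not factored immediately; it is kept as a contribution block and summed into the parent frontal matrix during assembly, which is what makes these dense partial factorizations the dominant computational kernel.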
Direct methods for the solution of sparse systems of linear equations are used in a wide range of nu...
Advances in computational power have led to many developments in science and its applications. Solvi...
The memory usage of sparse direct solvers can be the bottleneck to solve large-scale problems involv...
The memory usage of sparse direct solvers can be the bottleneck to solve large-scale problems. This ...
We are interested in the solution of very large sparse linear systems by meth...