In the context of solving sparse linear systems, an ordering process partitions the matrix graph to minimize both fill-in and computational cost. We found that the ordering strategy used within supernodes can be enhanced to reduce the number of off-diagonal blocks, and thus to increase block sizes and kernel performance. This comes at a cost of the same complexity as the factorization algorithm, but allows for more efficient BLAS kernels. On the other hand, supernodes that are too large need to be split to create more parallelism. The regular splitting strategy, when applied locally, significantly affects the number of off-diagonal blocks and might have a negative effect on efficiency. In this talk, we present both a new...
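To make the link between intra-supernode ordering and block sizes concrete, here is a minimal sketch (not the solver's actual data structures, and the function name is hypothetical): for a toy symbolic structure where a supernode's contribution is described by the set of row indices it touches, each maximal run of contiguous rows becomes one off-diagonal block, so an ordering that makes those rows contiguous yields fewer, larger blocks for the BLAS kernels.

```python
def count_off_diagonal_blocks(row_indices):
    """Count maximal contiguous runs in a set of row indices.

    Each run corresponds to one off-diagonal block in the supernodal
    structure; fewer runs means fewer, larger blocks.
    """
    rows = sorted(set(row_indices))
    if not rows:
        return 0
    blocks = 1
    for prev, cur in zip(rows, rows[1:]):
        if cur != prev + 1:  # a gap in the row indices starts a new block
            blocks += 1
    return blocks

# Scattered contributions produce many small blocks...
print(count_off_diagonal_blocks([10, 12, 14, 20, 22]))  # -> 5 blocks
# ...while an ordering that makes the rows contiguous yields a single block.
print(count_off_diagonal_blocks([10, 11, 12, 13, 14]))  # -> 1 block
```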
The emergence of multicore architectures and highly scalable platforms motivates the development of ...
Sparse direct solving is a time-consuming operation required by many scientifi...
It is important to have a fast, robust and scalable algorithm to solve a sparse linear system AX=B. ...
Among the preprocessing steps of a sparse direct solver, reordering and block ...
Solving sparse linear systems is a problem that arises in many scientific applications, and sparse d...
This paper presents two approaches using a Block Low-Rank (BLR) compression te...
When solving large sparse linear systems, both the amount of memory needed and...
This report has been developed from the work done in the deliverable [Nava94]. There it was shown tha...
We will discuss challenges in building clusters for the Block Low-Rank (BLR) a...
This work presents a strategy to increase the arithmetic intensity of the solvers. Namely, we...
Solving sparse linear systems is a problem that arises in many scientific appl...
Sparse direct solvers play a vital role in large-scale, high-performance scientific and engineering c...
Low-rank compression techniques are very promising for reducing memory footpri...