In this talk, we present the PaStiX sparse supernodal solver, which uses hierarchical compression to reduce the burden of the large blocks appearing during the nested dissection process. We compare the numerical stability and the performance, in terms of memory consumption and time to solution, of different approaches determined by when the compression of the factorized matrix occurs. To improve the efficiency of the sparse update kernel for both BLR (block low-rank) and HODLR (hierarchically off-diagonal low-rank) formats, we investigate the BDLR (boundary distance low-rank) method to preselect rows and columns in the low-rank approximation algorithm.
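The core operation behind BLR and HODLR formats is replacing an admissible off-diagonal block by a pair of thin factors when its numerical rank is small. As an illustration only (not the PaStiX kernel itself, which relies on rank-revealing factorizations such as SVD or RRQR variants), here is a minimal sketch of tolerance-driven block compression with a truncated SVD; the kernel, sizes, and tolerance below are illustrative assumptions:

```python
import numpy as np

def compress_block(A, tol=1e-8):
    """Approximate a dense block A by low-rank factors U, V so that
    A ~= U @ V up to the relative tolerance tol. Returns None when
    storing the factors would cost more memory than the dense block."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep singular values above the relative tolerance.
    rank = int(np.sum(s > tol * s[0])) if s[0] > 0 else 0
    m, n = A.shape
    if rank * (m + n) >= m * n:
        return None  # block is not compressible at this tolerance
    return U[:, :rank] * s[:rank], Vt[:rank, :]

# Example: an interaction block between two well-separated point sets,
# which is numerically low-rank for a smooth kernel like 1/|x - y|.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 200)
A = 1.0 / np.abs(x[:, None] - y[None, :])

U, V = compress_block(A, tol=1e-10)
rel_err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
# rel_err stays around the requested tolerance; the rank of the
# factors is far below min(A.shape), which is the memory saving
# that BLR/HODLR solvers exploit at scale.
```

The preselection idea mentioned above (BDLR) changes how such a compression is computed, not its output format: rows and columns likely to matter are chosen up front instead of being discovered by the factorization.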
We will discuss challenges in building clusters for the Block Low-Rank (BLR) a...
When solving large sparse linear systems, both the amount of memory needed and...
Solving sparse linear systems is a problem that arises in many scientific applications, and sparse d...
This paper presents two approaches using a Block Low-Rank (BLR) compression technique to reduce the ...
Low-rank compression techniques are very promising for reducing memory footpri...
Through the recent improvements toward exascale supercomputer systems, huge computations can be perf...
In this talk, we describe a preliminary fast direct solver using HODLR library...
In the context of solving sparse linear systems, an ordering process partition...
Hierarchically semiseparable (HSS) matrix algorithms are emerging techniques in constructing the sup...
Sparse direct solvers using Block Low-Rank compression have been proven effici...
In this talk, we present the use of PaStiX sparse direct solver in a Schwarz method...