An efficient data structure is presented which supports general unstructured sparse matrix-vector multiplications on a Distributed Array of Processors (DAP). The approach seeks to reduce inter-processor data movement and organises the operations into batches of massively parallel steps by means of a heuristic scheduling procedure performed on the host computer. The resulting data structure is of particular relevance to iterative schemes for solving linear systems. Performance results for matrices taken from well-known Linear Programming (LP) test problems are presented and analysed.
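The abstract does not spell out the data structure itself, but the kernel it targets is the standard sparse matrix-vector product y = Ax used inside iterative linear solvers. As a point of reference only, the sketch below shows that kernel in compressed sparse row (CSR) form in C; the CSR layout and the names csr_matrix and csr_spmv are illustrative assumptions and do not reproduce the DAP-specific storage or the batched massively parallel scheduling described above.

#include <stdio.h>

/* Minimal CSR (compressed sparse row) sparse matrix-vector product y = A*x.
 * Generic sequential reference kernel; the paper's DAP data structure and
 * host-side heuristic scheduling are not reproduced here. */
typedef struct {
    int     n;        /* number of rows                                   */
    int    *row_ptr;  /* length n+1: start of each row in col_idx / val   */
    int    *col_idx;  /* column index of each nonzero                     */
    double *val;      /* value of each nonzero                            */
} csr_matrix;

void csr_spmv(const csr_matrix *A, const double *x, double *y)
{
    for (int i = 0; i < A->n; i++) {
        double sum = 0.0;
        for (int k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++)
            sum += A->val[k] * x[A->col_idx[k]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example:  [2 0 1; 0 3 0; 4 0 5]  */
    int    row_ptr[] = {0, 2, 3, 5};
    int    col_idx[] = {0, 2, 1, 0, 2};
    double val[]     = {2, 1, 3, 4, 5};
    csr_matrix A = {3, row_ptr, col_idx, val};

    double x[] = {1, 1, 1}, y[3];
    csr_spmv(&A, x, y);
    printf("%g %g %g\n", y[0], y[1], y[2]);  /* expected: 3 3 9 */
    return 0;
}

In an iterative scheme such as the conjugate gradient method this product is evaluated once or twice per iteration, which is why a parallel layout that limits inter-processor data movement dominates overall performance.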