We have implemented the Bernoulli generic programming system for sparse matrix computations. What distinguishes it from existing generic sparse matrix libraries is that we use (i) a high-level matrix abstraction for writing generic matrix programs, (ii) a low-level matrix abstraction for describing the indexing structure and properties of sparse matrix formats, and (iii) restructuring compiler technology to transform the high-level generic programs into concrete implementations that efficiently access sparse matrices using the low-level abstraction. This paper describes the Bernoulli Generic Matrix Library (BGML). The BGML is the C++ implementation of these high-level and low-level abstractions. Within our system, it serves as the ``...
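To make the two-level split concrete, the following is a minimal C++ sketch of the idea: a generic matrix-vector product written only against an abstract nonzero-enumeration interface, with a toy coordinate-format matrix behind it. The class and function names are purely illustrative and are not BGML's actual abstractions; in the Bernoulli system the specialization is performed by a restructuring compiler rather than by hand-written templates as shown here.

```cpp
// Illustrative only: a hypothetical two-level design in the spirit of the
// high-level/low-level split described above, NOT the actual BGML classes.
#include <cstddef>
#include <iostream>
#include <vector>

// Low-level view of a sparse matrix: enumeration of stored nonzeros,
// without committing to a particular storage format.
struct Entry { std::size_t row, col; double val; };

// A toy coordinate-format matrix that models the low-level abstraction.
class CoordMatrix {
public:
    CoordMatrix(std::size_t rows, std::size_t cols) : rows_(rows), cols_(cols) {}
    void insert(std::size_t i, std::size_t j, double v) { entries_.push_back({i, j, v}); }
    const std::vector<Entry>& nonzeros() const { return entries_; }
    std::size_t rows() const { return rows_; }
private:
    std::size_t rows_, cols_;
    std::vector<Entry> entries_;
};

// High-level generic program: a matrix-vector product written against the
// abstraction only; specialization for the concrete format is left to the
// template mechanism here (to a restructuring compiler in Bernoulli).
template <typename Matrix>
std::vector<double> mvm(const Matrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.rows(), 0.0);
    for (const Entry& e : A.nonzeros())      // only stored nonzeros are visited
        y[e.row] += e.val * x[e.col];
    return y;
}

int main() {
    CoordMatrix A(2, 2);
    A.insert(0, 0, 2.0);
    A.insert(1, 1, 3.0);
    std::vector<double> y = mvm(A, {1.0, 1.0});
    std::cout << y[0] << " " << y[1] << "\n";  // prints "2 3"
}
```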
We present compiler technology for synthesizing sparse matrix code from (i) dense matrix code, and (...
SparseTool is a collection of simple and efficient classes for manipulating large vectors and large ...
In this paper, we propose a generic method of automatic parallelization for sp...
Sparse matrices are stored in compressed formats in which zeros are not stored explicitly. Writing h...
Sparse matrices are stored in compressed formats in which zeros are not stored explicitly. Writing h...
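As a concrete illustration of the compressed formats mentioned in the entries above, here is a minimal sketch of compressed sparse row (CSR) storage; the struct and field names are illustrative and do not come from any of the libraries listed here.

```cpp
// Minimal sketch of one compressed format (CSR); names are illustrative.
#include <cstddef>
#include <iostream>
#include <vector>

// Compressed sparse row: only nonzero values are stored, together with
// their column indices and per-row offsets into those arrays.
struct CSRMatrix {
    std::size_t num_rows = 0;
    std::vector<std::size_t> row_ptr;  // size num_rows + 1
    std::vector<std::size_t> col_idx;  // column index of each stored nonzero
    std::vector<double>      values;   // the nonzero values themselves
};

int main() {
    // The 3x3 matrix
    //   [ 5 0 0 ]
    //   [ 0 0 8 ]
    //   [ 0 7 0 ]
    // stores only its three nonzeros; the six zeros are never represented.
    CSRMatrix A;
    A.num_rows = 3;
    A.row_ptr = {0, 1, 2, 3};   // row i occupies [row_ptr[i], row_ptr[i+1])
    A.col_idx = {0, 2, 1};
    A.values  = {5.0, 8.0, 7.0};
    std::cout << A.values.size() << " stored nonzeros\n";  // prints "3 stored nonzeros"
}
```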
We describe an object oriented sparse matrix library in C++ designed for portability and performance...
We have developed a framework based on relational algebra for compiling efficient sparse matrix cod...
Sparse matrix computations are ubiquitous in computational science. However, the development of high...
We discuss the interface design for the Sparse Basic Linear Algebra Subprograms (BLAS), the kernels ...
We present a generic programming methodology for expressing data structures and algorithms for high-...
Sparse matrix formats encode very large numerical matrices with relatively few nonzeros. They are ty...
The multiplication of a sparse matrix with a dense vector is a performance critical computational ke...
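For reference, the sparse matrix-vector product discussed in the previous entry has the following textbook form for a CSR matrix; this is a generic sketch, not code taken from the cited work. The indirect access x[col_idx[k]] is one reason the kernel is performance-critical and hard to optimize.

```cpp
// Textbook CSR sparse matrix-vector product (y = A * x); a sketch only.
#include <cstddef>
#include <iostream>
#include <vector>

std::vector<double> spmv_csr(const std::vector<std::size_t>& row_ptr,
                             const std::vector<std::size_t>& col_idx,
                             const std::vector<double>& values,
                             const std::vector<double>& x) {
    const std::size_t n = row_ptr.size() - 1;    // number of rows
    std::vector<double> y(n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            y[i] += values[k] * x[col_idx[k]];   // gather through col_idx
    return y;
}

int main() {
    // Same 3x3 example as in the CSR sketch above: nonzeros 5, 8, 7.
    std::vector<std::size_t> row_ptr = {0, 1, 2, 3};
    std::vector<std::size_t> col_idx = {0, 2, 1};
    std::vector<double> values = {5.0, 8.0, 7.0};
    std::vector<double> y = spmv_csr(row_ptr, col_idx, values, {1.0, 1.0, 1.0});
    std::cout << y[0] << " " << y[1] << " " << y[2] << "\n";  // prints "5 8 7"
}
```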
The goal of the LAPACK project is to provide efficient and portable software for dense numerical lin...
Sparse matrix computations arise in many scientific computing problems and for some (e.g.: iterative...
This bachelor's thesis presents a library for storing and working with matrices. In this...