We have developed a framework based on relational algebra for compiling efficient sparse matrix code from dense DO-ANY loops and a specification of the representation of the sparse matrix. In this paper, we show how this framework can be used to generate parallel code, and present experimental data demonstrating that the code generated by our Bernoulli compiler achieves performance competitive with that of hand-written codes for important computational kernels.
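To make the compilation problem concrete, here is a minimal sketch (not the Bernoulli compiler's actual output) of the transformation such a framework performs: a dense loop nest for sparse matrix-vector product, and the equivalent code specialized to a CSR (compressed sparse row) representation, where only stored nonzeros are visited.

```python
# Dense "DO-ANY" loop: y = A @ x, visiting every entry of A,
# including the zeros. This is the input the compiler sees.
def spmv_dense(A, x):
    n = len(A)
    y = [0.0] * n
    for i in range(n):
        for j in range(len(A[i])):
            y[i] += A[i][j] * x[j]
    return y

# The same computation specialized to CSR: values holds the nonzeros,
# col_idx their column indices, and row_ptr[i]:row_ptr[i+1] delimits
# the nonzeros of row i. Only stored entries are touched.
def spmv_csr(values, col_idx, row_ptr, x, n):
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

The point of a sparse compiler is to derive the second loop nest automatically from the first, given only a description of the CSR format.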
This paper describes two portable packages for general-purpose sparse matrix computations: SPARSKIT...
Sparse matrices are stored in compressed formats in which zeros are not stored explicitly. Writing h...
This paper presents a combined compile-time and runtime loop-carried dependence analysis of sparse m...
Sparse matrix computations are ubiquitous in computational science. However, the development of high...
Sparse matrix formats encode very large numerical matrices with relatively few nonzeros. They are ty...
We present compiler technology for synthesizing sparse matrix code from (i) dense matrix code, and (...
This work presents a novel strategy for the parallelization of applications containing sparse matrix...
Matrix computations lie at the heart of most scientific computational tasks. The solution of linear ...
Space-efficient data structures for sparse matrices typically yield programs in which not all data d...
In this paper, we propose a generic method of automatic parallelization for sp...
Standard restructuring compiler tools are based on polyhedral algebra and cannot be used to analyze ...
We describe a novel approach to sparse and dense SPMD code generation: we view arrays (sparse ...
Automatic program comprehension techniques have been shown to improve automatic parallelization of d...
Space-efficient data structures for sparse matrices are an important concept in numerical programmin...