Space-efficient data structures for sparse matrices typically yield programs in which not all data dependencies can be determined at compile time. Automatic parallelization of such codes is usually done at run time, e.g. by applying the inspector-executor technique, incurring tremendous overhead. Program comprehension techniques have been shown to improve automatic parallelization of dense matrix computations. We investigate how this approach can be generalized to sparse matrix codes. We propose a speculative program comprehension and parallelization method. Placement of parallelized run-time tests is supported by a static data flow analysis framework.
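A minimal sketch of the inspector-executor pattern mentioned above, assuming a loop that accumulates through an index array; the names idx, x, y and the greedy colouring inspector are illustrative, not taken from the paper:

    #include <stdlib.h>

    /* Inspector: run once when idx[] is known; assign each iteration a
       colour so that no two iterations of one colour write the same
       x[idx[i]]. Returns the number of colours used. */
    int inspector(int n, int m, const int *idx, int *color) {
        int *last = malloc(m * sizeof *last); /* last colour writing x[j] */
        for (int j = 0; j < m; j++) last[j] = -1;
        int ncolors = 0;
        for (int i = 0; i < n; i++) {
            color[i] = last[idx[i]] + 1;  /* first free colour for this target */
            last[idx[i]] = color[i];
            if (color[i] + 1 > ncolors) ncolors = color[i] + 1;
        }
        free(last);
        return ncolors;
    }

    /* Executor: iterations of one colour touch distinct x entries,
       so each colour's sweep can run in parallel (e.g. with OpenMP). */
    void executor(int n, const int *idx, double *x, const double *y,
                  int ncolors, const int *color) {
        for (int c = 0; c < ncolors; c++) {
            #pragma omp parallel for
            for (int i = 0; i < n; i++)
                if (color[i] == c)
                    x[idx[i]] += y[i];
        }
    }

The inspector must rerun whenever the index structure changes; that per-pattern cost is the run-time overhead the abstract refers to.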
Sparse matrices are stored in compressed formats in which zeros are not stored explicitly. Writing h...
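For concreteness, a sketch of one widely used compressed format, CSR (compressed sparse row); the struct and field names are conventional, not taken from this abstract:

    typedef struct {
        int n;        /* number of rows */
        int *rowptr;  /* row i's entries sit at positions rowptr[i]..rowptr[i+1]-1 */
        int *colind;  /* column index of each stored nonzero */
        double *val;  /* value of each stored nonzero */
    } csr_t;

For example, the 4x4 matrix with nonzeros 5 at (0,0), 8 at (1,1), 3 at (2,2) and 6 at (3,1) is stored as rowptr = {0,1,2,3,4}, colind = {0,1,2,1}, val = {5,8,3,6}; the twelve zeros are never materialized.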
Runtime specialization optimizes programs based on partial information available only at run time. ...
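As an illustration of the idea (an assumed example, not the paper's technique): once a matrix is loaded, a run-time check can detect structure, here a constant row length, and dispatch to a kernel specialized for it:

    #include <stdbool.h>

    /* Run-time check: does every CSR row hold exactly k nonzeros? */
    bool const_row_length(int n, const int *rowptr, int k) {
        for (int i = 0; i < n; i++)
            if (rowptr[i + 1] - rowptr[i] != k) return false;
        return true;
    }

    /* Kernel specialized for k = 3: the inner loop is fully unrolled
       and rowptr is no longer consulted. */
    void spmv_row3(int n, const int *colind, const double *val,
                   const double *x, double *y) {
        for (int i = 0; i < n; i++) {
            int b = 3 * i;
            y[i] = val[b]     * x[colind[b]]
                 + val[b + 1] * x[colind[b + 1]]
                 + val[b + 2] * x[colind[b + 2]];
        }
    }

A driver would call const_row_length(n, rowptr, 3) once per matrix and fall back to the generic CSR loop when the test fails.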
Sparse computations are ubiquitous in computational codes, with the sparse matrix-vector (SpMV) mult...
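The SpMV kernel in question, in its textbook CSR form (y = A*x, with array names as in the CSR sketch above):

    void spmv_csr(int n, const int *rowptr, const int *colind,
                  const double *val, const double *x, double *y) {
        for (int i = 0; i < n; i++) {   /* one sparse dot product per row */
            double sum = 0.0;
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                sum += val[k] * x[colind[k]];
            y[i] = sum;
        }
    }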
Automatic program comprehension techniques have been shown to improve automatic parallelization of d...
Space-efficient data structures for sparse matrices are an important concept in numerical programmin...
This paper presents a combined compile-time and runtime loop-carried dependence analysis of sparse m...
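One common run-time ingredient of such an analysis, sketched here under assumed names rather than as the paper's method: before running a loop that writes x[idx[i]], test whether idx contains a repeated value; if it does not, the loop carries no dependence and is safe to parallelize.

    #include <stdbool.h>
    #include <stdlib.h>

    /* True iff idx[0..n-1] (values in 0..m-1) has no repeats, i.e.
       the writes x[idx[i]] are pairwise independent. */
    bool loop_is_parallel(int n, int m, const int *idx) {
        bool *seen = calloc(m, sizeof(bool));
        bool ok = true;
        for (int i = 0; i < n && ok; i++) {
            if (seen[idx[i]]) ok = false;  /* repeated target: carried dependence */
            seen[idx[i]] = true;
        }
        free(seen);
        return ok;
    }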
In this paper, we propose a generic method of automatic parallelization for sp...
We have developed a framework based on relational algebra for compiling efficient sparse matrix cod...
Sparse matrix formats encode very large numerical matrices with relatively few nonzeros. They are ty...
This work presents a novel strategy for the parallelization of applications containing sparse matrix...
Matrix computations lie at the heart of most scientific computational tasks. The solution of linear ...
Vector computers have been extensively used for years in matrix algebra to deal with large dense ma...
Runtime specialization optimizes programs based on partial information available only at run time. I...
This paper presents a compiler and runtime framework for parallelizing sparse matrix computations th...