This paper presents a combined compile-time and runtime loop-carried dependence analysis of sparse matrix codes and evaluates its performance in the context of wavefront parallelism. Sparse computations incorporate indirect memory accesses such as x[col[j]] whose memory locations cannot be determined until runtime. The key contributions of this paper are two compile-time techniques for significantly reducing the overhead of runtime dependence testing: (1) identifying new equality constraints that result in more efficient runtime inspectors, and (2) identifying subset relations between dependence constraints such that one dependence test subsumes another one that is therefore eliminated. New equality constraints discovery is enabled by taki...
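The wavefront idea described above can be illustrated with a minimal runtime-inspector sketch. This is not the paper's implementation; the CSR arrays (rowptr, colidx) and the wavefront_levels helper are hypothetical names chosen for illustration. The inspector walks the indirect accesses x[colidx[j]] at runtime and assigns each row a wavefront level: rows in the same level have no loop-carried dependences among them and can execute in parallel.

```python
# Hypothetical sketch of a runtime wavefront inspector for a sparse
# lower-triangular solve in CSR format. Array names are illustrative,
# not taken from the paper under discussion.

def wavefront_levels(rowptr, colidx, n):
    """Assign each row to a wavefront: a row's level is one more than
    the deepest level among the earlier rows it reads through the
    indirect access x[colidx[j]] (a loop-carried dependence)."""
    level = [0] * n
    for i in range(n):
        for j in range(rowptr[i], rowptr[i + 1]):
            c = colidx[j]
            if c < i:  # reads x[c], written by an earlier iteration
                level[i] = max(level[i], level[c] + 1)
    return level

# 4x4 lower-triangular sparsity pattern: row 2 depends on rows 0 and 1,
# row 3 depends only on row 0, so rows 1 and 3 share a wavefront.
rowptr = [0, 1, 3, 6, 8]
colidx = [0, 0, 1, 0, 1, 2, 0, 3]
print(wavefront_levels(rowptr, colidx, 4))  # → [0, 1, 2, 1]
```

An executor would then run all rows of level 0 in parallel, then level 1, and so on; the compile-time techniques in the abstract aim to shrink or eliminate exactly this kind of inspection loop.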
Automatic scheduling in parallel/distributed systems for coarse grained irregular problems such as s...
Standard restructuring compiler tools are based on polyhedral algebra and cannot be used to analyze ...
Sparse computations are ubiquitous in computational codes, with the sparse matrix-vector (SpMV) mult...
This paper presents a compiler and runtime framework for parallelizing sparse matrix computations th...
In this paper, we propose a generic method of automatic parallelization for sp...
Space-efficient data structures for sparse matrices typically yield programs in which not all data d...
Automatic program comprehension techniques have been shown to improve automatic parallelization of d...
Space-efficient data structures for sparse matrices are an important concept in numerical programmin...
We have developed a framework based on relational algebra for compiling efficient sparse matrix cod...
We present compiler technology for generating sparse matrix code from (i) dense matrix cod...
This work presents a novel strategy for the parallelization of applications containing sparse matrix...
Sparse matrix formats encode very large numerical matrices with relatively few nonzeros. They are ty...
Sparse matrix codes are found in numerous applications ranging from iterative numerical ...
Data dependence testing is the basic step in detecting loop level parallelism in numerical programs....