This work presents a novel strategy for parallelizing applications that contain sparse matrix references using the data-parallel paradigm. Our approach is a first step toward automatic parallelization, reducing the number of directives needed in the code. We exploit the semantic relationship among the vectors that make up a high-level data structure to improve the performance of the parallel code, applying sparse privatization and a multi-loop analysis. We also study the building/updating of a sparse matrix at run time, solving the problem of pointers and several levels of indirection on the left-hand side. This paper also gives a detailed analysis of several temporary buffers useful for sparse communications. The evaluati...
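To make the left-hand-side indirection that this abstract refers to concrete, here is a minimal C sketch (the names rowptr, colind, val, and add_to_entry are illustrative, not taken from the paper) of a run-time update through a CSR structure, where the store target is only known once the index arrays have been read:

    #include <stdio.h>

    /* Minimal CSR-style update sketch: the store goes through two levels of
     * indirection (rowptr, then a searched column slot), so the write target
     * is unknown until run time -- the pattern the abstract refers to. */
    void add_to_entry(int *rowptr, int *colind, double *val,
                      int i, int j, double x)
    {
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++) {
            if (colind[k] == j) {
                val[k] += x;   /* indirect write: LHS depends on index arrays */
                return;
            }
        }
    }

    int main(void)
    {
        /* 2x2 matrix [[5, 0], [0, 7]] in CSR form */
        int rowptr[] = {0, 1, 2};
        int colind[] = {0, 1};
        double val[] = {5.0, 7.0};

        add_to_entry(rowptr, colind, val, 1, 1, 3.0);
        printf("%g\n", val[1]);   /* prints 10 */
        return 0;
    }

Because the written location val[k] depends on data (rowptr, colind) rather than loop indices alone, a compiler cannot prove independence statically, which is what motivates the directive-reducing analyses described above.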
Automatic program comprehension techniques have been shown to improve automatic parallelization of d...
Algorithms are often parallelized based on data dependence analysis manually or by means of parallel...
We discuss efficient shared memory parallelization of sparse matrix computatio...
This work discusses the parallelization of an irregular scientific code, the transposition o...
We have developed a framework based on relational algebra for compiling efficient sparse matrix cod...
Sparse matrix formats encode very large numerical matrices with relatively few nonzeros. They are ty...
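As a concrete illustration (generic compressed sparse row, not specific to any single entry above), a 4x4 matrix with five nonzeros reduces to three short arrays:

    /* CSR encoding of the 4x4 matrix
     *   [ 1 0 0 2 ]
     *   [ 0 3 0 0 ]
     *   [ 0 0 0 0 ]
     *   [ 4 0 5 0 ]
     * Only the 5 nonzeros are stored, plus row offsets and column indices. */
    double val[]    = {1.0, 2.0, 3.0, 4.0, 5.0};
    int    colind[] = {0, 3, 1, 0, 2};
    int    rowptr[] = {0, 2, 3, 3, 5};  /* rowptr[i+1]-rowptr[i] = nnz in row i */

Storage grows with the nonzero count rather than with the full n*n extent, which is the space saving these formats are designed around.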
In this paper, we propose a generic method of automatic parallelization for sp...
There is a class of sparse matrix computations, such as direct solvers of systems of linear equati...
Vector computers have been used extensively for years in matrix algebra to deal with large dense ma...
This paper presents a combined compile-time and runtime loop-carried dependence analysis of sparse m...
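A typical run-time half of such a combined compile-time/run-time analysis is an inspector that scans the subscript array before the loop executes. The following is a minimal sketch, with hypothetical names idx and mark, not taken from the paper:

    #include <stdbool.h>
    #include <string.h>

    /* Inspector sketch: returns true if the loop
     *     for (k = 0; k < n; k++) x[idx[k]] += ...;
     * carries no loop-carried dependence, i.e. no index value repeats.
     * 'mark' must point to at least 'range' bytes of scratch space. */
    bool loop_is_parallel(const int *idx, int n, int range, char *mark)
    {
        memset(mark, 0, range);
        for (int k = 0; k < n; k++) {
            if (mark[idx[k]])
                return false;  /* same element written twice: dependence */
            mark[idx[k]] = 1;
        }
        return true;           /* all targets distinct: safe to parallelize */
    }

When the inspector succeeds, the loop can be dispatched in parallel; otherwise execution falls back to a serial (or partitioned) schedule.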
Space-efficient data structures for sparse matrices are an important concept in numerical programmin...
Sparse computations are ubiquitous in computational codes, with the sparse matrix-vector (SpMV) mult...
The sparse matrix-vector multiplication is an important kernel, but is hard to execute efficiently ...
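For reference, the CSR sparse matrix-vector kernel discussed in the two entries above, in its generic textbook form (not any particular paper's implementation):

    /* CSR sparse matrix-vector product y = A*x. The irregular gather
     * x[colind[k]] is what makes this kernel hard to execute efficiently:
     * its memory access pattern depends on the sparsity structure. */
    void spmv_csr(int nrows, const int *rowptr, const int *colind,
                  const double *val, const double *x, double *y)
    {
        for (int i = 0; i < nrows; i++) {
            double sum = 0.0;
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                sum += val[k] * x[colind[k]];
            y[i] = sum;
        }
    }

The outer loop over rows carries no dependences, so it is the usual target for parallelization; the difficulty lies in load balance and in the indirect reads of x.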
Space-efficient data structures for sparse matrices typically yield programs in which not all data d...
Dealing with both dense and sparse data in parallel environments usually leads to two diffe...