Layout methods for dense and sparse data are often seen as two separate problems, each with its own particular techniques. However, they are based on the same basic concepts. This paper studies how to integrate automatic data-layout and partitioning techniques for both dense and sparse data structures. In particular, we show how to include support for sparse matrices or graphs in Hitmap, a library for hierarchical tiling and automatic mapping of arrays. The paper shows that it is possible to offer a single interface to work with both dense and sparse data structures without losing significant performance. Thus, the programmer can use a single, homogeneous programming style, reducing the development effort and simplifying the use of sparse data structures.
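The unified-interface idea can be pictured with a short sketch. The following C code is purely illustrative and does not use the real Hitmap API: it assumes a hypothetical tile handle whose traversal hides whether the local partition is stored densely or in CSR form, so the same matrix-vector kernel runs unchanged over both layouts.

/* Hypothetical sketch (not the Hitmap API): one tile handle, two storage
 * layouts, a single traversal function that hides the difference. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { SHAPE_DENSE, SHAPE_CSR } shape_kind;

typedef struct {
    shape_kind kind;
    int rows, cols;
    double *values;   /* dense: rows*cols entries; CSR: nnz entries   */
    int *col_idx;     /* CSR only: column index of each stored value  */
    int *row_ptr;     /* CSR only: rows+1 offsets into values         */
} tile;

/* Visit every stored element of the tile, regardless of its layout. */
static void tile_foreach(const tile *t,
                         void (*fn)(int r, int c, double v, void *arg),
                         void *arg)
{
    if (t->kind == SHAPE_DENSE) {
        for (int r = 0; r < t->rows; r++)
            for (int c = 0; c < t->cols; c++)
                fn(r, c, t->values[r * t->cols + c], arg);
    } else { /* SHAPE_CSR */
        for (int r = 0; r < t->rows; r++)
            for (int k = t->row_ptr[r]; k < t->row_ptr[r + 1]; k++)
                fn(r, t->col_idx[k], t->values[k], arg);
    }
}

/* Kernel written once: accumulate y = A * x over whichever layout A uses. */
struct spmv_ctx { const double *x; double *y; };

static void spmv_visit(int r, int c, double v, void *arg)
{
    struct spmv_ctx *ctx = arg;
    ctx->y[r] += v * ctx->x[c];
}

int main(void)
{
    /* The same 2x3 matrix, once dense and once in CSR form. */
    double dvals[] = { 1, 0, 2,
                       0, 3, 0 };
    tile dense = { SHAPE_DENSE, 2, 3, dvals, NULL, NULL };

    double svals[] = { 1, 2, 3 };
    int cols[]     = { 0, 2, 1 };
    int rptr[]     = { 0, 2, 3 };
    tile sparse = { SHAPE_CSR, 2, 3, svals, cols, rptr };

    double x[] = { 1, 1, 1 };
    double y1[2] = { 0 }, y2[2] = { 0 };
    struct spmv_ctx c1 = { x, y1 }, c2 = { x, y2 };

    tile_foreach(&dense,  spmv_visit, &c1);   /* identical call for both   */
    tile_foreach(&sparse, spmv_visit, &c2);   /* layouts; kernel unchanged */

    printf("dense : y = [%g, %g]\n", y1[0], y1[1]);
    printf("sparse: y = [%g, %g]\n", y2[0], y2[1]);
    return 0;
}

Both calls produce y = [3, 3]; the kernel code never inspects the storage format, which is the kind of homogeneity the abstract refers to.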