Irregularity arises in different contexts and causes different problems in parallel computing. We discuss some typical sources of irregularity and show how the related problems can be solved automatically by adopting high-level structured parallel programming techniques: programming models that require the programmer to express only the qualitative parallel behaviour of the application, leaving the compiler or the runtime support to take care of irregularity. Experimental results, obtained on both dedicated, homogeneous workstation clusters and undedicated, heterogeneous workstation networks, demonstrate the effectiveness of the proposed approach.
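The abstract above describes models in which the programmer states only the qualitative parallel structure and the runtime absorbs irregular per-task costs. A minimal sketch of that idea is a task farm with dynamic scheduling; the names below (`work`, `items`) are illustrative assumptions, not taken from any of the cited systems.

```python
# Sketch of a structured "task farm": the programmer only says
# "apply `work` to every item"; the pool's dynamic scheduler absorbs
# the irregularity of widely varying per-item costs.
from multiprocessing import Pool

def work(n):
    # Irregular per-item cost: running time grows with n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    items = [10, 100_000, 50, 2_000_000, 3]  # widely varying task sizes
    with Pool(4) as pool:
        # chunksize=1 gives on-demand scheduling: idle workers pick up
        # the next task, so load stays balanced despite irregularity.
        results = pool.map(work, items, chunksize=1)
    print(results)
```

The point of the sketch is that no load-balancing code appears in the program text: irregularity is handled entirely by the runtime's scheduler, which is the property the abstract attributes to structured parallel programming.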
Optimistic parallelization is a promising approach for the parallelization of irregular algorithms: ...
Most data-parallel languages use arrays to support parallelism. This regular data structure allows...
In prior work, we have proposed techniques to extend the ease of shared-memory parallel programming ...
There are many important applications in computational fluid dynamics, circuit simulation and struct...
Abstract. A problem is irregular if its solution requires the computation of some properties for ea...
The characteristics of irregular algorithms make a parallel implementation difficult, especially for...
Data-parallel languages, such as High Performance Fortran or Fortran D, provide a machin...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16...
Irregular computation problems underlie many important scientific applications. Although these probl...
A large class of scientific and engineering applications may be classified as irregular and loosely ...
In this paper we present our experience in implementing several irregular problems using a high-leve...
The Shared Virtual Memory (SVM) is an interesting layout that handles data storage, retrieval and co...
Parallel computing hardware is ubiquitous, ranging from cell-phones with multiple cores to super-com...
Parallel computing promises several orders of magnitude increase in our ability to solve realistic c...
Efficient implementations of irregular problems on vector and parallel architectures generally are h...