In this paper, we present original techniques for the generation and the efficient execution of communication code for parallel loop nests, in the framework of the compilation of HPF-like languages on distributed-memory parallel computers. The problem is studied through its two components: on the one hand, the generation by the compiler of a fast description of communication sets and, on the other hand, the implementation of efficient transfers at run time. Both take into account the characteristics of the distributed array management, notably memory contiguity.
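The abstract above describes compile-time generation of communication sets for loop nests over distributed arrays. A minimal sketch of the idea, assuming a 1-D BLOCK distribution and the owner-computes rule (the function names `block_owner` and `comm_sets` are illustrative, not from the paper):

```python
# Hedged sketch: enumerating communication sets for the loop
#   A[i] = B[i + shift]
# where A and B are n-element arrays BLOCK-distributed over p processors
# and each processor computes the A elements it owns (owner-computes rule).
from math import ceil

def block_owner(i, n, p):
    """Owner of element i under a BLOCK distribution with block size ceil(n/p)."""
    return i // ceil(n / p)

def comm_sets(n, p, shift):
    """Return {(src, dst): [indices of B that src must send to dst]}."""
    sets = {}
    for i in range(n):
        if 0 <= i + shift < n:
            dst = block_owner(i, n, p)          # processor computing A[i]
            src = block_owner(i + shift, n, p)  # processor owning B[i + shift]
            if src != dst:
                sets.setdefault((src, dst), []).append(i + shift)
    return sets

# For n=8, p=2, shift=1: processor 1 must send B[4] to processor 0.
print(comm_sets(8, 2, 1))
```

Note that each send list comes out as a contiguous run of indices, which is one reason memory contiguity of the local distributed-array layout matters for transfer efficiency.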
This paper presents a compiling technique to generate parallel code with explicit local communicatio...
Many computations can be structured as sets of communicating data-parallel tasks. Individual tasks m...
©1996 IEEE. The synthesis of consecutive array operations or array expressions into a com...
We present new techniques for compilation of arbitrarily nested loops with affine dependences for di...
Compilation of parallel loops is one of the most important parts in parallel compilation and optimiz...
An increasing number of programming languages, such as Fortran 90, HPF, and APL, provide...
In distributed memory multicomputers, local memory accesses are much faster than those i...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/19...
Distributed-memory message-passing machines deliver scalable performance but are difficult to progr...
Data-parallel languages allow programmers to use the familiar machine-independent programming style ...
If the iterations of a loop nest cannot be partitioned into independent tasks, data communication ...
Applications with varying array access patterns require dynamically changing array mappings on dist...
Distributed memory multiprocessors are increasingly being used to provide high performance for advan...