The ability to represent, manipulate, and optimize data placement and movement between processors in a distributed address space machine is crucial to enabling compilers to generate efficient code. Data placement is embodied in the concept of data ownership. Data movement can include not just the transfer of data values but the transfer of ownership as well. However, most existing compilers for distributed address space machines either represent these notions in a language- or machine-dependent manner, or represent data or ownership transfer implicitly. In this paper we describe XDP, a set of intermediate language extensions for representing and manipulating data and ownership transfers explicitly in a compiler. XDP is supported by a set of ...
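To make the abstract's distinction concrete, the sketch below models, in plain Python, the difference between transferring a data value (the owner keeps responsibility for the element) and transferring ownership (responsibility moves to another processor). This is a minimal illustrative sketch only, not XDP's actual intermediate-language primitives; the names DistArray, transfer_data, and transfer_ownership are assumptions made for the example.

```python
# Hypothetical sketch: value transfer vs. ownership transfer for a
# block-distributed 1-D array. Names and API are illustrative, not XDP's.

class DistArray:
    """A 1-D array in which every element has exactly one owning processor."""

    def __init__(self, n, num_procs):
        self.values = [0.0] * n
        # Block distribution: element i initially belongs to processor i // block.
        block = (n + num_procs - 1) // num_procs
        self.owner = [min(i // block, num_procs - 1) for i in range(n)]

    def transfer_data(self, i, src, dst, remote_copies):
        """Copy the value of element i from src to dst; ownership stays with src."""
        assert self.owner[i] == src, "only the owner may send the authoritative value"
        remote_copies.setdefault(dst, {})[i] = self.values[i]

    def transfer_ownership(self, i, src, dst):
        """Move responsibility for element i from src to dst; no value copy is implied."""
        assert self.owner[i] == src, "only the current owner may give ownership away"
        self.owner[i] = dst


if __name__ == "__main__":
    a = DistArray(n=8, num_procs=4)
    copies = {}  # per-processor caches of received values
    a.transfer_data(2, src=1, dst=3, remote_copies=copies)  # value moves, owner unchanged
    a.transfer_ownership(2, src=1, dst=3)                   # processor 3 now owns element 2
    print(a.owner)   # [0, 0, 3, 1, 2, 2, 3, 3]
    print(copies)    # {3: {2: 0.0}}
```

The point of separating the two operations, as the abstract argues, is that a compiler which sees both kinds of transfer explicitly in its intermediate representation can analyze and optimize them, rather than leaving them implicit in language- or machine-specific communication code.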
Data-parallel languages allow programmers to use the familiar machine-independent programming style ...
A key problem in retargeting a compiler is to map the compiler's intermediate representation to the ...
Efficient task migration is an important feature in parallel and distributed programs, in particular...
We present a unified approach to locality optimization that employs both data and control transforma...
In parallel programs the most important improvements in execution times can be achieved by the optim...
Distributed-memory multicomputers, such as the Intel iPSC/860, the Intel Paragon, the IBM SP-1/SP-2...
Coordination languages for parallel and distributed systems specify mechanisms for creating ...
The cost of moving data is becoming a dominant factor for performance and energy efficiency in high...
Data-parallel languages, such as High Performance Fortran or Fortran D, provide a machin...
Many parallel languages presume a shared address space in which any portion of a computation can acc...
The Partitioned Global Address Space (PGAS) model is a parallel programming model that aims to improve ...
Effective memory hierarchy utilization is critical to the performance of modern multiprocessor archi...
A variety of historically-proven computer languages have recently been extended to support parallel ...
Distributed systems receive much attention because parallelism and scalability are achieved with rel...
Traditionally, languages were created and intended for sequential machines and were, naturally, sequ...