Distributed-memory machines provide no hardware support for a global address space, so programmers are forced to partition data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to let programmers express their algorithms using a global name space is examined. A general method is presented for analyzing a high-level source program and translating it into a set of independently executing tasks that communicate via messages. When the compiler has enough information, this translation can be carried out at compile time; otherwise, run-time code is generated to implement the required data movement. The analysis required in both situa...
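The compile-time/run-time split described above can be illustrated with a minimal sketch. For a block-distributed array, the owner and local offset of a global index are pure arithmetic, so a compiler can resolve them statically when the index is known; when it is not, the same arithmetic is emitted as run-time code that either reads locally or issues a message. All names here (`owner_and_offset`, `read_global`, `fetch_remote`) are illustrative, not from the paper.

```python
def owner_and_offset(i, n, p):
    """Map global index i of an n-element array, block-distributed
    over p processes, to (owning process rank, local offset)."""
    block = (n + p - 1) // p          # ceiling block size per process
    return i // block, i % block

def read_global(i, my_rank, local, n, p, fetch_remote):
    """Run-time resolution of a global-address-space read A[i]:
    a local memory access when this process owns the element,
    otherwise a message to the owning process."""
    owner, off = owner_and_offset(i, n, p)
    if owner == my_rank:
        return local[off]             # no communication needed
    return fetch_remote(owner, off)   # data movement via a message

# Usage: simulate two processes holding halves of an 8-element array.
partitions = {0: [10, 11, 12, 13], 1: [14, 15, 16, 17]}
fetch = lambda owner, off: partitions[owner][off]
value = read_global(5, 0, partitions[0], 8, 2, fetch)  # remote read
```

When the index expression is analyzable at compile time, `owner_and_offset` folds to constants and the branch disappears, which is exactly the case where the translation needs no run-time support.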
The problem of exploiting the parallelism available in a program to efficiently employ the resources...
The Partitioned Global Address Space (PGAS) model is a parallel programming model that aims to impro...
Global addressing of shared data simplifies parallel programming and complements message passing mod...
Programming nonshared memory systems is more difficult than programming shared memory systems, since...
Programming nonshared memory systems is more difficult than programming shared memory systems, in pa...
A compiler and runtime support mechanism is described and demonstrated. The methods presented are ca...
The goal of the research described is to develop flexible language constructs for writing large data...
Scalable shared-memory multiprocessor systems are typically NUMA (nonuniform memory access) machines...
Outlined here are two methods which we believe will play an important role in any distributed memory...
Exploiting the full performance potential of distributed memory machines requires a careful distribu...
This thesis argues that a modular, source-to-source translation system for distributed-shared memory...
Nonshared-memory parallel computers promise scalable performance for scientific computing needs. Unf...
Distributed-memory programs are often written using a global address space: any process can name any...
This paper describes techniques for translating out-of-core programs written in a data parallel lang...
Partitioned global address space (PGAS) languages like UPC or Fortran provide a global name space to...