This paper examines the performance of a suite of HPF applications on a network of workstations using two different compilation approaches: generating explicit message-passing code, and generating code for a shared address space provided by a fine-grain distributed shared memory (DSM) system. Preliminary experiments indicate that the DSM approach usually performs with only a small slowdown compared to the message-passing approach on regular programs, yet enables efficient execution of non-regular programs.
This paper describes the design of a compiler which can translate out-of-core programs written in a ...
Applications with varying array access patterns require dynamically changing array mappings on dist...
Data-parallel languages, in particular HPF, provide a high-level view of operators over parallel d...
Unlike compiler-generated message-passing code, the coherence mechanisms in shared-memory systems wo...
High Performance Fortran (HPF) is a data-parallel Fortran for distributed-memory multiprocessors. ...
Current parallelizing compilers for message-passing machines only support a limited class of data-pa...
High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computat...
High performance computing (HPC) architectures are specialized machines which can reach their peak p...
In this paper we evaluate the use of software distributed shared memory (DSM) on a message passing m...
Clusters of Symmetric Multiprocessors (SMPs) have recently become very popular as low cost, high p...
We consider the problem of parallel programming in heterogeneous local area networks which connect s...
We compare two paradigms for parallel programming on networks of workstations: message passing and d...
A software distributed shared memory (DSM) system allows shared memory parallel programs to execute ...