Abstract: High performance computing (HPC) architectures are specialized machines which can reach their peak performance only if they are programmed in a way which exploits the idiosyncrasies of the architecture. An important feature of most such architectures is a physically distributed memory, resulting in the requirement to take data locality into account independent of the memory model offered to the user. In this paper we discuss various ways for managing data distribution in a program, comparing in particular the low-level message-passing approach to that in High Performance Fortran (HPF) and other high performance languages. The main part of the paper outlines a method for the specification of data distribution semantics for distribute...
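As a point of reference for the comparison made above, the HPF style can be sketched as follows (an illustrative fragment, not code from the paper; the array sizes and names are assumptions): the programmer declares how arrays are distributed and aligned, and the compiler derives the necessary message passing under the owner-computes rule.

    program hpf_sketch
      ! Illustrative HPF fragment: the distribution is declared, communication is implicit.
      real    :: a(1000), b(1000)
      integer :: i
    !HPF$ PROCESSORS p(4)
    !HPF$ DISTRIBUTE a(BLOCK) ONTO p
    !HPF$ ALIGN b(i) WITH a(i)
      b = 1.0
      ! The compiler turns this data-parallel statement into local loops
      ! plus the boundary exchange between neighbouring processors.
      forall (i = 2:999) a(i) = 0.5 * (b(i-1) + b(i+1))
    end program hpf_sketch

The directives only assert a mapping and do not change the program's semantics, which is what makes the comparison with explicit message passing meaningful; a hand-coded message-passing counterpart is sketched after the related abstracts below.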
We present algorithms for the transportation of data in parallel and distributed systems that would ...
Data partitioning and mapping is one of the most important steps in writing a parallel program; e...
We discuss some techniques for preserving locality of reference in index spaces when mapped to memor...
This paper presents a reusable design of a data distribution framework for data parallel high perfo...
This paper presents HPF+, an optimized version of High Performance Fortran (HPF) for advanced indu...
Massively Parallel Processor systems provide the required computational power to solve most large sc...
Distributed-memory multicomputers, such as the Intel iPSC/860, the Intel Paragon, the IBM SP-1/SP-2...
High Performance Fortran is a set of extensions for Fortran 90 designed to allow specification of da...
The cost of data movement has always been an important concern in high performance computing (HPC) s...
A reference architecture is defined for an object-oriented implementation of domains, arrays, and di...
This paper describes the design of the Fortran90D/HPF compiler, a source-to-source parallel compiler...
Data-parallel languages allow programmers to use the familiar machine-independent programming style ...
Fortran 90D/HPF is a data parallel language with special directives to enable users to specify data ...
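For contrast with the directive-based approach in the abstracts above, the low-level message-passing style mentioned in the opening abstract distributes the data and exchanges boundary values by hand. The following is a minimal MPI sketch of a 1-D BLOCK distribution with a halo exchange (illustrative only; the problem size, the naming, and the assumption that the size divides evenly among processes are ours, not taken from any of the cited papers).

    program block_halo
      use mpi
      implicit none
      integer, parameter :: nglobal = 1000      ! assumed global problem size
      integer :: ierr, rank, nprocs, nlocal, left, right
      integer :: status(MPI_STATUS_SIZE)
      real, allocatable :: u(:)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      ! BLOCK distribution computed by hand (remainder handling omitted):
      ! each process owns nlocal cells plus one ghost cell on each side.
      nlocal = nglobal / nprocs
      allocate(u(0:nlocal+1))
      u = real(rank)

      left  = rank - 1
      right = rank + 1
      if (left  < 0)       left  = MPI_PROC_NULL
      if (right >= nprocs) right = MPI_PROC_NULL

      ! Halo exchange: the communication an HPF compiler would generate
      ! automatically from the distribution directives.
      call MPI_Sendrecv(u(1),        1, MPI_REAL, left,  0, &
                        u(nlocal+1), 1, MPI_REAL, right, 0, &
                        MPI_COMM_WORLD, status, ierr)
      call MPI_Sendrecv(u(nlocal),   1, MPI_REAL, right, 1, &
                        u(0),        1, MPI_REAL, left,  1, &
                        MPI_COMM_WORLD, status, ierr)

      ! A local stencil update can now read u(0:nlocal+1) without
      ! further communication.
      call MPI_Finalize(ierr)
    end program block_halo

In this style the distribution, the ownership computation, and the communication schedule are all the programmer's responsibility, which is precisely the burden the HPF directives are intended to remove.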