Parallel computers of the future will require a memory model that offers a global address space to the programmer while performing equally well under various system configurations. We present a logically shared, physically distributed memory that meets both requirements. This paper introduces the memory system used in the ADAM coarse-grain dataflow machine, which preserves scalability by tolerating latency and offers programmability through its object-based structure. We show how to support data objects of arbitrary size with differing access bandwidth and latency characteristics, and present a possible implementation of this model. The proposed system is evaluated by analysis of the bandwidth and latency characteristics of the three diffe...
Memory access time is a key factor limiting the performance of large-scale, shared-memory multiproce...
This paper discusses some of the issues involved in implementing a shared-address space programming ...
Dataflow architecture as a concept has been around for a long time for parallel computation. In data...
Dataflow-based fine-grain parallel data-structures provide high-level abstraction to easily write pr...
Distributed memory multiprocessor architectures offer enormous computational power, by exploiting th...
The shared data-object model is designed to ease the implementation of parallel applications on loos...
This paper describes our experiences with the development of a Distributed Shared Memory (DSM) based...
The advent of gigabit network technologies has made it possible to combine sets of uni- and multipr...
Increased programmability for concurrent applications in distributed systems requires automatic supp...
Most methods for programming loosely-coupled systems are based on message-passing. Recently, however...
Shared-memory multiprocessors should be designed to provide for correct execution of programs constr...
Although large-scale shared-memory multiprocessors are believed to be easier to program than disjoin...
Multiprocessors with shared memory are considered more general and easier to program than message-pa...
Shared memory models have been criticized for years for failing to model essential realities of para...