Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmer...
Clusters of workstations are often claimed to be a good platform for parallel processing, especially...
The ability to exploit parallel concepts on a large scale has only recently been made possible throu...
International audience[Excerpt from the introduction] The spreading of Distributed Memory Parallel C...
Orca is a language for programming parallel applications on distributed computing systems. Although ...
We investigate the capabilities and shortcomings of Orca, a Modula-like parallel programming language...
The shared data-object model is designed to ease the implementation of parallel applications on loos...
Building the hardware for a high-performance distributed computer system is a lot easier than buildi...
The shared data-object model is designed to ease the implementation of parallel applications on loos...
Orca is a portable, object-based distributed shared memory (DSM) system. This article studies and ev...
Protected object types are one of three major extensions to Ada 83 proposed by Ada 9X. This language...
The programming of parallel and distributed applications is difficult. The proliferation of networ...
Two paradigms for distributed shared memory on loosely‐coupled computing systems are compared: the s...
ORCA is a concurrent and parallel garbage collector for actor programs, which does not require any S...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/88...