We investigate the capabilities and shortcomings of Orca, a Modula-like parallel programming language supporting shared data objects on distributed-memory platforms, by examining implementations of five nontrivial parallel applications: game-tree searching, active chart parsing, image skeletonization, simulation of a chaotic predator/prey system, and polygon overlay.

1 Introduction

Twenty years ago, a small number of visionaries believed that massive parallelism was the future of high-performance computing. Five years ago, that view had become commonplace; today, users are increasingly sceptical of such claims. The main reason is that parallel computers have remained very difficult to program. Many new programming systems have been developed...
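Orca's central abstraction, the shared data object, is an abstract data type whose operations are applied indivisibly to the object's state, even when many processes invoke them concurrently. As a rough analogy only — not Orca syntax, and omitting Orca's replication across distributed memories — a monitor-style object in Python (hypothetical `SharedInteger` class) illustrates the atomicity guarantee:

```python
import threading

class SharedInteger:
    """Rough analogy to an Orca shared data object: an abstract data
    type whose operations each execute atomically. Hypothetical name,
    for illustration only."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def inc(self, amount=1):
        # Each operation runs indivisibly, mimicking Orca's guarantee
        # that operations on a shared object are serializable.
        with self._lock:
            self._value += amount
            return self._value

    def read(self):
        with self._lock:
            return self._value

# Several workers update the shared object concurrently; atomicity
# of inc() means no increments are lost.
counter = SharedInteger()
workers = [threading.Thread(target=lambda: [counter.inc() for _ in range(1000)])
           for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(counter.read())  # 4000
```

Unlike this single-address-space sketch, Orca's runtime may replicate an object on every processor and keep the copies consistent, so read-mostly objects are cheap to access.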