Recent work on scheduling algorithms has resulted in provable bounds on the space taken by parallel computations in relation to the space taken by sequential computations. The results for online versions of these algorithms, however, have been limited to computations in which threads can only synchronize with ancestor or sibling threads. Such computations do not include languages with futures or user-specified synchronization constraints. Here we extend the results to languages with synchronization variables. Such languages include languages with futures, such as Multilisp and Cool, as well as other languages such as ID. The main result is an online scheduling algorithm which, given a computation with w work (total operations), s synchroni...
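The "synchronization variables" this abstract refers to are write-once cells that a consumer thread can block on until a producer fills them; futures in Multilisp-style languages are built from exactly this primitive. The following is a minimal illustrative sketch in Python (names `SyncVar` and `future` are invented for illustration; this is not the paper's scheduling algorithm):

```python
import threading

class SyncVar:
    """Write-once synchronization variable: readers block until a
    value is written, as with futures in Multilisp-style languages.
    Illustrative sketch only, not the paper's algorithm."""
    def __init__(self):
        self._event = threading.Event()
        self._value = None

    def put(self, value):
        # A synchronization variable may be written at most once.
        if self._event.is_set():
            raise ValueError("synchronization variable already written")
        self._value = value
        self._event.set()

    def get(self):
        self._event.wait()  # suspend this thread until a producer writes
        return self._value

def future(fn, *args):
    """A future is then just a spawned thread paired with a SyncVar
    that will eventually hold the thread's result."""
    sv = SyncVar()
    threading.Thread(target=lambda: sv.put(fn(*args))).start()
    return sv

f = future(lambda x: x * x, 7)
print(f.get())  # 49
```

The space-efficiency question the paper studies arises because each outstanding future like `f` keeps its thread's state live until `get` resolves, so the scheduler's choice of which threads to run determines how many such suspended states coexist.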
This paper addresses the problem of extracting the maximum synchronization-free parallelism that...
Multithreading has become a dominant paradigm in general purpose MIMD parallel computation. To execu...
Efficient synchronization is important for achieving good performance in parallel programs, especial...
In this paper, we present a randomized, online, space-efficient algorithm for the general class of p...
Abstract The goal of high-level parallel programming models or languages is to facilitate the writin...
Many of today's high level parallel languages support dynamic, fine-grained parallelism. These ...
The running time and memory requirement of a parallel program with dynamic, lightweight threads depe...
Concurrent assignments are commonly used to describe synchronous parallel computations. We show how ...
This paper considers the problem of scheduling dynamic parallel computations to achieve linear spe...