This paper presents three novel language implementation primitives—lazy threads, stacklets, and synchronizers—and shows how they combine to provide a parallel call at nearly the efficiency of a sequential call. The central idea is to transform parallel calls into parallel-ready sequential calls. Excess parallelism degrades into sequential calls with the attendant efficient stack management and direct transfer of control and data, unless a call truly needs to execute in parallel, in which case it gets its own thread of control. We show how these techniques can be applied to distribute work efficiently on multiprocessors.
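The idea of a "parallel-ready sequential call" can be illustrated with a minimal sketch. This is a hypothetical `LazyThread` class written for this note, not the paper's actual stacklet/synchronizer implementation: by default the call degrades to a plain sequential invocation, and only when true concurrency is requested does it get its own thread of control.

```python
import threading

class LazyThread:
    """Hypothetical illustration of a parallel-ready sequential call:
    cheap sequential execution by default, escalation to a real
    thread only when parallelism is actually needed."""

    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
        self._result = None
        self._thread = None

    def call(self):
        # Default path: excess parallelism degrades into a sequential
        # call, with direct transfer of control and data.
        self._result = self.fn(*self.args)
        return self._result

    def spawn(self):
        # Escalation path: the call truly needs to execute in
        # parallel, so it gets its own thread of control.
        def run():
            self._result = self.fn(*self.args)
        self._thread = threading.Thread(target=run)
        self._thread.start()

    def join(self):
        if self._thread is not None:
            self._thread.join()
        return self._result

# Most calls take the cheap sequential path:
t = LazyThread(lambda x: x * x, 7)
print(t.call())  # → 49
```

The sketch omits what makes the real techniques interesting, namely running both paths through one compiled calling convention so the sequential path pays no overhead for the possibility of escalation.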
In this paper, we present a relatively primitive execution model for fine-grain parallelism, in whic...
This paper describes methods to adapt existing optimizing compilers for sequential languages to prod...
Coarse-grained task parallelism exists in sequential code and can be leveraged to boost the use of ...
In this paper we describe lazy threads, a new approach for implementing multi-threaded execution mod...
Many modern parallel languages support dynamic creation of threads or require multithreading in thei...
This paper describes parallelizing compilers which allow programmers to tune parallel program perfor...
Abstract: Tolerance to communication latency and inexpensive synchronization are critical for genera...
Associated research group: Minnesota Extensible Language Tools. This paper describes parallelizing com...
This thesis studies efficient runtime systems for parallelism management (multithreading) and memory...
The challenge of programming many-core architectures efficiently and effectively requires models and...
Developing efficient programs for many of the current parallel computers is not easy due to the arch...
International audienceWe propose an abstraction to alleviate the difficulty of programming with thre...
An ideal language for parallel programming will have to satisfy simultaneously many conflicting requ...
High-level programming languages and exotic architectures have often been developed together, becau...