Abstract

Parallel programming faces two major challenges: how to efficiently map computations to different parallel hardware architectures, and how to do so in a modular way, i.e., without rewriting the problem-solving code. We propose to treat dependencies as first-class entities in programs. Programming a highly parallel machine or chip can then be formulated as finding an efficient embedding of the computation's data dependency pattern into the underlying hardware's communication layout. With the data dependency pattern of a computation extracted as an explicit entity in a program, one has a powerful tool for dealing with parallelism.
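To make the idea concrete, the following is a minimal sketch (not the paper's actual API; all names here, such as `apply_stencil` and `Dependency`, are illustrative assumptions) of how a data dependency pattern might be expressed as a first-class value, kept separate from the problem-solving code:

```python
# A minimal sketch of treating a data dependency pattern as a first-class value.
# The "dependency" of a 1-D stencil is just a list of index offsets; the kernel
# (the problem-solving code) is a separate combining function. A later mapping
# step could, in principle, embed this pattern onto a hardware communication layout.

from typing import Callable, List, Sequence

# Dependency pattern: output cell i depends on inputs i + o for each offset o.
Dependency = List[int]

def apply_stencil(data: Sequence[float],
                  deps: Dependency,
                  combine: Callable[[List[float]], float]) -> List[float]:
    """Evaluate a computation described only by its dependency pattern `deps`
    and a pointwise combining function `combine`."""
    n = len(data)
    out = []
    for i in range(n):
        neighbours = [data[(i + o) % n] for o in deps]  # periodic boundary
        out.append(combine(neighbours))
    return out

if __name__ == "__main__":
    heat_deps: Dependency = [-1, 0, 1]           # three-point stencil dependency
    average = lambda xs: sum(xs) / len(xs)       # the actual computation
    field = [0.0, 0.0, 1.0, 0.0, 0.0]
    print(apply_stencil(field, heat_deps, average))
```

Because the dependency pattern is an ordinary program value rather than being implicit in loop indices, a scheduler or compiler could inspect it and choose an embedding onto a particular parallel machine without touching the `combine` kernel.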