The use of multiprocessor and multicore computers implies parallel programming. Tools for preparing parallel programs include parallel languages and libraries as well as parallelizing compilers and converters that perform automatic parallelization. The basic approach to parallelism detection is the analysis of data dependences and of the properties of program components, including data use and predicates. In this article, a suite of used-data and predicate sets for program components is proposed, and an algorithm for computing these sets is suggested. The algorithm is based on wave propagation over graphs with cycles and on labelling. This method makes it possible to analyze complex program components, improving data localization and thus providing enhanced data pa...
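The article itself provides no code; the sketch below is only a minimal illustration, under the assumption that "wave propagation on graphs with cycles" can be realized as a worklist-style fixed-point propagation of used-data sets over a control-flow graph, with nodes revisited until their sets stop growing. Every name in it (Node, compute_used_sets, the USE/DEF sets, the example graph) is a hypothetical stand-in, not the authors' data structures or labelling scheme.

#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

using Var = std::string;
using VarSet = std::set<Var>;

struct Node {
    VarSet use;               // data read locally in this component (assumed given)
    VarSet def;               // data written locally in this component (assumed given)
    std::vector<int> succs;   // successor components; edges may form cycles
};

// Propagate used-data sets backwards over the graph until a fixed point.
std::map<int, VarSet> compute_used_sets(const std::vector<Node>& cfg) {
    // Predecessor lists let the wave be re-propagated only where a set changed.
    std::vector<std::vector<int>> preds(cfg.size());
    for (int i = 0; i < (int)cfg.size(); ++i)
        for (int s : cfg[i].succs) preds[s].push_back(i);

    std::map<int, VarSet> used;
    std::queue<int> worklist;
    for (int i = 0; i < (int)cfg.size(); ++i) worklist.push(i);

    while (!worklist.empty()) {
        int n = worklist.front(); worklist.pop();

        // Data used after this component: union over successors.
        VarSet out;
        for (int s : cfg[n].succs) out.insert(used[s].begin(), used[s].end());

        // Data used at or after this component: USE[n] plus (out minus DEF[n]).
        VarSet in = cfg[n].use;
        for (const Var& v : out)
            if (!cfg[n].def.count(v)) in.insert(v);

        if (in != used[n]) {                   // the wave front advances
            used[n] = std::move(in);
            for (int p : preds[n]) worklist.push(p);
        }
    }
    return used;
}

int main() {
    // Tiny illustrative graph with a cycle: 0 -> 1 -> 2 -> 1 and 2 -> 3.
    std::vector<Node> cfg(4);
    cfg[0] = {{"a"}, {"x"}, {1}};
    cfg[1] = {{"x", "i"}, {"t"}, {2}};
    cfg[2] = {{"t"}, {"i"}, {1, 3}};
    cfg[3] = {{"x"}, {}, {}};

    for (const auto& [n, s] : compute_used_sets(cfg)) {   // requires C++17
        std::cout << "component " << n << " uses:";
        for (const auto& v : s) std::cout << ' ' << v;
        std::cout << '\n';
    }
}

In this sketch the worklist plays the role of the wave front: a change at one node re-enqueues its predecessors, so a cycle simply triggers another pass of the wave until the sets converge.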
Technical report. An abstract machine suitable for parallel graph reduction on a shared memory multipr...
Thesis (Ph. D.)--University of Rochester, Dept. of Computer Science, 2012. Speculative parallelizatio...
Parallelization is a technique that boosts the performance of a program beyond optimizations of the ...
Thesis (Ph. D.)--University of Rochester, Dept. of Computer Science, 1991. Simultaneously published i...
This dissertation presents two new developments in the area of computer program preparation for para...
This paper describes a tool using one or more executions of a sequential progr...
Modern computers will increasingly rely on parallelism to achieve high computation rates. Techniques...
With the rise of Chip multiprocessors (CMPs), the amount of parallel computing power will increase s...
Parallel computers can provide impressive speedups, but unfortunately such speedups are difficult to...
In the era of multicore processors, the responsibility for performance gains has been shifted onto s...
During the past decade, the degree of parallelism available in hardware has grown quickly and decisi...
Traditional static analysis fails to auto-parallelize programs with a complex control and data flow....
The limited ability of compilers to find the parallelism in programs is a significant barrier to the us...
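The last two snippets above point at the same obstacle: before parallelizing a loop, a static compiler must prove that no cross-iteration dependences exist, and complex control and data flow often makes that proof impossible at compile time. The following hypothetical C++ example (not drawn from any of the quoted works) shows the canonical pattern:

#include <cstddef>
#include <vector>

// Whether the iterations of this loop are independent depends entirely on the
// runtime contents of idx: if idx holds no duplicate values the loop is fully
// parallel, but if it does, iterations conflict on a[idx[i]]. A purely static
// analysis cannot decide this from the source text alone.
void scatter_add(std::vector<double>& a,
                 const std::vector<double>& b,
                 const std::vector<int>& idx) {
    for (std::size_t i = 0; i < idx.size(); ++i)
        a[idx[i]] += b[i];
}

Runtime and speculative techniques, as described in several of the snippets, execute such loops in parallel anyway and replay or serialize iterations only when a conflict on a[idx[i]] is actually observed.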