For decades, compilers have relied on dependence analysis to determine the legality of their transformations. While this conservative approach has enabled many robust optimizations, when it comes to parallelization there are many opportunities that can only be exploited by changing or reordering the dependences in the program. This paper presents ALTER: a system for identifying and enforcing parallelism that violates certain dependences while preserving overall program functionality. Based on programmer annotations, ALTER exploits new parallelism in loops by reordering iterations or allowing stale reads. ALTER can also infer which annotations are likely to benefit the program by using a test-driven framework. Our evaluation of ALTER dem...
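To make concrete the kind of dependence-violating parallelism described above, the sketch below shows a relaxation-style loop in plain C with a standard OpenMP pragma. This is a hypothetical illustration, not ALTER's annotation syntax (which the snippet above does not show): the in-place updates may read either an old or a freshly written neighbor value, so a loop-carried dependence is deliberately violated, yet the relaxation still converges to the same fixed point.

/*
 * Hypothetical illustration only; NOT ALTER's actual annotations.
 * A relaxation kernel whose iterations tolerate being reordered and
 * reading "stale" neighbor values, expressed with a standard OpenMP
 * pragma to show the shape of loop such systems target.
 */
#include <stdio.h>

#define N     1024
#define STEPS 100

int main(void) {
    static double x[N];
    /* fixed boundary values, zero interior */
    for (int i = 0; i < N; i++)
        x[i] = (i == 0 || i == N - 1) ? 100.0 : 0.0;

    for (int step = 0; step < STEPS; step++) {
        /* Each in-place update may see either the old or the newly
         * written value of a neighbor (a deliberately tolerated
         * "stale read"); the loop-carried dependence is violated,
         * but the relaxation still converges to the same result. */
        #pragma omp parallel for
        for (int i = 1; i < N - 1; i++)
            x[i] = 0.5 * (x[i - 1] + x[i + 1]);
    }

    printf("x[N/2] = %f\n", x[N / 2]);
    return 0;
}

Compiled with cc -fopenmp, the inner loop runs in parallel and exhibits the tolerated stale reads; without the flag the pragma is ignored and the program runs sequentially with identical final output.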
Maximizing performance on modern multicore hardware demands aggressive optimizations. Large amountso...
Compiling for parallelism is a longstanding topic of compiler research. This book describes the fund...
Current parallelizing compilers cannot identify a significant fraction of parallelizable loops becau...
Thesis (Ph. D.)--University of Rochester. Dept. of Computer Science, 2012. Speculative parallelizatio...
This paper describes a tool using one or more executions of a sequential progr...
Modern computers will increasingly rely on parallelism to achieve high computation rates. Techniques...
Parallelization transformations are an important vehicle for improving the performance and scalabili...
Existing compilers often fail to parallelize sequential code, even when a program can be manually...
Parallelization is a technique that boosts the performance of a program beyond optimizations of the ...
Previous research has shown the existence of a huge potential for coarse-grain parallelism in program...
Business demands better computing power because the cost of hardware is declining day...
The goal of parallelizing, or restructuring, compilers is to detect and exploit parallelism in seque...
The limited ability of compilers to find the parallelism in programs is a significant barrier to the us...
This research contributes two advances to the field of empirical study of parallel programming: firs...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16...