Abstract. Dynamic compilation is becoming a dominant compilation technique. A runtime compiler must avoid long compile times by targeting optimizations at the regions where they have a performance impact. For parallelizing optimizations, this restriction can prevent opportunities for parallelization from being exposed. To enable fuller optimization we present a simple interprocedural analysis. Our analysis and parallelization phases are performed as part of the Jikes RVM. Our approach succeeds in finding coarser-grain loops and in increasing performance on a number of benchmark kernels on a research chip multiprocessor architecture.
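As a minimal sketch of the kind of loop such an analysis can expose (an illustrative example, not drawn from the paper's benchmark kernels; the class and method names are hypothetical), consider a Java loop whose body is a method call. A method-local analysis must conservatively assume the call to scale may introduce a cross-iteration dependence, while an interprocedural analysis can prove that each call writes only data[i], making the whole loop a coarse-grain parallelization candidate.

    public class CoarseGrainExample {
        // Hypothetical kernel: each call touches only its own element data[i].
        static void scale(double[] data, int i, double factor) {
            data[i] = data[i] * factor;
        }

        public static void main(String[] args) {
            double[] data = new double[1_000_000];
            java.util.Arrays.fill(data, 1.0);
            // Iterations are independent, but proving this requires looking
            // inside scale(); an analysis confined to main() cannot tell.
            for (int i = 0; i < data.length; i++) {
                scale(data, i, 2.0);
            }
            System.out.println(data[0]); // prints 2.0
        }
    }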