Abstract: An empirical study is presented that examines the potential to parallelize general-purpose software systems. The study is conducted on 13 open-source systems comprising over 14 MLOC. Each for-loop is statically analyzed to determine whether it can be parallelized. A for-loop that can be parallelized is termed a free loop. Free loops can be easily parallelized using tools such as OpenMP. For the loops that cannot be parallelized, the various inhibitors to parallelization are determined and tabulated. The data shows that the most prevalent inhibitor, by far, is a function called within the for-loop that has side effects. This single inhibitor poses the greatest challenge in adapting and re-engineering systems to better utilize modern multicore hardware.
With the multicore trend, the need for automatic parallelization is more prono...
Parallelization is a technique that boosts the performance of a program beyond optimizations of the ...
Many sequential applications are difficult to parallelize because of unpredictable control flow, ind...
The performance of many parallel applications relies not on instruction-level parallelism but on loo...
Thesis (Ph.D.), University of Rochester, Dept. of Computer Science, 2012. Speculative parallelizatio...
This paper describes a tool using one or more executions of a sequential progr...
Previous research has shown the existence of huge potential for coarse-grain parallelism in program...
Multi-core architectures have become more popular due to better performance, reduced heat dissipatio...
Parallel software is now required to exploit the abundance of threads and processors in modern multi...
Current parallelizing compilers cannot identify a significant fraction of parallelizable loops becau...
Abstract: A parallel for loop, a typical example of task parallelism, assigns different iterations of the...
Parallel processing has been used to increase performance of computing systems for the past several ...
Current parallelizing compilers cannot identify a significant fraction of parallelizable loops becau...
Graduation date: 2009. General-purpose computer systems have seen increased performance potential thro...
With the rise of chip-multiprocessors, the problem of parallelizing general-purpose programs has onc...