In this paper, we have presented the design and evaluation of a compiler system, called APE, for automatic parallelization of scientific and engineering applications on distributed-memory computers. APE is built on top of the SUIF compiler. It extends SUIF with capabilities for parallelizing loops with non-uniform cross-iteration dependencies and for handling loops that have indirect access patterns. We have evaluated the effectiveness of SUIF with several CFD test codes, and found that SUIF handles uniform loops over dense and regular data structures very well. For non-uniform loops, an innovative and efficient parallelization approach based on convex theory has been proposed and is being implemented. We have also...
Abstract. This paper presents a compilation technique that performs automatic parallelization of can...
The goal of parallelizing, or restructuring, compilers is to detect and exploit parallelism in seque...
This paper presents an overview of automatic program parallelization techniques. It covers dependenc...
Characteristics of full applications found in scientific computing industries today lead to challeng...
The goal of this dissertation is to give programmers the ability to achieve high performance by focu...
Modern computers will increasingly rely on parallelism to achieve high computation rates. Techniques...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16...
Compiling for parallelism is a longstanding topic of compiler research. This book describes the fund...
Parallel processing has been used to increase performance of computing systems for the past several ...
INTRODUCTION The SPMD (Single-Program Multiple-Data Stream) model has been widely adopted as the ba...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16...
Data-parallel languages, such as High Performance Fortran or Fortran D, provide a machin...
Clusters of Symmetric Multiprocessor (SMP) machines are increasingly becoming the norm for high performa...
Over the past two decades tremendous progress has been made in both the design of parallel architect...
Parallel computing is regarded by most computer scientists as the most likely approach for significa...