We have developed a dataflow framework that provides a basis for rigorously defining strategies for using runtime preprocessing methods on distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support provides a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations that allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocesso...
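To make the reuse idea concrete, the following is a minimal, self-contained C sketch under assumed names (it is not the runtime interface described in the abstract, and interprocessor communication is modeled by a stub so the example compiles on its own). An inspector records which indirect references fall off-processor, the fetched copies are cached, and a second loop over the same indirection array reuses them instead of communicating again; the cache is invalidated only when it cannot be shown that the indirection array and the remote data are unmodified.

/* Hypothetical inspector/executor-style sketch of schedule and data reuse. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int    *remote_idx;   /* off-processor indices this loop will read */
    int     n_remote;     /* number of off-processor elements          */
    double *copies;       /* local buffer holding fetched copies       */
    int     valid;        /* nonzero while the copies may be reused    */
} GatherSchedule;

/* "Inspector": scan the indirection array once and record which
 * references fall outside the locally owned block [lo, hi). */
static GatherSchedule *inspect(const int *ia, int n, int lo, int hi) {
    GatherSchedule *s = malloc(sizeof *s);
    s->remote_idx = malloc(n * sizeof *s->remote_idx);
    s->n_remote = 0;
    for (int i = 0; i < n; i++)
        if (ia[i] < lo || ia[i] >= hi)
            s->remote_idx[s->n_remote++] = ia[i];
    s->copies = malloc((s->n_remote ? s->n_remote : 1) * sizeof *s->copies);
    s->valid = 0;
    return s;
}

/* Stand-in for interprocessor communication (e.g., a message-passing gather). */
static void fetch_remote(GatherSchedule *s, const double *global_x) {
    for (int i = 0; i < s->n_remote; i++)
        s->copies[i] = global_x[s->remote_idx[i]];
    s->valid = 1;
}

int main(void) {
    enum { N = 12, LO = 0, HI = 6 };   /* this "processor" owns x[0..5] */
    double x[N];
    for (int i = 0; i < N; i++) x[i] = 10.0 * i;

    int ia[4] = { 2, 7, 9, 4 };        /* indirection array */

    /* The inspector runs once; its cost is amortized over every loop
     * that uses the same indirection pattern. */
    GatherSchedule *sched = inspect(ia, 4, LO, HI);
    fetch_remote(sched, x);

    /* Both passes read x[ia[i]]; because neither ia nor the remote part
     * of x changed in between, the second pass reuses the schedule and
     * the fetched copies instead of communicating again. */
    for (int pass = 0; pass < 2; pass++) {
        if (!sched->valid) fetch_remote(sched, x);   /* only if invalidated */
        double sum = 0.0;
        int r = 0;
        for (int i = 0; i < 4; i++)
            sum += (ia[i] >= LO && ia[i] < HI) ? x[ia[i]] : sched->copies[r++];
        printf("pass %d: sum = %g\n", pass, sum);
    }

    /* A write to the remote data (or to ia) would invalidate the cache: */
    sched->valid = 0;

    free(sched->remote_idx); free(sched->copies); free(sched);
    return 0;
}

The compiler analysis sketched in the abstract would be what proves, at compile time, that the `valid` flag need not be cleared between the two loops.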
Languages such as Fortran D provide irregular distribution schemes that can efficiently support irre...
Current parallelizing compilers cannot identify a significant fraction of parallelizable loops becau...
Many parallel programs require run-time support to implement the communication caused by indirect da...
This paper describes two new ideas by which an HPF compiler can deal with irregular computations eff...
Outlined here are two methods which we believe will play an important role in any distributed memory...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16...
This paper outlines two methods which we believe will play an important role in any distributed memo...
This paper presents methods that make it possible to efficiently support irregular problems using da...
Data parallel languages like High Performance Fortran (HPF) are emerging as the architecture indepen...
Data-parallel languages, such as High Performance Fortran or Fortran D, provide a machin...
In recent years, distributed memory parallel machines have been widely recognized as the most likely...