Current Fortran optimizing compilers often include source-to-source transformations for automatic parallelization or vectorization of loops. Lower-level optimizations, such as those that aim to exploit ILP, are performed at later stages, at the assembly-language level, and do not profit from information available at the source-code level, such as array subscripts for data dependence analysis. Low-level optimizations could generate better code if this high-level information were available. In this paper we describe a framework in which low-level code, close to machine language, maintains high-level constructs such as loops, source-level variable names, etc. This allows low- and high-level optimizations to be performed in the same framework, g...
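To make the idea concrete, here is a minimal sketch in Python, entirely hypothetical (the names Loop, MemOp, and may_depend are not from the paper), of a machine-level instruction stream that keeps the enclosing loop and the original array subscripts attached to each memory operation, so that a dependence test can still reason about subscripts the way a source-level pass would:

# Hypothetical sketch: a low-level IR that keeps high-level facts
# (loop bounds, source variable names, array subscripts) attached to
# machine-like operations, so dependence analysis can still use them.
from dataclasses import dataclass

@dataclass
class Loop:                       # retained high-level loop construct
    index: str                    # source-level induction variable, e.g. "i"
    lower: int
    upper: int
    step: int = 1

@dataclass
class MemOp:                      # machine-like load/store
    kind: str                     # "load" or "store"
    array: str                    # source-level array name, e.g. "A"
    subscript: tuple              # affine subscript (coeff, offset): coeff*i + offset
    loop: Loop                    # enclosing loop, still visible at this level

def may_depend(a: MemOp, b: MemOp) -> bool:
    """Very naive dependence check using the preserved subscripts:
    flags two references to the same array when at least one writes
    and both use the same affine subscript (a trivial special case)."""
    if a.array != b.array:
        return False
    if "store" not in (a.kind, b.kind):
        return False              # two loads never conflict
    return a.subscript == b.subscript

# Example: DO i = 1, 100:  A(i) = A(i) + 1, lowered but still analyzable
loop  = Loop(index="i", lower=1, upper=100)
load  = MemOp("load",  "A", (1, 0), loop)
store = MemOp("store", "A", (1, 0), loop)
print(may_depend(load, store))    # True: the store writes what the load reads

A real implementation would carry full affine subscript expressions and a proper dependence test (e.g., GCD or Banerjee), but the point of the sketch is that this source-level information survives down to the low-level representation.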
Over the past decade, microprocessor design strategies have focused on increasing the computational ...
In recent years, methods for analyzing and parallelizing sequential code using data analysis and loo...
Every compiler passes code through several stages, each a sort of mini-compiler of its own. Thus...
In this paper we describe a strategy that will make it possible, after applying a small number of chang...
A machine description facility allows compiler writers to specify machine execution constraints to t...
Optimizing compilers have a long history of applying loop transformations to C and Fortran...
Application codes reliably underperform the advertised performance of existing architectures, compi...
Most people write their programs in high-level languages because they want to develop their algorith...
This paper presents a technique for representing the high-level semantics of p...
This paper describes transformation techniques for out-of-core programs (i.e., those that deal with...
We are facing an increasing performance gap between processor and memory speed on today'...
High Performance Fortran (HPF), as well as its predecessor Fortran D, has attracted considerable atten...
Developing efficient programs for many of the current parallel computers is not easy due to the arch...
High Performance Fortran (HPF), as well as its predecessor Fortran D, has attracted considerable atte...