Execution of a program almost always involves multiple address spaces, possibly across separate machines, and crossing these boundaries is costly. Here, an approach to reducing such costs using compiler optimization techniques is presented. This paper elaborates on the overall vision and, as a concrete example, describes how this compiler-assisted approach can be applied to the optimization of system call performance on a single host. Preliminary results suggest that this approach can improve performance significantly, depending on the program's system call behavior.
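To make the cost being targeted concrete: every system call crosses the user/kernel address-space boundary, so reducing the number of crossings is one way a compiler-assisted approach could help. The sketch below is only an illustration of that general idea, not the paper's actual technique: it hand-simulates what a compiler might do by clustering several consecutive `write()` calls into a single `writev()`, turning N kernel crossings into one. (`os.writev` is POSIX-only; the function names here are mine, chosen for the example.)

```python
import os
import tempfile

def write_naive(fd, chunks):
    """One write() per chunk: N user/kernel boundary crossings."""
    calls = 0
    for chunk in chunks:
        os.write(fd, chunk)
        calls += 1
    return calls

def write_clustered(fd, chunks):
    """All chunks in a single writev(): one boundary crossing."""
    os.writev(fd, chunks)
    return 1

chunks = [b"hello ", b"system ", b"call ", b"clustering\n"]

with tempfile.TemporaryFile() as f1, tempfile.TemporaryFile() as f2:
    naive_calls = write_naive(f1.fileno(), chunks)
    clustered_calls = write_clustered(f2.fileno(), chunks)
    # Same bytes reach the file either way; only the crossing count differs.
    f1.seek(0); f2.seek(0)
    assert f1.read() == f2.read()
    print(naive_calls, clustered_calls)  # 4 1
```

Whether such clustering is legal in a real program depends on intervening side effects between the calls, which is exactly the kind of safety analysis a compiler is positioned to perform.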
The performance of the memory hierarchy has become one of the most critical elements in the performa...
This paper describes how the use of software libraries, which is prevalent in high perform...
This paper describes transformation techniques for out-of-core programs (i.e., those that deal with...
As systems become more complex, there are increasing demands for improvement with respect to attribu...
This paper presents a new approach to solving the DSP address assignment problem. A minimum cost cir...
This paper presents a new approach to solving the DSP address code generation problem. A minimum cos...
Partitioned Global Address Space (PGAS) languages promise to deliver improved programmer productivi...
Most compiler optimizations focus on saving time and sometimes occur at the expense of increasing si...
As transistor sizes shrink and architects put more and more cores on chip, computer systems become ...
This paper presents compiler algorithms to optimize out-of-core programs. These algorithms consider ...
To execute a shared memory program efficiently, we have to manage memory consistency with low overhe...
To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger nu...
This paper presents DSP code optimization techniques, which originate from dedicated memory address ...
Compiler optimizations are difficult to implement and add complexity to a compiler. For this reason,...
The highest optimization level of a compiler, such as -O3 in GCC, does not ensure the best performanc...