Code motion is well known as a powerful technique for the optimization of sequential programs. It improves run-time efficiency by avoiding unnecessary recomputation of values, and it can even achieve computationally optimal results, i.e., results where no program path can be improved any further by means of semantics-preserving code motion. In this paper we present a code motion algorithm that achieves this optimality result for parallel programs for the first time. Fundamental is the framework of [KSV1], which shows how to perform optimal bitvector analyses for parallel programs as easily and as efficiently as for sequential ones. Moreover, the analyses can easily be adapted from their sequential counterparts. This is demonstrate...
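The recomputation-avoidance effect this abstract describes can be shown on a small hand-worked case. This is illustrative only: the names are invented and the sketch shows the kind of transformation code motion performs, not the paper's algorithm for parallel programs.

```python
# Illustrative sketch: the effect of code motion on a simple loop.
# "before" recomputes a loop-invariant value on every iteration;
# "after" shows the result of hoisting that computation out once.

def before(xs, a, b):
    total = 0
    for x in xs:
        total += x * (a + b)   # a + b is recomputed on every iteration
    return total

def after(xs, a, b):
    t = a + b                  # loop-invariant value computed once
    total = 0
    for x in xs:
        total += x * t
    return total
```

The transformation is semantics-preserving: both versions compute the same result, but the second evaluates `a + b` once instead of once per iteration.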
We study the parallel computation of dynamic programming. We consider four important dynamic program...
Most current compiler analysis techniques are unable to cope with the semantics introduced by explic...
Efficient performance tuning of parallel programs is often hard. Optimization is often done when the...
Parallel languages are of growing interest, as they are more and more supported by modern hardware e...
Eliminating partially dead code has proved to be a powerful technique for the runtime optimi...
In this paper, we emphasize the practicality of lazy code motion by giving explicit directions for its ...
We present a transformational system for extracting parallelism from programs. Our transformations g...
An implementation-oriented algorithm for lazy code motion is presented that minimizes the number of ...
We present a bit-vector algorithm for the optimal and economical placement of computations within fl...
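Several of the entries above rest on bit-vector dataflow analyses. The following is a minimal sketch of one such analysis (availability of expressions), with the flow graph, gen/kill sets, and expression numbering invented for illustration; it is not the placement algorithm of any of the papers listed here.

```python
# A minimal sketch of a bit-vector dataflow analysis (availability of
# expressions) of the kind bit-vector placement algorithms build on.
# Graph, gen/kill sets, and expression numbering are invented examples.

def available_expressions(succs, gen, kill, entry, n_exprs):
    """Iterate AVAIL-in/out to a fixed point; Python ints serve as bit vectors."""
    all_bits = (1 << n_exprs) - 1
    nodes = list(succs)
    preds = {n: [p for p in nodes if n in succs[p]] for n in nodes}
    avail_in = {n: all_bits for n in nodes}
    avail_out = {n: all_bits for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            # Meet over predecessors: an expression is available on entry
            # to n only if it is available on exit from every predecessor.
            new_in = 0 if n == entry else all_bits
            for p in preds[n]:
                new_in &= avail_out[p]
            new_out = (new_in & ~kill[n]) | gen[n]
            if (new_in, new_out) != (avail_in[n], avail_out[n]):
                avail_in[n], avail_out[n] = new_in, new_out
                changed = True
    return avail_in, avail_out
```

On a diamond-shaped graph where one branch kills an expression, the analysis reports it unavailable at the join point, which is exactly the kind of information a placement algorithm consults when deciding where a computation may safely be inserted or removed.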
In this paper we address a resource-constrained optimization problem for behavioral descriptions con...
In parallel programming, optimizing code generally poses greater challenges than for s...
This paper provides a unifying mathematical proof which replaces a mechanical certification ...
In the high-level synthesis of ASICs or in the code generation for ASIPs, the presence of conditiona...
This paper focuses on lazy code motion as proposed by Knoop, Ruthing, and Steffen and modified by Dr...