Automatic partitioning, scheduling, and code generation are of major importance in the development of compilers for massively parallel architectures. In this thesis we consider these problems, propose efficient algorithms for automatic scheduling and code generation, and analyze their performance. In the first part of the thesis, we consider compile-time static scheduling when communication overhead is not negligible. We provide a new quantitative analysis of granularity to identify the impact of partitioning on optimal scheduling. We propose a new algorithm, DSC, for scheduling on an unbounded number of processors, which outperforms existing algorithms in both complexity and performance. Furthermore, we study algorithms for sche...
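The setting this abstract describes can be illustrated with a minimal sketch: earliest-start scheduling of a task DAG on an unbounded number of processors when communication costs are not negligible. This is a toy model of the problem, not the DSC algorithm itself; the task graph, costs, and function name below are hypothetical.

```python
"""Toy earliest-start scheduling with communication overhead.

Each task runs on its own processor (unbounded-processor model) and may
start only once every predecessor has finished AND its data has arrived
(predecessor finish time + edge communication cost).
"""

def earliest_schedule(cost, preds):
    """cost: task -> compute time; preds: task -> [(parent, comm_cost)].
    Returns (start_times, makespan). Assumes the graph is acyclic."""
    start = {}

    def start_of(u):
        # Memoized recursion over the DAG: a task's start time is the
        # latest arrival of data from any predecessor.
        if u not in start:
            start[u] = max(
                (start_of(p) + cost[p] + c for p, c in preds.get(u, [])),
                default=0,
            )
        return start[u]

    for u in cost:
        start_of(u)
    makespan = max(start[u] + cost[u] for u in cost)
    return start, makespan


# Hypothetical fork-join graph: a fans out to b and c, which join at d;
# every edge carries a communication cost of 4.
cost = {"a": 2, "b": 3, "c": 1, "d": 2}
preds = {
    "b": [("a", 4)],
    "c": [("a", 4)],
    "d": [("b", 4), ("c", 4)],
}

start, makespan = earliest_schedule(cost, preds)
print(start)     # -> {'a': 0, 'b': 6, 'c': 6, 'd': 13}
print(makespan)  # -> 15
```

Note that more than half of the makespan here is communication delay; clustering schedulers such as DSC attack exactly this by co-locating tasks so that heavy edges become free intra-processor transfers.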
The objective of this research is to propose a low-complexity static scheduling and allocation algor...
In this paper we present several algorithms for decomposing all-to-many personalized communication i...
Communication overhead is one of the main factors that can limit the speedup of parallel programs on...
We describe a parallel programming tool for scheduling static task graphs and generating the appropr...
To parallelize an application program for a distributed memory architecture, we can use a precedence...
Inter-process communication and scheduling are notorious problem areas in the design of real-time sy...
This thesis presents a new unified algorithm for cluster assignment and acyclic region scheduling in ...
Task mapping and scheduling are two very difficult problems that must be addressed when a sequential...
In this paper, we study the problem of scheduling parallel loops at compile-time for a heterogeneous...
Thesis (Ph. D.)--University of Rochester. Dept. of Computer Science, 1993. Simultaneously published...
This thesis explores a fundamental issue in large-scale parallel computing: how to schedule tasks on...
In this paper, we propose a parallel randomized algorithm, called Parallel Fast Assignment using Sea...
Parallel computer systems with distributed shared memory have a physically distributed main memory a...
Code generation in a compiler is commonly divided into several phases: instruction selection, schedu...
220 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1986. This dissertation discusses s...