Parallel computer systems with distributed shared memory have a physically distributed main memory and a global coherent address space for programs. When programming these machines, two aspects must be handled carefully to use them efficiently, especially with a large number of processors: load balance across the processors and minimization of interprocessor communication. This thesis introduces two new techniques, both based on the principle that parallel tasks are scheduled onto processors in a specific manner. The first technique, user-driven scheduling, was developed for programs where the execution time of tasks can be estimated by the programmer and where the memory access pattern is not complex. Templates are abstract ...
Static scheduling of a program represented by a directed task graph on a multiprocessor system to mi...
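The abstract above concerns static scheduling of a directed task graph on a multiprocessor system. As a concrete illustration of the problem, here is a minimal list-scheduling sketch in Python: tasks are visited in topological order and each is placed on the processor that lets it finish earliest, charging an edge's communication cost only when producer and consumer run on different processors. The task names, weights, and two-processor setup are invented for illustration and are not taken from the cited work.

```python
def list_schedule(tasks, deps, num_procs):
    """Greedy list scheduling of a node- and edge-weighted DAG.

    tasks: dict task -> compute cost
    deps:  dict (u, v) -> communication cost (u must finish before v)
    Returns dict task -> (processor, start, finish).
    """
    # Build predecessor lists.
    preds = {t: [] for t in tasks}
    for (u, v), c in deps.items():
        preds[v].append((u, c))

    # Topological order by depth-first search over predecessors.
    order, seen = [], set()
    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for u, _ in preds[t]:
            visit(u)
        order.append(t)
    for t in tasks:
        visit(t)

    proc_free = [0.0] * num_procs  # time each processor becomes idle
    schedule = {}
    for t in order:
        best = None
        for p in range(num_procs):
            # Data from a predecessor on another processor incurs that
            # edge's communication cost; on the same processor it is free.
            ready = 0.0
            for u, c in preds[t]:
                up, _, ufin = schedule[u]
                ready = max(ready, ufin + (c if up != p else 0.0))
            start = max(ready, proc_free[p])
            finish = start + tasks[t]
            if best is None or finish < best[0]:
                best = (finish, p, start)
        finish, p, start = best
        proc_free[p] = finish
        schedule[t] = (p, start, finish)
    return schedule

# Illustrative diamond-shaped DAG: a feeds b and c, which feed d.
tasks = {"a": 2, "b": 3, "c": 3, "d": 1}
deps = {("a", "b"): 2, ("a", "c"): 2, ("b", "d"): 1, ("c", "d"): 1}
sched = list_schedule(tasks, deps, 2)
```

On this example the greedy schedule keeps a and b on one processor, moves c to the second, and completes at time 8. A sketch like this is only a baseline; the heuristics discussed in these abstracts refine the task priority order and the processor-selection rule.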
Existing heuristics for scheduling a node and edge weighted directed task graph to multiple processo...
Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a p...
To optimize programs for parallel computers with distributed shared memory two main problems need to...
To parallelize an application program for a distributed memory architecture, we can use a precedence...
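Precedence-graph schedulers typically rank tasks by a priority such as the bottom level: the longest cost-weighted path from a task to an exit task. As a small sketch, assuming the same dict-based graph representation as above (node costs plus weighted edges, both invented for illustration), the bottom levels can be computed recursively:

```python
from functools import lru_cache

def bottom_levels(tasks, deps):
    """Bottom level of each task: its own cost plus the most expensive
    path (communication + computation) down to an exit task."""
    succs = {t: [] for t in tasks}
    for (u, v), c in deps.items():
        succs[u].append((v, c))

    @lru_cache(maxsize=None)
    def bl(t):
        if not succs[t]:        # exit task: only its own cost
            return tasks[t]
        return tasks[t] + max(c + bl(v) for v, c in succs[t])

    return {t: bl(t) for t in tasks}
```

Scheduling tasks in decreasing bottom-level order prioritizes the critical path, which is the common ingredient of the list-scheduling heuristics these abstracts survey.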
In this paper, we survey algorithms that allocate a parallel program represented by an ed...
Many parallel applications from scientific computing show a modular structure and are there...
Thesis (Ph. D.)--University of Rochester. Dept. of Computer Science, 1993. Simultaneously published...
Parallel computers with distributed shared memory have a physically distributed memory and a globa...
The development of networks and multi-processor computers has allowed us to solve problems in paralle...
Scheduling is very important for an efficient utilization of modern parallel computing systems. In t...
Communicated by Susumu Matsumae This paper studies task scheduling algorithms which schedule a set o...
Scheduling computations with communications is the theoretical basis for achieving efficient paralleli...