DG III, European Commission; Russian Academy of Sciences; Russian Foundation for Basic Research; Russian State Committee of Higher Education; Yaroslavl Regional Government. 4th International Conference on Parallel Computing Technologies, PaCT 1997 -- 8 September 1997 through 12 September 1997. In this study, we develop a new static scheduling scheme which integrates parallel programming environments with parallel database systems to optimize program execution. In parallel programming, a sequential program is first converted to a task graph, either with programmer guidance or by a restructuring compiler. Next, a scheduling algorithm assigns the nodes of the task graph to processors. However, a question arises when some tasks have to access a...
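The entry above outlines the usual two-step pipeline: a task graph is extracted from a sequential program, and a static scheduler then maps its nodes onto processors. As a rough illustration of that mapping step only (not the scheme proposed in the cited paper), the Python sketch below greedily assigns the tasks of a small DAG to the earliest-available processor; the task names, costs, and longest-task-first priority are invented for the example.

```python
# Minimal list-scheduling illustration (hypothetical example, not the cited scheme):
# assign DAG tasks to processors in precedence order, ignoring communication costs.
from collections import defaultdict


def list_schedule(tasks, edges, num_procs):
    """tasks: {name: compute_cost}; edges: [(pred, succ), ...]."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)

    proc_free = [0.0] * num_procs   # time at which each processor becomes free
    finish = {}                     # finish time of each scheduled task
    schedule = {}                   # task -> (processor, start, finish)
    remaining = set(tasks)

    while remaining:
        # tasks whose predecessors have all finished
        ready = [t for t in remaining if preds[t].issubset(finish)]
        # simple priority: longest task first (a stand-in for real priorities)
        ready.sort(key=lambda t: -tasks[t])
        for t in ready:
            data_ready = max((finish[p] for p in preds[t]), default=0.0)
            proc = min(range(num_procs), key=lambda p: proc_free[p])
            start = max(proc_free[proc], data_ready)
            end = start + tasks[t]
            proc_free[proc] = end
            finish[t] = end
            schedule[t] = (proc, start, end)
            remaining.remove(t)
    return schedule


if __name__ == "__main__":
    tasks = {"A": 2, "B": 3, "C": 1, "D": 2}   # node weights = compute costs
    edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
    for task, (proc, start, end) in sorted(list_schedule(tasks, edges, 2).items()):
        print(f"{task}: P{proc} [{start}, {end}]")
```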
The development of networks and multi-processor computers has allowed us to solve problems in paralle...
Parallel computer systems with distributed shared memory have a physically distributed main memory a...
In the current work, we derive a complete approach to optimization and automatic parallelization of ...
In this paper, we investigate two scheduling approaches for multicomputer-based parallel database sy...
Static scheduling of a program represented by a directed task graph on a multiprocessor system to mi...
The amount of data stored by enterprises is increasing rapidly. The volume of data stored in databases is ap...
This paper studies task scheduling algorithms which schedule a set o...
To parallelize an application program for a distributed memory architecture, we can use a precedence...
Parallel database machines are meant to obtain high performance in transaction processing, both in t...
In this paper, we survey algorithms that allocate a parallel program represented by an edge-weighted...
One of the main obstacles in obtaining high performance from message-passing multicomputer systems i...
An algorithm for compile-time static scheduling of task graphs onto multiprocessors is proposed. The pr...
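Several of the entries above concern compile-time list-scheduling heuristics, which typically rank tasks by a priority such as the bottom level of a node (the longest path from the node to an exit, counting compute and communication costs). As a minimal sketch with invented node and edge weights (not taken from any of the cited papers), the following computes bottom levels for a small task graph:

```python
# Hedged sketch: bottom levels as a list-scheduling priority metric.
# The graph below is a hypothetical example for illustration only.
from functools import lru_cache


def bottom_levels(compute, comm):
    """compute: {task: cost}; comm: {(pred, succ): communication cost}."""
    succs = {t: [] for t in compute}
    for (u, v), w in comm.items():
        succs[u].append((v, w))

    @lru_cache(maxsize=None)
    def blevel(t):
        # longest path to an exit node, including this node's compute cost
        tail = max((w + blevel(v) for v, w in succs[t]), default=0.0)
        return compute[t] + tail

    return {t: blevel(t) for t in compute}


if __name__ == "__main__":
    compute = {"A": 2, "B": 3, "C": 1, "D": 2}
    comm = {("A", "B"): 1, ("A", "C"): 4, ("B", "D"): 1, ("C", "D"): 1}
    # Tasks are usually considered in order of decreasing bottom level.
    for task, level in sorted(bottom_levels(compute, comm).items(),
                              key=lambda kv: -kv[1]):
        print(f"{task}: b-level = {level}")
```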