This thesis presents a novel program parallelization technique that combines dynamic and static scheduling. It exploits a problem-specific pattern derived from prior knowledge of the targeted problem abstraction. Suited to complex parallelization problems such as memory-constrained, data-intensive all-to-all comparison, the technique delivers more robust and faster task scheduling than state-of-the-art techniques. It achieves good performance in data-intensive bioinformatics applications.
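The scheduling idea described above can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the thesis's actual algorithm: it statically assigns all C(n, 2) pairwise-comparison tasks to workers, preferring a worker that already holds an operand in memory, and evicts a worker's resident data when the per-worker memory cap would be exceeded (the point at which a real hybrid scheme would fall back to dynamic, run-time decisions). All names (`schedule_all_to_all`, `mem_capacity`) are invented for this sketch.

```python
from itertools import combinations

def schedule_all_to_all(n_items, n_workers, mem_capacity):
    """Greedy static assignment of all pairwise-comparison tasks.

    Each worker may keep at most `mem_capacity` data items resident.
    Pairs are placed on the worker that needs to load the fewest new
    items (data reuse), breaking ties by lightest memory load.
    """
    assert mem_capacity >= 2  # a pair needs both operands resident
    loaded = [set() for _ in range(n_workers)]  # resident items per worker
    assignment = {}
    for a, b in combinations(range(n_items), 2):
        feasible = [w for w in range(n_workers)
                    if len(loaded[w] | {a, b}) <= mem_capacity]
        if feasible:
            # prefer reuse of already-loaded items, then lightest load
            w = min(feasible,
                    key=lambda w: (len({a, b} - loaded[w]), len(loaded[w])))
        else:
            # memory exhausted everywhere: evict the least-loaded worker
            # (in the full technique this is where dynamic scheduling acts)
            w = min(range(n_workers), key=lambda w: len(loaded[w]))
            loaded[w].clear()
        loaded[w] |= {a, b}
        assignment[(a, b)] = w
    return assignment
```

For example, `schedule_all_to_all(6, 3, 4)` produces one worker assignment for each of the 15 comparison pairs while keeping every worker's resident set within the memory cap.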
Dynamic programming (DP) is a popular and efficient technique in many scientific applications such a...
Optimal multiple sequence alignment by dynamic programming, like many highly dimensional scientific ...
It has become common knowledge that parallel programming is needed for scientific applications, part...
This document surveys the computational strategies followed to parallelize the most used software in...
To parallelize an application program for a distributed memory architecture, we can use a precedence...
Achieving optimal throughput by extracting parallelism in behavioral synthesis often exaggerates mem...
Most static algorithms that schedule parallel programs represented by macro dataflow graphs are sequ...
Static scheduling of a program represented by a directed task graph on a multiprocessor system to mi...
This paper addresses the problem of load balancing data-parallel computations on heterogeneous and t...
Task scheduling in parallel multiple sequence alignment (MSA) through improved dynamic programming o...
Static scheduling of a program represented by a directed task graph on a multiprocessor system to mi...
In the current work, we derive a complete approach to optimization and automatic parallelization of ...
The memory usage of sparse direct solvers can be the bottleneck to solve large-scale problems....
The recent shift to multi-core computing has meant more programmers are required to write parallel p...