To achieve practical, efficient execution on a parallel architecture, knowledge of the data dependencies of the application is the key point for building an efficient schedule. By restricting accesses to shared memory, we show that such a data-dependency graph can be computed on-line on a distributed architecture. The overhead introduced is bounded with respect to the parallelism expressed by the user: each basic computation corresponds to a user-defined task, each data dependency to a user-defined data structure. We introduce a language named Athapascan-1 that allows building a graph of dependencies from a strong typing of shared-memory accesses. We detail the compilation and implementation of the language. Besides,...
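As a rough illustration of this idea (the names below are hypothetical and are not the actual Athapascan-1 interface), the following C++ sketch types each task argument by its access mode; when a task is spawned, the runtime adds an edge from the last writer of every shared object the task reads, which is enough to build the data-flow graph on-line as execution unfolds.

// Minimal sketch (hypothetical names, not the real Athapascan-1 interface):
// each shared object remembers the task that last wrote it, so spawning a
// task that reads it adds one edge to the data-flow graph.
#include <cstdio>
#include <vector>

struct Task;                       // node of the data-flow graph
static std::vector<Task*> graph;   // all spawned tasks, in program order

struct Task {
    int id;
    std::vector<int> deps;         // ids of tasks this one must wait for
    explicit Task(int i) : id(i) { graph.push_back(this); }
};

template <typename T>
struct cell {                      // a shared datum plus its last writer
    T value{};
    Task* last_writer = nullptr;
};

// Access-mode wrappers: the *type* of a parameter tells the runtime
// whether spawning the task creates a read dependency, a write, or both.
template <typename T> struct shared_r  { cell<T>* c; };
template <typename T> struct shared_rw { cell<T>* c; };

struct spawner {                   // builds the graph as tasks are spawned
    Task* t;
    explicit spawner(int id) : t(new Task(id)) {}

    template <typename T> void bind(shared_r<T> a) {
        if (a.c->last_writer) t->deps.push_back(a.c->last_writer->id);
    }
    template <typename T> void bind(shared_rw<T> a) {
        if (a.c->last_writer) t->deps.push_back(a.c->last_writer->id);
        a.c->last_writer = t;      // future readers will depend on this task
    }
    template <typename... A> Task* operator()(A... args) {
        (bind(args), ...);         // fold: record the dependencies of each argument
        return t;
    }
};

int main() {
    cell<int> x, y;
    spawner(0)(shared_rw<int>{&x});                      // task 0 writes x
    spawner(1)(shared_r<int>{&x}, shared_rw<int>{&y});   // task 1 reads x, writes y
    spawner(2)(shared_r<int>{&y});                       // task 2 reads y

    for (Task* t : graph) {
        std::printf("task %d depends on:", t->id);
        for (int d : t->deps) std::printf(" %d", d);
        std::printf("\n");
    }
}

Running this prints that task 1 depends on task 0 and task 2 on task 1, i.e. exactly the read-after-write chain induced through x and y.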
In parallel programming, the need to manage communication, load imbalance, and irregularities in th...
Running programs across multiple nodes in a cluster of networked computers, such as in a supercomput...
Parallel computers offer an interesting alternative for the applications of scientific computation, ...
The topic of this thesis is the modeling, by a data-flow graph, of any execution of a parallel app...
The topic of intermediate languages for optimizing and parallelizing compilers has received much at...
In a parallel programming environment, the load sharing module - or application level scheduler - ma...
Athapascan is a macro data-flow application programming interface (API) for asynchronous parallel pr...
Task-parallel languages are increasingly popular. Many of them provide expressive mechanisms for int...
Task graphs or dependence graphs are used in runtime systems to schedule tasks for parallel executio...
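Purely as a sketch of that mechanism (the four-task DAG below is a made-up example, not taken from the cited work), a runtime can keep, for each task, a count of unfinished predecessors and push a task onto a ready queue once that count drops to zero; idle workers would then pick tasks from that queue.

// Minimal sketch of list scheduling over a task dependence graph:
// a task becomes ready once all of its predecessors have completed.
#include <cstdio>
#include <queue>
#include <vector>

int main() {
    // Hypothetical 4-task DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
    std::vector<std::vector<int>> succ = {{1, 2}, {3}, {3}, {}};
    std::vector<int> pending(succ.size(), 0);    // unfinished predecessors
    for (const auto& s : succ)
        for (int v : s) ++pending[v];

    std::queue<int> ready;
    for (std::size_t t = 0; t < succ.size(); ++t)
        if (pending[t] == 0) ready.push(static_cast<int>(t));

    // Dispatch in dependence order; a real runtime would hand ready
    // tasks to idle workers instead of running them sequentially here.
    while (!ready.empty()) {
        int t = ready.front(); ready.pop();
        std::printf("execute task %d\n", t);
        for (int v : succ[t])
            if (--pending[v] == 0) ready.push(v);
    }
}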
To parallelize an application program for a distributed memory architecture, we can use a precedence...
Jury: Carmel, Denis (Rapporteur); Priol, Thierry (Rapporteur); Morhr, Roger (Président). In this PhD, ...
Parallel programming is hard and programmers still struggle to write code for shared memory multicor...
Data flow is a mode of parallel computation in which parallelism in a program can be exploited at th...
We describe the compilation and execution of data-parallel languages for networks of workstati...
Scientific workflows are frequently modeled as Directed Acyclic Graphs (DAGs) of tasks, which represe...