In this paper, the problem of evaluating the performance of parallel programs generated by data parallel compilers is studied. These compilers take as input an application written in a sequential language augmented with data distribution directives and produce a parallel version based on the specified partitioning of data. A methodology for evaluating the relationships among the program characteristics, the data distribution adopted, and the performance indices measured during program execution is described. It consists of three phases: a "static" description of the program under study, a "dynamic" description, based on the measurement and the analysis of its execution on a real system, and the construction...
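For concreteness, the following is a minimal sketch (not taken from the paper; a block distribution and the owner-computes rule are assumed as the typical case) of what the "specified partitioning of data" amounts to in the generated SPMD code: each process derives the index range of the distributed array it owns and restricts its computation to that slice. The names block_bounds, N, and P are hypothetical.

    /* Hedged illustration in C: block distribution of a 1-D array of n elements
       over p processes. */
    #include <stdio.h>

    /* Half-open index range [lo, hi) owned by `rank` under a block distribution. */
    static void block_bounds(int n, int p, int rank, int *lo, int *hi)
    {
        int base = n / p;   /* minimum block size                   */
        int rest = n % p;   /* the first `rest` ranks get one extra */
        *lo = rank * base + (rank < rest ? rank : rest);
        *hi = *lo + base + (rank < rest ? 1 : 0);
    }

    int main(void)
    {
        const int N = 10, P = 4;
        for (int rank = 0; rank < P; rank++) {
            int lo, hi;
            block_bounds(N, P, rank, &lo, &hi);
            /* Owner-computes rule: each process would update only this slice. */
            printf("process %d owns indices [%d, %d)\n", rank, lo, hi);
        }
        return 0;
    }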
Many problems currently require more processor throughput than can be achieved with current single-p...
For a wide variety of applications, both task and data parallelism must be exploited to achieve the ...
The dynamic evaluation of parallelizing compilers and the programs to which they are applied is a fi...
The area of parallelizing compilers for distributed memory multicomputers has seen considerable rese...
In this paper, we present the overall design of Pandore II, an environment ded...
A new approach to monitoring the runtime behaviour of parallel programs will be presented. Our appro...
Parallelization of programs for distributed memory parallel computers is always difficult because of...
Fully utilizing the potential of parallel architectures is known to be a challenging task. In the pa...
In this paper, we show some performance results from an implementation of a data-parallel programming lan...
Despite the performance potential of parallel systems, several factors have hindered their widesprea...
In this report we have described how two methods for automatically determining convenient data distribu...
Thesis (Ph.D.), University of Illinois at Urbana-Champaign, 1992, 160 pp. Distributed-memory parallel c...
In the data parallel programming style, the user usually specifies the data parallelism explici...
To support the transition from programming languages in which parallelism and communication are expl...