In high-performance computing environments, the number of available cores continues to grow. This development calls for renewed emphasis on performance (scalability) analysis and on the speedup laws proposed in the literature (e.g., Amdahl's law and Gustafson's law), with a focus on asymptotic performance. Understanding the speedup and efficiency of algorithmic parallelism serves several purposes, including optimizing system operation, predicting program execution times, analyzing asymptotic properties, and determining speedup bounds. However, the literature is fragmented and exhibits a large, heterogeneous variety of speedup models and laws. These phenomena make it challengin...
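The two laws named above can be illustrated numerically. A minimal sketch, assuming a single serial-fraction parameter s and ignoring communication and synchronization overheads (the function names are ours):

```python
def amdahl_speedup(n, s):
    """Fixed-size speedup on n processors with serial fraction s.

    Bounded above by 1/s no matter how large n grows.
    """
    return 1.0 / (s + (1.0 - s) / n)

def gustafson_speedup(n, s):
    """Scaled (fixed-time) speedup on n processors with serial fraction s.

    Grows linearly in n, since the problem size scales with n.
    """
    return n - s * (n - 1)

print(amdahl_speedup(1024, 0.05))    # ~19.64, bounded by 1/0.05 = 20
print(gustafson_speedup(1024, 0.05)) # ~972.85
```

The contrast is the asymptotic one emphasized above: for fixed problem size the speedup saturates at 1/s, whereas for scaled problem size it keeps growing with the processor count.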
Amdahl's Law states that speedup in moving from one processor to N identical processors can nev...
PhD Thesis. It is likely that many-core processor systems will continue to penetrate emerging embedde...
The aim of this discussion paper is to stimulate (or perhaps to provoke) stronger interactions amon...
The effective use of computational resources requires a good understanding of parallel architectures...
This paper studies the speedup for multi-level parallel computing. Two models of parallel speedup ar...
In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time ...
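The three models named in the entry above (fixed-size, fixed-time, and memory-bounded speedup) can be related through one formula. A hedged sketch in the common Sun–Ni style, where G(n), an assumption of this sketch, describes how the workload scales with the memory of n processors:

```python
def memory_bounded_speedup(n, s, G):
    """Memory-bounded speedup on n processors with serial fraction s.

    G is a callable: G(n) gives the workload growth factor allowed by
    the aggregate memory of n processors.
    """
    g = G(n)
    return (s + (1 - s) * g) / (s + (1 - s) * g / n)

# Special cases: G(n) = 1 recovers fixed-size (Amdahl-style) speedup,
# and G(n) = n recovers fixed-time (Gustafson-style) scaled speedup.
fixed_size = memory_bounded_speedup(64, 0.1, lambda n: 1)
fixed_time = memory_bounded_speedup(64, 0.1, lambda n: n)
```

With a superlinear G(n) (e.g., G(n) = n**1.5 for a memory-hungry kernel), the model predicts scaled speedup between, or even beyond, the two classic laws.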
An asymptotic scalability metric, called Constant-Memory-per-Processor (CMP) scalability, is present...
This paper outlines a theory of parallel algorithms that emphasizes two crucial aspects of p...
Performance analysis tools are essential to the maintenance of efficient parallel execution of scien...
We propose a new model for parallel speedup that is based on two parameters, the average parallelism...
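The two-parameter, average-parallelism view in the entry above admits simple speedup bounds. A minimal sketch of the classic bound n·A/(n + A − 1) ≤ S(n) ≤ min(n, A) associated with this line of work (the function names are ours):

```python
def speedup_lower_bound(n, A):
    """Guaranteed speedup on n processors for average parallelism A."""
    return n * A / (n + A - 1)

def speedup_upper_bound(n, A):
    """Speedup can exceed neither the processor count nor the
    program's average parallelism."""
    return min(n, A)

# Example: with average parallelism A = 8 on n = 8 processors,
# speedup is at least 64/15 (about 4.27) and at most 8.
print(speedup_lower_bound(8, 8), speedup_upper_bound(8, 8))
```

A useful consequence is the speedup–efficiency tradeoff: running n far above A wastes processors, while n far below A forfeits attainable speedup.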
Using Amdahl’s law as a metric, the authors illustrate a technique for developing efficient code on ...
As datasets continue to increase in size and multi-core computer architectures...
An important issue in the effective use of parallel processing is the estimation of the speed-up one...
Generalized speedup is defined as parallel speed over sequential speed. The generalized speedup and ...
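Generalized speedup, as defined in the entry above (parallel speed over sequential speed, where speed is work performed per unit time), can be sketched as follows (the variable names are ours):

```python
def speed(work, time):
    """Speed = work performed per unit time."""
    return work / time

def generalized_speedup(par_work, par_time, seq_work, seq_time):
    """Generalized speedup: parallel speed over sequential speed.

    Unlike traditional speedup (sequential time over parallel time),
    this remains meaningful when the parallel run solves a larger
    problem than the sequential run.
    """
    return speed(par_work, par_time) / speed(seq_work, seq_time)

# When both runs perform the same work, this reduces to the
# traditional time ratio T_seq / T_par.
print(generalized_speedup(100, 10, 100, 40))  # 4.0
```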