The aim of this discussion paper is to stimulate (or perhaps to provoke) stronger interactions among theoreticians and practitioners interested in efficient problem solutions. We emphasize computing abilities in relation to programming and specification languages, as these provide the user’s interface with the computer. Due to the breadth of the topic we mainly discuss efficiency as measured by computation time, ignoring space and other measures; and this on sequential, deterministic, individual, and fixed computers, not treating parallelism, stochastic algorithms, distributed computing, or dynamically expanding architectures.
A short elementary description of the problems of computing complexity for nonspecialists. On simple...
This paper outlines a theory of parallel algorithms that emphasizes two crucial aspects of p...
Amdahl's Law states that speedup in moving from one processor to N identical processors can nev...
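The snippet above is truncated; for reference, the standard statement of the law (not quoted from the cited paper) is that if a fraction p of a program's work can be parallelized, the speedup on N processors is

    S(N) = \frac{1}{(1-p) + p/N} \;\le\; \frac{1}{1-p}.

For example, with p = 0.95 the speedup can never exceed 1/0.05 = 20, no matter how large N is.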
The community of program optimisation and analysis, code performance evaluation, parallelisa...
The article describes various options for speeding up calculations on computer systems. These featur...
A preordering ⩽₁ for comparing the computational complexity is introduced on the class of iterative ...
The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time h...
Two “folk theorems” that permeate the parallel computation literature are reconsidered in th...
In high performance computing environments, we observe an ongoing increase in the available numbers ...
Dynamic programming is a standard technique used in optimization. It is well known that if a dynamic...
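As a minimal illustration of the memoization idea behind dynamic programming (a generic Python sketch, not drawn from the cited paper; the function fib is purely illustrative):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n: int) -> int:
        # Naive recursion recomputes the same subproblems exponentially
        # often; caching each result once makes the computation linear in n.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(90))  # 2880067194370816120, computed near-instantly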
In the problem size-ensemble size plane, fixed-sized and scaled-sized paradigms have been the subset...
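The fixed-size versus scaled-size contrast is usually summarized by Gustafson's scaled speedup; stated here for reference under the standard definitions (not quoted from the cited paper), with s the serial fraction measured on the parallel system and p = 1 - s:

    S_{\text{scaled}}(N) = s + pN = N - (N-1)\,s.

With s = 0.05 and N = 100 this gives 95.05, in contrast to the fixed-size (Amdahl) bound of 20 for the same serial fraction.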
Blum’s speedup theorem is a major theorem in computational complexity, showing the existence of com...
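For context, since the snippet is cut off, the standard formulation runs as follows: for any Blum complexity measure \Phi and any total computable function r, there is a computable 0/1-valued function f such that every program for f admits another program for f that is r-faster almost everywhere:

    \forall i\,(\varphi_i = f)\ \exists j\,(\varphi_j = f):\quad r\bigl(x, \Phi_j(x)\bigr) \le \Phi_i(x) \text{ for almost all } x.

In particular no program computing f is optimal, since the speedup can be iterated.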
In this paper we use arguments about the size of the computed functions to investigate the computati...
Computation time is an important performance metric that scientists and software engineers use to de...
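A minimal sketch of how such a timing measurement is typically taken in Python (time.perf_counter is a standard-library call; the helper time_call and the sum workload are illustrative assumptions):

    import time

    def time_call(fn, *args, repeats=5):
        # Best-of-N wall-clock time; perf_counter is monotonic and
        # offers the highest-resolution clock available.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn(*args)
            best = min(best, time.perf_counter() - start)
        return best

    print(f"sum over 10^6 ints: {time_call(sum, range(10**6)):.6f} s")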
We consider three paradigms of computation where the benefits of a parallel solution are greater tha...