Discussions about computer hardware often debate how large an increase in random access memory (RAM) is reasonable, since it is well established that extending this type of memory influences the speed of a machine. Disputes frequently arise as to whether an extension of half a gigabyte of RAM is large enough to speed up a computer significantly, given the complexity of present-day software applications. In this article, we test statistically whether such an increase produces a significant speed-up, using analysis of covariance as a suitable statistical tool.
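The abstract only names analysis of covariance; as a minimal sketch of how such a test could be set up, the following Python fragment fits an ordinary least squares model with a RAM-upgrade indicator as the treatment factor and CPU clock speed as the covariate. Everything concrete here is an assumption for illustration: the synthetic benchmark data, the 0/1 "upgraded" indicator, and the choice of clock speed as the covariate are not taken from the article.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical benchmark: runtimes of one fixed task on 40 machines,
    # half of them after a 0.5 GB RAM extension (upgraded = 1).
    rng = np.random.default_rng(0)
    n = 40
    cpu_ghz = rng.uniform(2.0, 3.0, n)    # covariate: CPU clock speed (GHz)
    upgraded = np.repeat([0, 1], n // 2)  # treatment factor: RAM extension
    # Synthetic runtimes: slower at lower clock, slightly faster when upgraded.
    runtime = 60.0 - 8.0 * cpu_ghz - 2.5 * upgraded + rng.normal(0.0, 1.5, n)
    data = pd.DataFrame({"runtime": runtime,
                         "upgraded": upgraded,
                         "cpu_ghz": cpu_ghz})

    # ANCOVA via ordinary least squares: one categorical factor plus a
    # continuous covariate in the same linear model.
    model = smf.ols("runtime ~ C(upgraded) + cpu_ghz", data=data).fit()
    print(model.summary())
    # The coefficient C(upgraded)[T.1] and its p-value indicate whether the
    # RAM extension changes runtime significantly once clock speed is held fixed.

The point of including the covariate is that it absorbs speed differences caused by hardware other than memory, so whatever effect remains on the upgrade indicator can be attributed to the RAM extension itself.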
Characterizing static random access memories (SRAMs) is difficult but necessary to understand its pro...
This paper deals with the estimation of the long-run variance of a stationary sequence. We extend th...
Many situations call for an estimation of the execution time of applications, e.g., during design or...
Measuring performance and quantifying a performance change are core evaluation techniques in program...
Most software is constructed on the assumption that the programs and data are stored in random access...
In the above report, the usage of statistical methods to predict the efficiency of the parallel a...
The problem of learning parallel computer performance is investigated in the context of multicore pr...
In the area of high performance computing and embedded systems, numerous code optimisation methods e...
Performance testing is a means used to evaluate the speed of software projects. In an ideal state, a proje...
If cluster C1 consists of computers with a faster mean speed than the computers in cluster C2, does ...
Although parallel computers have existed for many years, recently there has been a surge of academic...
We construct a two-sample test for comparison of long memory parameters based on ratios of t...
This is the nearly final version of an article presented at HCI 2012 People and Computers XXVI, an a...