This whitepaper describes the programming techniques used to develop an auto-tuning compression scheme for sparse matrices, aimed at accelerating matrix-vector multiplication and minimizing its energy footprint, as well as a method for extracting a power profile from a corresponding implementation of the conjugate gradient method. Using two example systems, we show how these techniques can automatically detect a non-trivial local optimum in the execution-parameter space, suggesting that it is feasible to integrate energy-efficiency evaluation into the automatic tuning process.
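The kernel at the heart of such a compression scheme is sparse matrix-vector multiplication. As a minimal sketch, assuming the common CSR layout (the names csr_matvec, indptr, indices, and data are illustrative, not taken from the whitepaper), the baseline kernel that auto-tuned formats compete against looks like:

```python
# Hedged sketch: a plain CSR (Compressed Sparse Row) matrix-vector
# product. Auto-tuned compressed formats aim to beat this baseline in
# time and energy; this is not the whitepaper's implementation.
def csr_matvec(indptr, indices, data, x):
    """Compute y = A @ x for A stored in CSR form."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):                      # loop over rows
        acc = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]  # gather from x by column index
        y[i] = acc
    return y

# 2x2 example: A = [[4, 1], [1, 3]], x = [1, 2] -> y = [6, 7]
print(csr_matvec([0, 2, 4], [0, 1, 0, 1], [4.0, 1.0, 1.0, 3.0], [1.0, 2.0]))
```

The irregular, indirect access to x (via indices) is what makes the kernel memory-bound and sensitive to the matrix structure, which is why format selection is worth tuning per matrix.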
We present a method for automatically selecting optimal implementations of spa...
Sparse matrix representations are ubiquitous in computational science and machine learning, leading ...
The paper deals with the energy consumption evaluation of selected Sparse and Dense BLAS Level 1, 2 ...
This work is a continuation and augmentation of previous energy studies of Compressed Sparse eXtended...
Many applications in scientific computing process very large sparse matrices o...
This whitepaper focuses on the study of the conjugate gradient method and how storage formats for sp...
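For reference in the discussion of the conjugate gradient method, the textbook CG loop below is a hedged sketch on a dense SPD matrix; conjugate_gradient and its parameters are illustrative and not the whitepaper's instrumented implementation. In practice the matrix-vector product inside the loop would use one of the sparse storage formats under study.

```python
# Hedged sketch of the conjugate gradient iteration; the SpMV on the
# first line of the loop dominates runtime and energy, so the choice of
# sparse storage format directly shapes the solver's power profile.
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                        # residual r = b - A x, with x = 0
    p = r[:]                        # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:     # converged
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# SPD system: A = [[4, 1], [1, 3]], b = [1, 2] -> x = [1/11, 7/11]
print(conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))
```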
Technology scaling trends have enabled the exponential growth of computing power. However, the perfo...
In high-performance computing, excellent node-level performance is required for the efficient use of...
Multiphysics simulations are at the core of modern Computer Aided Engineering (CAE) allowing the ana...
Several applications in numerical scientific computing involve very large spar...
Sparse kernel performance depends on both the matrix and the hardware platform. Challenges in tuning s...
There are at least three implications of this work. First, sparse A^T A x should be a basic primitive ...
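The point of treating sparse A^T A x as a primitive is that it can be computed as A^T (A x) in a single pass over the rows of A, reusing each row twice, without materializing the typically much denser product A^T A. The sketch below illustrates the idea for a CSR matrix (csr_atax and its arguments are assumed names for illustration):

```python
# Hedged sketch: y = A^T (A x) fused over one pass of A's rows in CSR
# form. Each row is read once and used for both the inner product
# (A x)_i and the scatter back into y, so A^T A is never formed.
def csr_atax(indptr, indices, data, x, ncols):
    """Compute y = A^T (A x) for A stored in CSR form."""
    y = [0.0] * ncols
    nrows = len(indptr) - 1
    for i in range(nrows):
        lo, hi = indptr[i], indptr[i + 1]
        t = 0.0
        for k in range(lo, hi):             # t = (A x)_i
            t += data[k] * x[indices[k]]
        for k in range(lo, hi):             # y += t * (row i of A)
            y[indices[k]] += data[k] * t
    return y

# A = [[1, 2], [3, 4]], x = [1, 1]: A x = [3, 7], A^T (A x) = [24, 34]
print(csr_atax([0, 2, 4], [0, 1, 0, 1], [1.0, 2.0, 3.0, 4.0], [1.0, 1.0], 2))
```

Fusing the two products halves the traffic on A relative to calling SpMV twice (once with A, once with a separately stored A^T), which is why it pays off as a tuned primitive on memory-bound hardware.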