In previous work, cache blocking of sparse matrix-vector multiplication was found to yield significant performance improvements (up to 700% on some matrix and platform combinations); however, deciding when to apply the optimization is a non-trivial problem. This paper applies four statistical learning techniques to this classification problem: naive Bayes classifiers, logistic regression, support vector machines with linear kernels, and support vector machines with polynomial kernels. The results show that support vector machines with polynomial kernels yield the best results. The paper also reasons about the distribution of the data from the differences in accuracy of the vario...
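To make the classification task concrete, the sketch below implements a toy Gaussian naive Bayes classifier, one of the four techniques the abstract compares. The feature and labels are hypothetical (a made-up per-matrix density feature and a "did blocking help" label); this is an illustrative example, not the paper's actual experimental pipeline.

```python
import math

def fit_gaussian_nb(samples):
    """samples: {label: [feature values]} -> {label: (mean, var, prior)}."""
    total = sum(len(xs) for xs in samples.values())
    model = {}
    for label, xs in samples.items():
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        model[label] = (mean, var, len(xs) / total)
    return model

def predict(model, x):
    """Return the label with the highest Gaussian log-posterior for x."""
    def log_post(mean, var, prior):
        return (math.log(prior)
                - 0.5 * math.log(2 * math.pi * var)
                - (x - mean) ** 2 / (2 * var))
    return max(model, key=lambda lbl: log_post(*model[lbl]))

# Hypothetical training data: a single density-like feature per matrix,
# labeled by whether cache blocking helped on that matrix/platform pair.
train = {"block": [0.8, 0.9, 0.7], "no_block": [0.1, 0.2, 0.15]}
model = fit_gaussian_nb(train)
print(predict(model, 0.75))  # classifies the new point as "block"
```

In the paper's setting the same fit/predict interface would be driven by real matrix and platform features, and the four classifiers would be compared on held-out accuracy.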
A probabilistic model to estimate the number of misses on a set associative cache with an LRU replac...
While there are many studies on the locality of dense codes, few deal with the locality of sparse co...
Many data mining algorithms rely on eigenvalue computations or iterative linear solvers in which the...
We present new performance models and a new, more compact data structure for cache blocking when ap...
We consider the problem of building high-performance implementations of sparse matrix-vector multipl...
We present new performance models and more compact data structures for cache blocking when...
Algorithms for the sparse matrix-vector multiplication (shortly SpMxV) are important building blocks...
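Since several of these abstracts center on the SpMxV kernel, a minimal sketch of it in the standard compressed sparse row (CSR) layout may help; the matrix values here are illustrative, and real kernels add the blocking and reordering optimizations these papers study.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form:
    values  - nonzero entries, row by row
    col_idx - column index of each nonzero
    row_ptr - start offset of each row in values (length n_rows + 1)
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Example: the 3x3 matrix [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

The irregular, column-index-driven accesses to `x` are exactly what makes the cache behavior of this kernel hard to predict and what cache blocking tries to improve.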
In this thesis we introduce a cost measure to compare the cache- friendliness of different permutati...
In this article, we introduce a cache-oblivious method for sparse matrix–vector multiplication. Our ...
Many scientific applications handle compressed sparse matrices. Cache behavior during the executio...
International audienceWe present a method for automatically selecting optimal implementations of spa...