Since Machine Learning (ML) techniques nowadays generate huge data collections, the problem of how to efficiently engineer their storage and operations is becoming of paramount importance. In this article we propose a new lossless compression scheme for real-valued matrices which achieves efficient performance in terms of compression ratio and time for linear-algebra operations. Experiments show that, as a compressor, our tool is clearly superior to gzip and is usually within 20% of xz in terms of compression ratio. In addition, our compressed format supports matrix-vector multiplications in time and space proportional to the size of the compressed representation, unlike gzip and xz, which require the full decompression of ...
The last few years have seen an exponential increase, driven by many disparate fields such as big da...
We describe a block-sorting, lossless data compression algorithm, and our implementation of that alg...
The compression-complexity trade-off of lossy compression algorithms that are based on a random code...
This work is comprised of two different projects in numerical linear algebra. The first project is a...
In this thesis we seek to make advances towards the goal of effective learned compression. This enta...
The biggest cost of computing with large matrices in any modern computer is related to memory latenc...
A central concern for many learning algorithms is how to efficiently store what the algorithm has le...
19 April 2022 A Correction to this paper has been published: https://doi.org/10.1007/s42979-022-011...
In this paper we investigate the execution of Ab and A^T b, where A is a sparse matrix and b a dense...
Abstract—Sparse matrix-vector multiplication (SpM×V) has been characterized as one of the most signi...
When pairwise dot products are computed between input embedding vectors and the dot product is used ...
In edge computing, suppressing data size is a challenge for machine learning models that perform com...
In this dissertation we have identified vector processing shortcomings related to the efficient stor...
We examine the compression-complexity trade-off of lossy compression algorithms that are based on a ...
In this article, we introduce a cache-oblivious method for sparse matrix–vector multiplication. Our ...
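Several of the abstracts above (sparse matrix-vector multiplication, cache-oblivious SpMV, computing Ab and A^T b) revolve around the same core kernel. As a minimal sketch of what that kernel looks like, here is sparse matrix-vector multiplication over the standard CSR (compressed sparse row) layout; the example matrix and the function name `csr_matvec` are illustrative assumptions, not taken from any of the cited works.

```python
def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form.

    data    -- non-zero values, row by row
    indices -- column index of each non-zero
    indptr  -- indptr[i]:indptr[i+1] delimits the non-zeros of row i
    """
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Accumulate the dot product of row i with x, touching
        # only the stored non-zeros of that row.
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Illustrative 3x3 matrix:  [[1, 0, 2],
#                            [0, 3, 0],
#                            [4, 0, 5]]
data = [1.0, 2.0, 3.0, 4.0, 5.0]
indices = [0, 2, 1, 0, 2]
indptr = [0, 2, 3, 5]
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The CSR layout is what makes the time proportional to the number of non-zeros rather than to the full matrix size, which is the baseline the cache-oblivious and compressed-format approaches above aim to improve on.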