Low-rank approximations are essential in modern data science. The interpolative decomposition provides one such approximation. Its distinguishing feature is that it reuses columns from the original matrix, which preserves matrix properties such as sparsity and non-negativity and also saves memory. In this work, we introduce two optimized algorithms to construct an interpolative decomposition, along with numerical evidence that they outperform the current state of the art.
Comment: Disclaimer: we do not have any experiments on very large matrices, so these findings are only conclusive for relatively small matrices.
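Since the abstract does not show the algorithms themselves, here is a minimal sketch of one classical way to build a column interpolative decomposition, via column-pivoted QR; the helper name `column_id` and all sizes are illustrative assumptions, not the paper's optimized method.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def column_id(A, k):
    """Rank-k column interpolative decomposition via column-pivoted QR.

    Returns indices `cols` and coefficients `Z` with A ~= A[:, cols] @ Z,
    where Z contains a k-by-k identity block, so the chosen columns of A
    appear verbatim in the approximation.
    """
    # Column-pivoted QR: A[:, piv] = Q @ R with decreasing |R[j, j]|.
    _, R, piv = qr(A, mode="economic", pivoting=True)
    cols = piv[:k]
    # Express the remaining pivoted columns in terms of the first k:
    # R[:k, k:] = R[:k, :k] @ T  =>  T = R[:k, :k]^{-1} @ R[:k, k:].
    T = solve_triangular(R[:k, :k], R[:k, k:], lower=False)
    # Scatter [I | T] back into the original column order.
    Z = np.empty((k, A.shape[1]))
    Z[:, piv] = np.hstack([np.eye(k), T])
    return cols, Z

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 60))  # rank 8
cols, Z = column_id(A, 8)
print(np.linalg.norm(A - A[:, cols] @ Z))  # tiny: exact up to roundoff
```

Because `Z` carries an identity block on the selected columns, the approximation inherits sparsity and non-negativity from those columns, which is the property the abstract highlights.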
We consider the following fundamental problem: given a matrix that is the sum of an unknown sparse m...
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-ra...
It is known that the decomposition into low-rank and sparse matrices (\textbf{L+S} for short) can be a...
In this paper, we introduce a probabilistic model for learning interpolative decomposition (ID), whi...
A common approach for compressing large-scale data is through matrix sketching. In this work, we con...
Despite the prominence of neural network approaches in the field of recommender systems, simple meth...
In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern t...
In this thesis, we investigate how well we can reconstruct the best rank-k approximation of a large ...
Low-rank approximations which are computed from selected rows and columns of a given data matrix hav...
In undergraduate numerical mathematics courses I was strongly warned that inverting a matrix for co...
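For context on that warning, a minimal numpy illustration of my own (not taken from the abstract's source): solving through a factorization rather than an explicit inverse is cheaper and typically more accurate.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)

x_solve = np.linalg.solve(A, b)  # LU-based solve: ~(2/3) n^3 flops
x_inv = np.linalg.inv(A) @ b     # explicit inverse: ~2 n^3 flops, then a matvec

# Residuals; the accuracy gap widens as A becomes ill-conditioned.
print(np.linalg.norm(A @ x_solve - b))
print(np.linalg.norm(A @ x_inv - b))
```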
The era of huge data necessitates highly efficient machine learning algorithms. Many common machine ...
Matrices of huge size and low rank are encountered in real-world applications where large s...
The discrete empirical interpolation method (DEIM) may be used as an index selection strategy for fo...
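For readers unfamiliar with DEIM, here is a minimal sketch of the standard greedy index selection of Chaturantabut and Sorensen applied to an orthonormal basis; the function name and example setup are illustrative assumptions, not this paper's specific variant.

```python
import numpy as np

def deim_indices(U):
    """Standard greedy DEIM selection from an orthonormal basis U (n x k)."""
    k = U.shape[1]
    # Start with the largest-magnitude entry of the first basis vector.
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        # Interpolate u_j through the rows selected so far ...
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        # ... then pick the row where the interpolation residual peaks.
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Example: select 10 rows from the left singular vectors of a random matrix.
rng = np.random.default_rng(3)
U, _, _ = np.linalg.svd(rng.standard_normal((200, 10)), full_matrices=False)
print(deim_indices(U))
```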
This thesis is focused on using low-rank matrices in numerical mathematics. We introduce conjugate g...
LU and Cholesky matrix factorization algorithms are core subroutines used to solve systems of linear...
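For reference, a minimal scipy sketch of my own (not the paper's algorithms) showing how the two factorizations are typically used as solve subroutines, with Cholesky reserved for symmetric positive definite systems at roughly half the cost of LU.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

rng = np.random.default_rng(2)
A = rng.standard_normal((300, 300))
S = A @ A.T + 300 * np.eye(300)  # symmetric positive definite
b = rng.standard_normal(300)

lu, piv = lu_factor(A)           # general square system via PA = LU
x = lu_solve((lu, piv), b)

c, low = cho_factor(S)           # SPD system via S = L L^T
y = cho_solve((c, low), b)

print(np.linalg.norm(A @ x - b), np.linalg.norm(S @ y - b))
```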