We consider learning the principal subspace of a large set of vectors from an extremely small number of compressive measurements of each vector. Our theoretical results show that even a constant number of measurements per vector suffices to approximate the principal subspace to arbitrary precision, provided that the number of vectors is large. This result is achieved by a simple algorithm that computes the eigenvectors of an estimate of the covariance matrix. The main insight is to exploit an averaging effect that arises from applying a different random projection to each vector. We present simulations confirming our theoretical results.
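As a concrete illustration of the approach the abstract describes, below is a minimal NumPy sketch: each vector is observed through its own Gaussian random projection, a de-biased rank-one term is averaged into a covariance estimate, and the principal subspace is read off from the top eigenvectors. The specifics are assumptions for illustration, not taken from the paper: measurement matrices with i.i.d. N(0,1) entries, the de-biasing constants m(m+1) and the ||y||² correction (which follow from standard Wishart moment identities, E[WMW] = m(m+1)M + m tr(M) I for W = AᵀA and symmetric M), and all problem dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not values from the paper):
d, n, m, k = 100, 5000, 5, 2   # ambient dim, #vectors, measurements per vector, subspace dim

# Ground-truth data lying in a k-dimensional subspace.
U_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
X = U_true @ rng.standard_normal((k, n))   # columns are the unobserved vectors

# Accumulate a de-biased covariance estimate from compressive measurements,
# drawing a fresh Gaussian projection A_i for each vector.
Sigma_hat = np.zeros((d, d))
for i in range(n):
    A = rng.standard_normal((m, d))        # i.i.d. N(0,1) entries (assumed model)
    y = A @ X[:, i]                        # the only data the algorithm sees
    v = A.T @ y
    # For W = A.T @ A:  E[v v.T] = E[W x x.T W] = m(m+1) x x.T + m ||x||^2 I,
    # and E[||y||^2] = m ||x||^2, so subtracting ||y||^2 I removes the bias.
    Sigma_hat += (np.outer(v, v) - np.dot(y, y) * np.eye(d)) / (m * (m + 1))
Sigma_hat /= n

# Principal subspace estimate = top-k eigenvectors of the covariance estimate.
eigvals, eigvecs = np.linalg.eigh(Sigma_hat)   # ascending eigenvalues
U_hat = eigvecs[:, -k:]

# Subspace error as the spectral norm of the projector difference (0 = exact).
err = np.linalg.norm(U_true @ U_true.T - U_hat @ U_hat.T, 2)
print(f"subspace error: {err:.3f}")
```

Rerunning the sketch with larger n (and m held at a small constant) drives the subspace error toward zero, which is exactly the averaging effect across distinct random projections that the abstract highlights.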