ABSTRACT. We reconsider randomized algorithms for the low-rank approximation of symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel matrices that arise in data analysis and machine learning applications. Our main results consist of an empirical evaluation of the performance quality and running time of sampling and projection methods on a diverse suite of SPSD matrices. Our results highlight complementary aspects of sampling versus projection methods; they characterize the effects of common data preprocessing steps on the performance of these algorithms; and they point to important differences between uniform sampling and nonuniform sampling methods based on leverage scores. In addition, our empirical results illus...
In many areas of machine learning, it becomes necessary to find the eigenvector decompositions of la...
Abstract. A classical problem in matrix computations is the efficient and reliable approximation of ...
Leverage score sampling provides an appealing way to perform approximate computations for large matr...
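As a companion to the abstract above, here is a minimal numpy sketch of leverage score sampling for a tall matrix; the matrix `A`, the sample size `s`, and the rescaling convention are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

# Hypothetical example: sample rows of a tall matrix A with probabilities
# proportional to their leverage scores. The leverage score of row i is the
# squared Euclidean norm of the i-th row of U, where A = U S Vt (thin SVD).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))

U, _, _ = np.linalg.svd(A, full_matrices=False)
leverage = np.sum(U**2, axis=1)       # one score per row; scores sum to rank(A)
probs = leverage / leverage.sum()     # sampling distribution over rows

s = 20                                # number of sampled rows (assumed)
idx = rng.choice(A.shape[0], size=s, replace=True, p=probs)
S = A[idx] / np.sqrt(s * probs[idx])[:, None]   # rescale so E[S.T @ S] = A.T @ A
```

The rescaling by `1/sqrt(s * p_i)` is the standard choice that makes the sampled sketch an unbiased estimator of `A.T @ A`.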
Positive semidefinite matrices arise in a variety of fields, including statistics, signal processing...
Positive semi-definite matrices commonly occur as normal matrices of least squares problems in stati...
Abstract—We develop two approaches for analyzing the approximation error bound for the Nyström met...
Low-rank matrix approximation is an effective tool in alleviating the memory and computational burde...
In this thesis, we investigate how well we can reconstruct the best rank-k approximation of a large ...
Pervasive and networked computers have dramatically reduced the cost of collecting and distributing ...
We follow a learning theory viewpoint to study a family of learning schemes for regression related t...
This book presents a unified theory of random matrices for applications in machine learning, offerin...
One approach to improving the running time of kernel-based machine learning methods is to build a sm...
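The sketch-based idea described above can be illustrated with a bare-bones Nyström approximation in numpy; the kernel, data, and sample size below are made-up assumptions for demonstration, not details of the cited work:

```python
import numpy as np

# Hypothetical sketch of the Nyström approximation K ≈ C @ pinv(W) @ C.T,
# where C holds s sampled columns of the kernel matrix K and W is the
# corresponding s-by-s block at the intersection of those rows and columns.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
sqdist = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)
K = np.exp(-0.5 * sqdist)             # RBF kernel matrix (assumed bandwidth)

s = 50                                # number of sampled columns (assumed)
idx = rng.choice(K.shape[0], size=s, replace=False)   # uniform sampling
C = K[:, idx]
W = K[np.ix_(idx, idx)]
K_nys = C @ np.linalg.pinv(W) @ C.T   # rank-(at most s) approximation of K

err = np.linalg.norm(K - K_nys, "fro") / np.linalg.norm(K, "fro")
```

Only the `n x s` block `C` and the small block `W` ever need to be formed, which is the source of the memory and running-time savings.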
This survey describes probabilistic algorithms for linear algebraic computations, such as factorizin...
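A minimal numpy sketch of one such probabilistic algorithm, the randomized range finder used to compute an approximate truncated SVD (the matrix, target rank, and oversampling parameter are illustrative assumptions):

```python
import numpy as np

# Hypothetical sketch of a randomized SVD: multiply A by a Gaussian test
# matrix, orthonormalize the result to get a basis Q for the approximate
# range of A, then factor the small matrix Q.T @ A.
rng = np.random.default_rng(2)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # rank <= 40

k, p = 40, 10                          # target rank and oversampling (assumed)
Omega = rng.standard_normal((A.shape[1], k + p))
Q, _ = np.linalg.qr(A @ Omega)         # orthonormal basis, shape (500, k + p)
B = Q.T @ A                            # small (k + p) x 300 matrix
Ub, svals, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ Ub                             # approximate left singular vectors of A

err = np.linalg.norm(A - U @ np.diag(svals) @ Vt) / np.linalg.norm(A)
```

Because `A` here has exact rank at most 40 and the sketch dimension `k + p = 50` exceeds it, the reconstruction error is at the level of floating-point roundoff.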
We live in an age of big data. Analyzing modern data sets can be very difficult because they usually...