We provide fast algorithms for overconstrained ℓ_p regression and related problems: for an n × d input matrix A and vector b ∈ R^n, in O(nd log n) time we reduce the problem min_{x ∈ R^d} ‖Ax − b‖_p to the same problem with input matrix Ã of dimension s × d and corresponding b̃ of dimension s × 1. Here, Ã and b̃ are a coreset for the problem, consisting of sampled and rescaled rows of A and b; and s is independent of n and polynomial in d. Our results improve on the best previous algorithms when n ≫ d, for all p ∈ [1, ∞) except p = 2; in particular, they improve the O(nd^{1.376+}) running time of Sohler and Woodruff (STOC, 2011) for p = 1, which uses asymptotically fast matrix multiplication, and the O(nd^5 log n) time of Dasgupta et al. (SICOMP, 20...
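The sample-and-rescale reduction described above can be illustrated with a short sketch. The code below is a hedged illustration under simplifying assumptions, not the paper's algorithm: the per-row importance scores are a naive stand-in (row ℓ_p norms of [A b]), whereas the paper derives its sampling probabilities from a well-conditioned basis, which is what yields the O(nd log n) reduction time and the poly(d) coreset size; the reduced problem is then handed to a generic off-the-shelf solver. The function names, the score formula, and the defaults for s and p are illustrative assumptions, not anything specified in the text.

import numpy as np
from scipy.optimize import minimize


def lp_regression(A, b, p=1):
    """Minimize ||Ax - b||_p with a generic derivative-free solver (illustration only)."""
    d = A.shape[1]
    objective = lambda x: np.sum(np.abs(A @ x - b) ** p)
    return minimize(objective, np.zeros(d), method="Powell").x


def sampled_lp_regression(A, b, p=1, s=500, seed=0):
    """Sample-and-rescale reduction: build a small (A~, b~) and solve on it."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Naive per-row importance scores (row p-norms of [A b]); the paper instead
    # derives sampling probabilities from a well-conditioned basis, which is what
    # gives its coreset-size and accuracy guarantees.
    scores = np.sum(np.abs(np.column_stack([A, b])) ** p, axis=1)
    probs = np.minimum(1.0, s * scores / scores.sum())
    keep = rng.random(n) < probs
    # Rescale kept rows by prob^(-1/p) so the sampled objective is unbiased:
    # E[ ||A~ x - b~||_p^p ] = ||A x - b||_p^p for every fixed x.
    w = probs[keep] ** (-1.0 / p)
    A_tilde, b_tilde = w[:, None] * A[keep], w * b[keep]
    return lp_regression(A_tilde, b_tilde, p)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((20000, 5))
    b = A @ rng.standard_normal(5) + 0.1 * rng.standard_laplace(20000)
    x_full = lp_regression(A, b, p=1)            # solve on all n rows
    x_small = sampled_lp_regression(A, b, p=1)   # solve on the sampled rows only
    print(np.abs(A @ x_full - b).sum(), np.abs(A @ x_small - b).sum())

With this kind of Poisson sampling, the rescaled small objective is an unbiased estimate of ‖Ax − b‖_1 for every fixed x, so for large enough s the solution computed on (Ã, b̃) should have cost close to that of the full solution, which is what the final print statement compares.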
Oblivious low-distortion subspace embeddings are a crucial building block for numerical linear algeb...
Algorithms such as Least Median of Squares (LMedS) and Random Sample Consensus (RANSAC) have been v...
A central problem in approximation theory is the concise representation of functions. Given a functi...
Motivated by the desire to extend fast randomized techniques to nonlinear ℓ_p regression, we conside...
Minimization of the L∞ norm, which can be viewed as approximately solving the non-convex least media...
Sketching has emerged as a powerful technique for speeding up problems in numerical linear algebra, ...
We study the problem of distribution-to-real regression, where one aims to regress a mapping f that...
The $\ell_p$-norm regression problem is a classic problem in optimization with wide ranging applicat...
Large datasets upon which classical statistical analysis cannot be performed because of the curse of...
In this paper, we present novel constructions of matrices with the restricted isometry property (RIP...
We address the problem of fast estimation of ordinary least squares (OLS) from large amounts of data...
Low-distortion embeddings are critical building blocks for developing random sampling and random pro...