We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k \ll n$, which minimizes a convex and smooth loss. Such problems generalize the fundamental task of principal component analysis (PCA) to include robust and sparse counterparts, as well as logistic PCA for binary data, among others. While this problem is not convex, it admits natural algorithms with highly efficient iterations and low memory requirements, which is highly desirable in high-dimensional regimes; however, establishing their fast convergence to a globally optimal solution is difficult. On the other hand, there exists a simple convex relaxation for which convergence to the global optimum is straightforward; however, the corresponding algorithms are not e...
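A minimal formalization may help fix ideas (our notation, not the abstract's; the Fantope relaxation below is one standard choice and is an assumption on our part): identify a $k$-dimensional subspace with its orthogonal projection matrix $P = YY^\top$, where $Y \in \mathbb{R}^{n \times k}$ has orthonormal columns, so the problem reads
\[
\min_{Y \in \mathbb{R}^{n \times k},\; Y^\top Y = I_k} f\big(YY^\top\big),
\]
with $f$ convex and smooth. The feasible set, the rank-$k$ orthogonal projection matrices, is nonconvex; its convex hull is the Fantope $\mathcal{F}_k = \{X \in \mathbb{S}^n : 0 \preceq X \preceq I,\ \operatorname{tr}(X) = k\}$, which yields a simple convex relaxation:
\[
\min_{X \in \mathcal{F}_k} f(X).
\]
For standard PCA with sample covariance $\Sigma$, taking $f(X) = -\operatorname{tr}(\Sigma X)$ recovers the top-$k$ principal subspace.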