We provide guarantees for learning latent variable models, with an emphasis on the overcomplete regime, where the dimensionality of the latent space can exceed the observed dimensionality. In particular, we consider multiview mixtures, spherical Gaussian mixtures, ICA, and sparse coding models. We provide tight concentration bounds for empirical moments through novel covering arguments, and we analyze parameter recovery via a simple tensor power update algorithm. In the semi-supervised setting, we exploit label or prior information to obtain a rough estimate of the model parameters, and then refine it using the tensor method on unlabeled samples. We establish that learning is possible when the number of components scales as $k=o(d^{p/2})$, where...
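The tensor power update mentioned above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: it runs the rank-1 update $v \leftarrow T(I, v, v) / \|T(I, v, v)\|$ on a symmetric third-order tensor with orthogonal components, where convergence to a single component is well understood.

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """One run of the rank-1 power update v <- T(I, v, v) / ||T(I, v, v)||
    for a symmetric third-order tensor T of shape (d, d, d)."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        # Contract T with v along the last two modes:
        # (T(I, v, v))_i = sum_{j,k} T_{ijk} v_j v_k
        w = np.einsum('ijk,j,k->i', T, v, v)
        v = w / np.linalg.norm(w)
    # Recover the component weight as lambda = T(v, v, v).
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)
    return lam, v

# Toy orthogonal rank-2 tensor T = 2*e1^{(x)3} + 1*e2^{(x)3} in R^3.
d = 3
e1, e2 = np.eye(d)[0], np.eye(d)[1]
T = (2.0 * np.einsum('i,j,k->ijk', e1, e1, e1)
     + 1.0 * np.einsum('i,j,k->ijk', e2, e2, e2))
lam, v = tensor_power_iteration(T)
```

From a random start the iterate converges to one of the true components, so `lam` comes out near 2 or 1 and `v` near the corresponding basis vector; in the overcomplete (non-orthogonal) regime analyzed in the paper, the dynamics are more delicate and this plain update needs the initialization and deflation safeguards the paper develops.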
In this paper we show that very large mixtures of Gaussians are efficiently learnable in high dimens...
Spectral methods have greatly advanced the estimation of latent variable models, generating a seque...
A simple alternating rank-1 update procedure is considered for CP tensor decomposition. Loc...
We present a novel analysis of the dynamics of tensor power iterations in the overcomplete regime wh...
This note is a short version of that in [1]. It is intended as a survey for the 2015 Algorithmic Lea...
In the last decade, machine learning algorithms have been substantially developed and they have gain...
This work considers a computationally and statistically efficient parameter estimation method for a ...
Unsupervised learning aims at the discovery of hidden structure that drives the observations in the ...
In a latent variable model, an overcomplete representation is one in which the number of latent vari...
In an unsupervised learning problem, one is given an unlabelled dataset and hopes to find some hidde...
We propose a new algorithm for tensor decomposition, based on \algname~algorithm, and apply our new ...