An expectation-maximization algorithm for learning sparse and overcomplete data representations is presented. The proposed algorithm exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian. A rigorous lower bound on the sparse prior distribution is derived, which enables the analytic marginalization of a lower bound on the data likelihood. This bound, in turn, yields an expectation-maximization algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients.
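As a rough illustration of the approach this abstract describes, the sketch below implements a variational EM loop of that general shape: the Laplacian prior $p(s) \propto \exp(-|s|)$ is lower-bounded through $|s| \le s^2/(2\xi) + \xi/2$, which makes the bounded prior Gaussian given the variational parameters $\xi$, so the E-step posterior over the coefficients is Gaussian and the M-step basis update is closed-form. The function name, noise model, and all defaults are our own assumptions, not details taken from the paper.

```python
import numpy as np

def variational_em_sparse_coding(X, n_atoms, n_iters=50, noise_var=0.1, seed=0):
    """Minimal sketch of variational EM for sparse coding under a Laplacian
    prior p(s) proportional to exp(-|s|), using the quadratic bound
    |s| <= s^2/(2 xi) + xi/2. Given xi, the bounded prior is Gaussian, so the
    E-step posterior over coefficients is Gaussian in closed form.
    Illustrative names and defaults, not taken from the cited paper."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    A = rng.standard_normal((d, n_atoms))   # overcomplete basis: n_atoms > d
    A /= np.linalg.norm(A, axis=0)
    Xi = np.ones((n_atoms, n))              # one variational parameter per coefficient

    for _ in range(n_iters):
        M = np.zeros((n_atoms, n))          # posterior means of the coefficients
        SS = np.zeros((n_atoms, n_atoms))   # accumulated E[s s^T]
        AtA = A.T @ A
        for i in range(n):
            # E-step: Gaussian posterior under the bounded prior N(0, diag(xi)).
            prec = AtA / noise_var + np.diag(1.0 / Xi[:, i])
            cov = np.linalg.inv(prec)
            M[:, i] = cov @ (A.T @ X[:, i]) / noise_var
            SS += np.outer(M[:, i], M[:, i]) + cov
            # Tighten the bound: the optimal xi_j equals sqrt(E[s_j^2]).
            Xi[:, i] = np.sqrt(M[:, i] ** 2 + np.diag(cov))
        # M-step: closed-form basis update from the posterior moments.
        A = X @ M.T @ np.linalg.pinv(SS)
        A /= np.linalg.norm(A, axis=0)
    return A, M
```

Retightening the bound at $\xi_j = \sqrt{E[s_j^2]}$ after each E-step is what keeps the Gaussian surrogate a valid lower bound on the likelihood across iterations.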
We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomple...
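The abstract above is truncated, but blocked Gibbs sampling for sparse overcomplete models is commonly formulated with a spike-and-slab prior. The sketch below shows one generic sweep of such a sampler: each indicator $z_j$ is resampled with its weight collapsed out, and the active weights are then resampled jointly as a single Gaussian block. The model, prior, and blocking scheme here are assumptions for illustration, not the cited paper's specific strategy.

```python
import numpy as np

def blocked_gibbs_sweep(x, A, z, w, pi=0.1, sw2=1.0, noise_var=0.1, rng=None):
    """One sweep of a generic blocked Gibbs sampler for a spike-and-slab
    sparse-coding model x = A (z * w) + eps, with z_j ~ Bernoulli(pi) and
    w_j ~ N(0, sw2). A sketch of the technique class only; the concrete
    model and blocking are assumptions, not the cited paper's."""
    rng = rng or np.random.default_rng()
    for j in range(A.shape[1]):
        # Residual with atom j switched off.
        s = z * w
        s[j] = 0.0
        r = x - A @ s
        a = A[:, j]
        aa = a @ a
        # Collapse w_j: under z_j = 1, r ~ N(0, noise_var*I + sw2 * a a^T);
        # the rank-one Gaussian marginal gives the log odds below.
        v = noise_var / sw2 + aa
        log_odds = (np.log(pi) - np.log1p(-pi)
                    - 0.5 * np.log1p(sw2 * aa / noise_var)
                    + 0.5 * (a @ r) ** 2 / (noise_var * v))
        z[j] = rng.random() < 1.0 / (1.0 + np.exp(-log_odds))
    # Block move: resample all active weights jointly from their Gaussian
    # conditional, which mixes faster than one-at-a-time w updates.
    S = np.flatnonzero(z)
    if S.size:
        As = A[:, S]
        cov = np.linalg.inv(As.T @ As / noise_var + np.eye(S.size) / sw2)
        mean = cov @ (As.T @ x) / noise_var
        w[S] = rng.multivariate_normal(mean, cov)
    return z, w
```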
Many machine learning problems deal with the estimation of conditional probabilities $p(y \mid x)$ f...
In a latent variable model, an overcomplete representation is one in which the number of latent vari...
In this paper we address the problem of sparse representation (SR) within a Ba...
We consider the problem of learning a low-dimensional signal model from a collection of training sam...
We consider the problem of enforcing a sparsity prior in underdetermined linear problems, which is a...
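Though the abstract above is cut off, enforcing a sparsity prior in an underdetermined system $x = As$ is classically posed as $\ell_1$-penalized least squares, the MAP problem a Laplacian prior induces. As a generic point of reference, here is a minimal iterative soft-thresholding (ISTA) solver; it is not claimed to be the cited paper's method.

```python
import numpy as np

def ista(A, x, lam=0.1, n_iters=200):
    """ISTA for min_s 0.5*||x - A s||^2 + lam*||s||_1, a generic reference
    solver for sparsity-constrained underdetermined linear problems."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = A.T @ (A @ s - x)              # gradient of the quadratic term
        u = s - g / L
        s = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft-threshold
    return s
```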
Given a redundant dictionary of basis vectors (or atoms), our goal is to find maximally sparse repre...
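For context on the "maximally sparse representation" problem this abstract poses, a standard greedy baseline is orthogonal matching pursuit. The sketch below is a generic OMP implementation, shown only as an illustration; the cited paper's own algorithm may differ.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: greedily build a sparse representation of x
    over a redundant dictionary D (columns = atoms). A generic baseline for
    approximating the maximally sparse solution."""
    residual = x.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms by least squares and update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s
```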
Variational inference is one of the tools that now lies at the heart of the modern data analysis lif...