We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. We define generative autoencoders as autoencoders which are trained to softly enforce a prior on the latent distribution learned by the model. However, the model does not necessarily learn to match the prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively encoding and decoding, which allows us to sample from the learned latent distribution. Using this we can improve the quality of samples drawn from the model, especially when the learned distribution is far from the prior. Using MCMC sampling, we also reveal previously unseen differences between...
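The sampling process described above (a Markov chain formed by iteratively decoding and re-encoding) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear `encode`/`decode` pair and all parameter names are hypothetical stand-ins for a trained generative autoencoder.

```python
import numpy as np

# Hypothetical stand-ins for a trained encoder/decoder pair; a real
# generative autoencoder would supply learned networks here.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1  # shared toy weight matrix

def encode(x):
    # Stochastic encoder: a mean term plus Gaussian noise,
    # in the spirit of a VAE's approximate posterior q(z|x).
    return x @ W + 0.01 * rng.normal(size=x.shape)

def decode(z):
    # Deterministic decoder mapping latents back to data space.
    return z @ W.T

def mcmc_sample(z0, steps=50):
    """Markov chain over latents: decode, then re-encode, repeatedly.

    Each transition z_{t+1} = encode(decode(z_t)) moves the chain from
    the prior toward the latent distribution the model actually learned,
    which matters when that learned distribution is far from the prior.
    """
    z = z0
    for _ in range(steps):
        z = encode(decode(z))
    return z

z_prior = rng.normal(size=(1, 4))   # initial draw from the N(0, I) prior
z_learned = mcmc_sample(z_prior)    # latent after running the chain
x_sample = decode(z_learned)        # final decoded sample
```

In practice one would initialise the chain from the prior exactly as above, and the quality gain comes from the chain's stationary distribution matching the learned latent distribution rather than the prior.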
We introduce a novel training principle for probabilistic models that is an alternative to maximum ...
We introduce a novel training principle for probabilistic models that is an alternative to maximum...
In this thesis, we investigate various approaches for generative modeling, with a special emphasis o...
Generative autoencoders are designed to model a target distribution with the aim of generating sampl...
Recent work by Bengio et al. (2013) proposes a sampling procedure for denoising autoencoders which i...
Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts o...
We introduce a novel training principle for generative probabilistic models that is an alternative ...
Deep generative models allow us to learn hidden representations of data and generate new examples. T...
A key advance in learning generative models is the use of amortized inference distributions that are...
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular ...
We introduce a novel training principle for probabilistic models that is an alternative to maximum l...
We consider the problem of learning deep generative models from data. We formulate a method that ge...
We introduce a deep, generative autoencoder capable of learning hierarchies of distributed represe...
Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximiz...
Despite the recent progress in machine learning and deep learning, unsupervised learning still remai...