Generative autoencoders are designed to model a target distribution with the aim of generating samples, and it has also been shown that specific non-generative autoencoders (i.e. contractive and denoising autoencoders) can be turned into generative models using reinjections (i.e. iterative sampling). In this work, we provide mathematical evidence that any autoencoder reproducing the input data with a loss of information can sample from the training distribution using reinjections. More precisely, we prove that the property of modeling a given distribution and sampling from it applies not only to contractive and denoising autoencoders but to all lossy autoencoders. In accordance with previous results, we emphasize that the reinjection s...
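The reinjection procedure described above can be sketched as a simple Markov chain: repeatedly corrupt a point, pass it through the (lossy) autoencoder, and feed the reconstruction back in. The sketch below is a minimal illustration, not the paper's method: it assumes a hypothetical linear (PCA-style) autoencoder whose one-dimensional latent code makes the reconstruction lossy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lossy autoencoder: a PCA-style linear encoder/decoder that
# projects 2-D training data onto its top principal direction, so the
# reconstruction discards information (the autoencoder is lossy).
data = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.5], [0.5, 1.0]])
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
w = vt[:1]  # 1-D latent code

def encode(x):
    return (x - mean) @ w.T

def decode(z):
    return z @ w + mean

def reinject(x0, steps=20, noise=0.1):
    """Iterative sampling: corrupt the current point, reconstruct it,
    and reinject the reconstruction as the next input."""
    x = x0
    for _ in range(steps):
        x = decode(encode(x + noise * rng.normal(size=x.shape)))
    return x

# Starting from arbitrary points, the chain settles onto the model's
# learned (here: one-dimensional) data manifold.
samples = reinject(rng.normal(size=(100, 2)))
```

The chain's stationary distribution lives on the autoencoder's reconstruction manifold; with a trained nonlinear autoencoder the same loop would be used in place of the linear `encode`/`decode` pair.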
We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distrib...
In the past few years Generative models have become an interesting topic in the field of Machine Lea...
Denoising autoencoders (DAE) are trained to reconstruct their clean inputs with noise injected at th...
We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly ...
The contractive auto-encoder learns a representation of the input data that captures the local mani...
Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts o...
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests ...
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular ...
We introduce a novel training principle for generative probabilistic models that is an alternative ...
Recent work by Bengio et al. (2013) proposes a sampling procedure for denoising autoencoders which i...
Generative deep neural networks used in machine learning, like the Variational Auto-Encoders (VAE), ...
A key advance in learning generative models is the use of amortized inference distributions that are...
Deep generative models of text have shown great success on a wide range of conditional and unconditi...
Denoising autoencoders (DAEs) are powerful deep learning models used for feature extraction, data ge...