There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples. We introduce a simple modification for autoencoder neural networks that yields powerful generative models. Our method masks the autoencoder's parameters to respect autoregressive constraints: each input is reconstructed only from previous inputs in a given ordering. Constrained this way, the autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability. We can also train a single network that can decompose the joint probability in multiple different orderings. Our simple framework can be applied to multiple architectures, including deep ones. Vectorized...
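To make the masking idea concrete, here is a minimal sketch, not the authors' reference implementation: for a one-hidden-layer autoencoder, each hidden unit is assigned a maximum input degree, and binary masks built from those degrees ensure that output d is connected, through the masked weights, only to inputs that precede it in the ordering. The function name made_masks and the shapes used are illustrative assumptions.

```python
import numpy as np

def made_masks(D, H, rng=None):
    """Build autoregressive masks for a one-hidden-layer autoencoder.

    D: number of inputs/outputs, H: number of hidden units.
    Returns (input_mask, output_mask) such that output d depends
    only on inputs 1..d-1 under the natural ordering.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Each hidden unit k gets a degree m_k in {1, ..., D-1}:
    # the highest input index it is allowed to see.
    m = rng.integers(1, D, size=H)
    d_in = np.arange(1, D + 1)  # input/output indices 1..D
    # Hidden unit k may connect to input i iff i <= m_k.
    input_mask = (d_in[None, :] <= m[:, None]).astype(float)   # shape (H, D)
    # Output d may connect to hidden unit k iff m_k < d,
    # so output d never sees input d or any later input.
    output_mask = (m[None, :] < d_in[:, None]).astype(float)   # shape (D, H)
    return input_mask, output_mask

if __name__ == "__main__":
    M_w, M_v = made_masks(D=4, H=8)
    # In the forward pass the masks multiply the weights elementwise:
    # h = sigmoid((W * M_w) @ x + b);  x_hat = sigmoid((V * M_v) @ h + c)
    # Composite connectivity: (M_v @ M_w)[d, i] > 0 iff output d can see input i.
    conn = (M_v @ M_w) > 0
    print(conn.astype(int))  # zero on and above the diagonal: output d sees only inputs < d
```

Resampling the ordering and the degree vector m during training is one way to realize the multiple-orderings decomposition mentioned in the abstract.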
The contractive auto-encoder learns a representation of the input data that captures the local mani...
The neural autoregressive distribution estimator (NADE) is a competitive model for the task of densit...
We introduce a novel training principle for probabilistic models that is an alternative to maximum l...
We describe a new approach for modeling the distribution of high-dimensional vectors of discrete va...
We introduce a deep, generative autoencoder capable of learning hierarchies of distributed represe...
We introduce a novel training principle for generative probabilistic models that is an alternative ...
We introduce a novel generative autoencoder network model that learns to encode and reconstruct imag...
The Neural Autoregressive Distribution Estimator (NADE) and its real-valued version RNADE are compet...
Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding m...
The computation of the distance to the true distribution is a key component of most state-of-the-art...
For stochastic models with intractable likelihood functions, approximate Bayesian computation offers...
We introduce a novel training principle for probabilistic models that is an alternative to maximum...
We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly ...
Employing a forward diffusion chain to gradually map the data to a noise distribution, diffusion-bas...