We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. Because the transition distribution is a conditional distribution generally involving a small move, it has fewer dominant modes, being unimodal in the limit of small moves. Thus, it is easier to learn, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. The theorems provided here generalize recent work on the probabilistic interpretation of denoising autoencoders and provide an interes...
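The abstract above describes a concrete training recipe: corrupt a sample, learn to reconstruct it with ordinary backprop, then sample by iterating the corrupt-and-reconstruct step as a Markov chain. The following is a minimal, hypothetical sketch of that idea, not the paper's implementation; it assumes PyTorch, Gaussian corruption, and a one-hidden-layer denoising network, and all names (TransitionOperator, corrupt, train_step, sample) are illustrative.

import torch
import torch.nn as nn

class TransitionOperator(nn.Module):
    """Learns the reconstruction half of the chain's transition operator.

    Because each corruption is a small move, the conditional distribution
    being modeled has few dominant modes, so a simple denoiser suffices.
    """
    def __init__(self, dim=784, hidden=256, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim), nn.Sigmoid(),
        )

    def corrupt(self, x):
        # A small stochastic move (here: additive Gaussian noise; an assumption).
        return x + self.noise_std * torch.randn_like(x)

    def forward(self, x):
        # One chain step: corrupt, then reconstruct.
        return self.net(self.corrupt(x))

def train_step(model, x, opt):
    # Supervised-style objective: reconstruct clean x from its corruption,
    # so gradients come directly from backprop (no partition function).
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def sample(model, steps=100, dim=784):
    # Iterate the learned transition operator; under the paper's conditions
    # the chain's stationary distribution estimates the data distribution.
    x = torch.rand(1, dim)
    for _ in range(steps):
        x = model(x)
    return x

The key design point the sketch illustrates is that training reduces to function approximation on (corrupted input, clean target) pairs, while generation is a separate phase that simply runs the learned chain.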
This paper proposes a backpropagation-based feedforward neural network for learning probability dist...
Here, we introduce a new family of probabilistic models called Rectified Gaussian Nets, or RGNs. RGN...
Backpropagating gradients through random variables is at the heart of numerous...
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (G...
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised...
Generative Stochastic Networks (GSNs) have been recently introduced as an alternative to traditiona...
Despite remarkable progress in deep learning, its hardware implementation beyond deep learning ac...
Understanding the impact of data structure on the computational tractability of learning is a key ch...
Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-ar...