We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally involving a small move, so this conditional distribution has fewer dominant modes, being unimodal in the limit of small moves. Thus, it is easier to learn because it is easier to approximate its partition function, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. We provide theorems that...
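To make the training principle concrete, below is a minimal sketch of a GSN-flavoured model: a small network is trained with an ordinary supervised reconstruction loss to undo a local corruption, and samples are then drawn by iterating the learned corrupt-then-reconstruct Markov transition. The Gaussian corruption, layer sizes, and toy data are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal GSN-style sketch (assumptions: Gaussian corruption, tiny MLP, toy data).
import torch
import torch.nn as nn

dim, sigma = 2, 0.5  # data dimension and corruption noise level (assumed)

# Transition network: maps a corrupted state back toward the data manifold.
net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Toy anisotropic Gaussian data standing in for the true data distribution.
data = torch.randn(1024, dim) @ torch.tensor([[2.0, 0.0], [0.0, 0.2]])

for step in range(2000):
    x = data[torch.randint(0, len(data), (128,))]
    x_tilde = x + sigma * torch.randn_like(x)     # C(x_tilde | x): a small, local move
    loss = ((net(x_tilde) - x) ** 2).mean()       # supervised reconstruction objective
    opt.zero_grad()
    loss.backward()                               # gradients by ordinary backprop
    opt.step()

# Sampling: run the learned Markov chain; its stationary distribution
# should approximate the data distribution.
x = torch.zeros(1, dim)
with torch.no_grad():
    for _ in range(100):
        x = net(x + sigma * torch.randn_like(x))  # corrupt, then reconstruct
print(x)
```

Because each transition only has to model a small move, the conditional being fit is close to unimodal, which is why a plain squared-error regression suffices here where fitting the full data distribution directly would not.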
Understanding the impact of data structure on the computational tractability of learning is a key ch...
Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-ar...
This paper proposes a backpropagation-based feedforward neural network for learning probability dist...
We introduce a novel training principle for probabilistic models that is an alternative to maximum l...
We introduce a novel training principle for generative probabilistic models that is an alternative ...
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (G...
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised ...
Generative Stochastic Networks (GSNs) have been recently introduced as an alternative to traditiona...
Despite remarkable progress in deep learning, its hardware implementation beyond deep learning ac...
Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potenti...
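One common way to train such stochastic binary units is the straight-through estimator: sample hard {0,1} activations on the forward pass, and on the backward pass treat the sampling step as if it were the sigmoid itself. The sketch below is an illustrative assumption of that setup, not necessarily the estimator used in the paper above; all layer sizes are made up.

```python
# Straight-through estimator for stochastic binary hidden units (illustrative).
import torch
import torch.nn as nn

class StochasticBinary(torch.autograd.Function):
    @staticmethod
    def forward(ctx, logits):
        p = torch.sigmoid(logits)
        ctx.save_for_backward(p)
        return torch.bernoulli(p)          # hard {0,1} sample on the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (p,) = ctx.saved_tensors
        return grad_out * p * (1 - p)      # backprop through the sigmoid instead

class BinaryMLP(nn.Module):
    def __init__(self, d_in=784, d_hid=256, d_out=10):  # sizes are assumptions
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hid)
        self.fc2 = nn.Linear(d_hid, d_out)

    def forward(self, x):
        h = StochasticBinary.apply(self.fc1(x))  # stochastic binary hidden layer
        return self.fc2(h)

model = BinaryMLP()
x = torch.randn(32, 784)
loss = model(x).pow(2).mean()              # placeholder objective
loss.backward()                            # gradients flow despite the sampling
```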
Backpropagating gradients through random variables is at the heart of numerous...
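The standard trick for the continuous case is reparameterization: express the random variable as a deterministic, differentiable function of its parameters and an external noise source, so gradients reach the parameters directly. A minimal sketch for a Gaussian, with an arbitrary placeholder objective:

```python
# Reparameterization trick: z ~ N(mu, sigma^2) rewritten as z = mu + sigma * eps,
# eps ~ N(0, 1), so that d(loss)/d(mu) and d(loss)/d(sigma) are well defined.
import torch

mu = torch.zeros(10, requires_grad=True)
log_sigma = torch.zeros(10, requires_grad=True)

eps = torch.randn(10)                      # noise sampled outside the graph
z = mu + log_sigma.exp() * eps             # differentiable in mu and log_sigma
loss = ((z - 3.0) ** 2).mean()             # placeholder downstream objective
loss.backward()

print(mu.grad, log_sigma.grad)             # low-variance pathwise gradients
```

Parameterizing the scale as exp(log_sigma) keeps the standard deviation positive without constrained optimization; the score-function (REINFORCE) estimator is the usual alternative when no such reparameterization exists, e.g. for discrete variables.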