Backpropagating gradients through random variables is at the heart of numerous machine learning applications. In this paper, we present a general framework for deriving stochastic backpropagation rules for any distribution, discrete or continuous. Our approach exploits the link between the characteristic function and the Fourier transform to transport the derivatives from the parameters of the distribution to the random variable. Our method generalizes previously known estimators and results in new estimators for the gamma, beta, Dirichlet and Laplace distributions. Furthermore, we show that the classical deterministic backpropagation rule and the discrete random variable case can also be interpreted through stocha...
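As background for the estimator family described above, here is a minimal sketch of the standard Gaussian reparameterization (pathwise) gradient, the classic special case that such frameworks generalize. The toy objective `f`, the helper `pathwise_grad`, and the use of NumPy are illustrative assumptions, not the paper's Fourier-based method.

```python
import numpy as np

def f(x):
    return x ** 2            # toy objective; E[f(x)] = mu^2 + sigma^2

def df_dx(x):
    return 2.0 * x           # derivative of the toy objective

def pathwise_grad(mu, sigma, n_samples=100_000, seed=0):
    # Reparameterize: x = mu + sigma * eps with eps ~ N(0, 1), so the
    # randomness no longer depends on mu and the derivative is
    # transported onto the deterministic path (dx/dmu = 1).
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_samples)
    x = mu + sigma * eps
    return df_dx(x).mean()   # Monte Carlo estimate of dE[f(x)]/dmu

print(pathwise_grad(1.5, 0.7))   # ~ 2 * mu = 3.0
```

Here the gradient with respect to the distribution's parameter becomes an ordinary expectation of a pathwise derivative, which is exactly the kind of rule the framework above derives for distributions where no simple reparameterization exists.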
Stochastic approximation (SA) is a classical algorithm that has, since its early days, had a huge impa...
This paper proposes a backpropagation-based feedforward neural network for learning probability dist...
We propose a new unbiased stochastic gradient estimator for a family of stochastic models with unifo...
We introduce a novel training principle for probabilistic models that is an alternative to maximum...
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised...
Digital backpropagation gained popularity due to its ability to combat deterministic nonlinear effec...
We study the optimization of a continuous function by its stochastic relaxatio...
Despite remarkable progress in deep learning, its hardware implementation beyond deep learning ac...
We propose a new way of deriving policy gradient updates for reinforcement learning. Our technique, ...
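For context on the policy-gradient snippet above, the following is a hedged sketch of the generic score-function (REINFORCE) estimator, the standard baseline such derivations start from; the sigmoid-Bernoulli policy and the toy reward are hypothetical choices for illustration, not the technique the snippet proposes.

```python
import numpy as np

def reinforce_grad(theta, n_samples=100_000, seed=0):
    # Score-function identity:
    # grad_theta E_{a ~ pi_theta}[R(a)] = E[R(a) * grad_theta log pi_theta(a)]
    rng = np.random.default_rng(seed)
    p = 1.0 / (1.0 + np.exp(-theta))       # P(a = 1) under a sigmoid-Bernoulli policy
    a = (rng.random(n_samples) < p).astype(float)
    reward = a                             # toy reward: action 1 pays 1, action 0 pays 0
    score = a - p                          # d log pi(a) / d theta for this policy
    return (reward * score).mean()         # Monte Carlo gradient estimate

print(reinforce_grad(0.0))  # ~ p * (1 - p) = 0.25 at theta = 0
```

Unlike the pathwise estimator sketched earlier, this estimator needs no differentiable sampling path, which is why it applies to discrete actions; its higher variance is what most new derivations aim to reduce.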