We investigate a new approach to computing the gradients of artificial neural networks (ANNs), based on the so-called push-out likelihood ratio method. Unlike the widely used backpropagation (BP) method, which requires continuity of the loss function and the activation function, our approach bypasses this requirement by injecting artificial noise into the signals passed along the neurons. We show that this approach has computational complexity comparable to that of BP, and that it moreover removes the backward recursion and yields transparent formulas. We also formalize the connection between BP, a pivotal technique for training ANNs, and infinitesimal perturbation analysis, a classic pathwise derivative estimation approach...
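To make the noise-injection idea concrete, here is a minimal sketch of a generic score-function (likelihood-ratio) gradient estimator for a single stochastic neuron with a non-differentiable activation. It is not the paper's push-out construction; the Gaussian noise model, the quadratic loss, and names such as lr_gradient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def heaviside(z):
    """Non-differentiable activation: plain backprop gives zero gradient here."""
    return (z > 0.0).astype(float)

def lr_gradient(w, x, target, sigma=0.5, n_samples=10_000):
    """Estimate d/dw E[loss] via the score function of the injected Gaussian noise."""
    mean = x @ w                                        # deterministic pre-activation
    z = mean + sigma * rng.standard_normal(n_samples)   # noisy pre-activation samples
    loss = (heaviside(z) - target) ** 2                 # per-sample loss values
    score = (z - mean) / sigma**2                       # d/d(mean) log N(z; mean, sigma^2)
    # Chain rule through the mean only: d(mean)/dw = x, so no derivative of the
    # activation or the loss is ever taken.
    return np.mean(loss * score) * x

w = np.array([0.3, -0.2])
x = np.array([1.0, 2.0])
print(lr_gradient(w, x, target=1.0))  # unbiased estimate of the expected-loss gradient
```

The estimator is unbiased for the gradient of the expected loss under the noise distribution, at the cost of Monte Carlo variance, which is the usual trade-off of likelihood-ratio methods against pathwise (BP-style) derivatives.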
Spiking neural networks (SNNs) are broadly deployed in neuromorphic devices to emulate brain function. ...
Backpropagating gradients through random variables is at the heart of numerous...
In this thesis, we theoretically analyze the ability of neural networks trained by gradient descent ...
While backpropagation (BP) is the mainstream approach for gradient computation in neural network tra...
Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer per...
Since the discovery of the back-propagation method, many modified and new algorithms have been propo...
In this paper we explore different strategies to guide the backpropagation algorithm used for training a...
Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potenti...
We derive global H∞ optimal training algorithms for neural networks. These algorithms guarantee t...
Learning in biological and artificial neural networks is often framed as a problem in which targeted...
Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron net...
Deep neural networks are robust against random corruptions of the inputs to so...
Machine learning, and in particular neural network models, has revolutionized fields such as image,...
This paper presents some simple techniques to improve the backpropagation algorithm. Since learning ...