We provide the first experimental results on non-synthetic datasets for the quasi-diagonal Riemannian gradient descents for neural networks introduced in [Ollivier, 2015]. These include the MNIST, SVHN, and FACE datasets as well as a previously unpublished electroencephalogram dataset. The quasi-diagonal Riemannian algorithms consistently beat simple stochastic gradient descent by a varying margin. The computational overhead with respect to simple backpropagation is around a factor of $2$. Perhaps more interestingly, these methods also reach their final performance quickly, thus requiring fewer training epochs and a smaller total computation time. We also present an implementation guide to these Riemannian gradient descents for neural...
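As a rough illustration of what "quasi-diagonal" means here (a minimal sketch following the update rule described in [Ollivier, 2015], not the authors' released implementation): for each neuron, only the diagonal of the metric and its couplings with the bias are retained, so each incoming weight is updated by inverting a small $2 \times 2$ block that pairs it with the bias. All names below (`w`, `g`, `A00`, `A0`, `Ad`) are illustrative.

```python
import numpy as np

def quasi_diagonal_step(w, g, A00, A0, Ad, lr=1e-2, eps=1e-8):
    """One quasi-diagonal Riemannian update for a single neuron.

    w  : (n+1,) parameters; w[0] is the bias, w[1:] the incoming weights.
    g  : (n+1,) loss gradient with respect to w.
    A00: scalar metric entry for the bias.
    A0 : (n,) metric entries coupling the bias to each weight.
    Ad : (n,) diagonal metric entries of the weights.
    """
    # Each weight i >= 1 is paired with the bias in a 2x2 metric block
    # [[A00, A0[i]], [A0[i], Ad[i]]]; inverting these blocks gives:
    denom = A00 * Ad - A0 ** 2 + eps
    dw = (A00 * g[1:] - A0 * g[0]) / denom
    # The bias step compensates for the weight moves via the cross terms:
    db = (g[0] - np.dot(A0, dw)) / (A00 + eps)
    out = w.copy()
    out[0] -= lr * db
    out[1:] -= lr * dw
    return out
```

Because only $2n + 1$ metric entries per neuron are stored and inverted, the per-step cost stays within a small constant factor of plain backpropagation, consistent with the factor-of-$2$ overhead quoted above.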
In this dissertation, we are concerned with the advancement of optimization algorithms for training ...
Neural SDEs combine many of the best qualities of both RNNs and SDEs: memory efficient training, hig...
Recently, several studies have proven the global convergence and generalization abilities of the gra...
The parameter space of neural networks has a Riemannian metric structure. The natural Riemannian g...
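For context (a standard formula, not recovered from the truncated abstract): the natural gradient preconditions the ordinary gradient by the inverse of the Riemannian metric, typically the Fisher information matrix $F$:

$$ \theta_{t+1} = \theta_t - \eta \, F(\theta_t)^{-1} \nabla_\theta L(\theta_t), \qquad F(\theta) = \mathbb{E}_{y \sim p_\theta(\cdot \mid x)}\!\left[ \nabla_\theta \log p_\theta(y \mid x) \, \nabla_\theta \log p_\theta(y \mid x)^{\top} \right]. $$

The quasi-diagonal methods above replace the full $F^{-1}$ with the cheap per-neuron approximation sketched earlier.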
We describe four algorithms for neural network training, each adapted to diffe...
Recurrent neural networks are powerful models for sequential data, able to repr...
Stochastic-gradient sampling methods are often used to perform Bayesian inference on neural networks...
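One widely used method of this kind, given here as a standard example since the truncated abstract does not name one, is stochastic gradient Langevin dynamics (SGLD), which turns SGD into an approximate posterior sampler by injecting Gaussian noise scaled to the step size:

$$ \theta_{t+1} = \theta_t - \frac{\epsilon_t}{2} \, \widetilde{\nabla}_\theta U(\theta_t) + \xi_t, \qquad \xi_t \sim \mathcal{N}(0, \epsilon_t I), $$

where $U$ is the negative log-posterior and $\widetilde{\nabla}$ its minibatch estimate.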
Symmetric Positive Definite (SPD) matrix learning methods have become popular in many image and vide...
Natural gradient descent (NGD) is an on-line algorithm for redefining the steepest descent direction...
The aim of this contribution is to present a tutorial on learning algorithms for a single neural lay...
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2...
Bayesian inference tells us how we can incorporate information from the data into the parameters. In...
This paper proposes the Mesh Neural Network (MNN), a novel architecture which allows neurons to be c...
Synaptic plasticity is the primary physiological mechanism underlying learning in the brain. It is d...