Natural gradient learning is an efficient and principled method for improving online learning. In practical applications, however, estimating and inverting the Fisher information matrix incurs a substantial additional cost. We propose to use the matrix momentum algorithm to carry out the inversion efficiently, and study the efficacy of a single-step estimation of the Fisher information matrix. We analyse the proposed algorithms for a two-layer neural network, using a statistical mechanics framework that allows the learning dynamics to be described analytically, and compare their performance with true natural gradient learning and standard gradient descent.
Momentum based learning algorithms are one of the most successful learning algorithms in both convex...
This paper applies natural gradient (NG) learning neural networks (NNs) for modeling and identificat...
The natural gradient descent method is applied to train an n-m-1 multilayer perceptron. Based on an...
Natural gradient learning is an efficient and principled method for improving online learning. In pr...
Natural gradient learning is an efficient and principled method for improving on-line learning. In p...
We analyse the dynamics of a number of second order on-line learning algorithms training multi-layer...
We analyse the matrix momentum algorithm, which provides an efficient approximation to on-line Newto...
We analyse natural gradient learning in a two-layer feed-forward neural network using a statistical ...
We analyse the matrix momentum algorithm, which provides an efficient approximation to on-line Newt...
When a parameter space has a certain underlying structure, the ordinary gradient of a function does ...
Natural gradient descent (NGD) is an on-line algorithm for redefining the steepest descent direction...
The momentum parameter is common within numerous optimization and local search algorithms, particula...
Recently, the popularity of deep artificial neural networks has increased considerably. Generally, t...
Second-order optimization methods, such as natural gradient, are difficult to apply to high-dimensi...
In this work, a gradient method with momentum for BP neural networks is considered. The momentum coe...