The Levenberg-Marquardt (LM) learning algorithm is a popular algorithm for training neural networks; however, for large neural networks, it becomes prohibitively expensive in terms of running time and memory requirements. The most time-critical step of the algorithm is the calculation of the Gauss-Newton matrix, which is formed by multiplying two large Jacobian matrices together. We propose a method that uses backpropagation to reduce the time of this matrix-matrix multiplication. This reduces the overall asymptotic running time of the LM algorithm by a factor of the order of the number of output nodes in the neural network.
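To make the bottleneck concrete, the following is a minimal NumPy sketch of the conventional computation the abstract refers to: for each training pattern, the Jacobian J of the network outputs with respect to all weights is obtained with one backward pass per output node, and the Gauss-Newton matrix is accumulated as G = J^T J, a step costing O(m n^2) per pattern for m outputs and n weights. All names here (forward, jacobian_via_backprop, W1, W2) are illustrative assumptions, and the paper's own backpropagation-based acceleration of the matrix-matrix product is not reproduced; this only shows the baseline it targets.

```python
import numpy as np

def forward(x, W1, W2):
    """Forward pass of a single-hidden-layer tanh network (illustrative)."""
    h = np.tanh(W1 @ x)   # hidden activations
    y = W2 @ h            # linear output layer
    return h, y

def jacobian_via_backprop(x, W1, W2):
    """Jacobian of the m outputs w.r.t. all weights: one reverse-mode
    (backpropagation) pass per output node, the standard construction."""
    h, y = forward(x, W1, W2)
    m = y.shape[0]
    n = W1.size + W2.size
    J = np.zeros((m, n))
    for k in range(m):
        # Seed the backward pass with the unit vector e_k at the output.
        dh = W2[k]                               # dy_k / dh
        dW2 = np.zeros_like(W2)
        dW2[k] = h                               # dy_k / dW2 = e_k h^T
        dW1 = np.outer(dh * (1.0 - h**2), x)     # chain rule through tanh
        J[k] = np.concatenate([dW1.ravel(), dW2.ravel()])
    return J

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))   # hidden x input weights
W2 = rng.standard_normal((2, 5))   # output x hidden weights
X = rng.standard_normal((10, 3))   # 10 training patterns

# Accumulate the Gauss-Newton matrix G = sum_p J_p^T J_p over all patterns.
n = W1.size + W2.size
G = np.zeros((n, n))
for x in X:
    J = jacobian_via_backprop(x, W1, W2)
    G += J.T @ J   # the O(m n^2) matrix-matrix product the paper accelerates
print(G.shape)     # (25, 25): one row/column per weight
```

Since the per-pattern cost of the product J^T J scales linearly with the number of output nodes m, eliminating that factor (as the abstract claims) is exactly an m-fold asymptotic speedup of this step.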