Abstract: With respect to training precision, the quasi-Newton method is the most effective training method for feed-forward neural networks. The method is well known and widely described in the neural-network literature. Nevertheless, its implementation presents some difficulties because of the specific shape of the cost function and the large number of variables. Here we describe, in sufficient detail, an example program implementation of the quasi-Newton method. This implementation (a Borland Delphi application) appears to work well on a variety of examples.
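Since the paper's Delphi source is not reproduced here, the following is a minimal Object Pascal sketch of the core quasi-Newton (BFGS) iteration under stated assumptions: a two-variable quadratic bowl stands in for the network error function, the gradient is analytic rather than back-propagated, and the step-halving line search and all identifiers (Cost, Grad, TVec, and so on) are illustrative choices, not the authors' actual code.

```pascal
program QuasiNewtonSketch;
{$APPTYPE CONSOLE}
uses
  SysUtils;

const
  N = 2;  { two weights stand in for the full network weight vector }

type
  TVec = array[0..N - 1] of Double;
  TMat = array[0..N - 1, 0..N - 1] of Double;

{ Stand-in cost: a quadratic bowl replacing the network error function. }
function Cost(const w: TVec): Double;
begin
  Result := Sqr(w[0] - 1.0) + 10.0 * Sqr(w[1] + 2.0);
end;

{ Analytic gradient of the stand-in cost; for a network this would be
  computed by back-propagation. }
procedure Grad(const w: TVec; var g: TVec);
begin
  g[0] := 2.0 * (w[0] - 1.0);
  g[1] := 20.0 * (w[1] + 2.0);
end;

var
  w, wNew, g, gNew, d, s, y, Hy: TVec;
  H: TMat;  { approximation of the inverse Hessian }
  i, j, iter: Integer;
  alpha, rho, yHy, f, fNew: Double;
begin
  w[0] := 0.0;  w[1] := 0.0;
  for i := 0 to N - 1 do
    for j := 0 to N - 1 do
      if i = j then H[i, j] := 1.0 else H[i, j] := 0.0;
  Grad(w, g);
  f := Cost(w);

  for iter := 1 to 100 do
  begin
    { search direction d := -H * g }
    for i := 0 to N - 1 do
    begin
      d[i] := 0.0;
      for j := 0 to N - 1 do
        d[i] := d[i] - H[i, j] * g[j];
    end;

    { crude step-halving line search: shrink alpha until the cost drops }
    alpha := 1.0;
    repeat
      for i := 0 to N - 1 do
        wNew[i] := w[i] + alpha * d[i];
      fNew := Cost(wNew);
      if fNew < f then Break;
      alpha := 0.5 * alpha;
    until alpha < 1e-12;
    if fNew >= f then Break;  { no descent step found: stop }

    Grad(wNew, gNew);
    rho := 0.0;  { accumulates y's before inversion }
    for i := 0 to N - 1 do
    begin
      s[i] := wNew[i] - w[i];
      y[i] := gNew[i] - g[i];
      rho := rho + y[i] * s[i];
    end;

    if rho <= 0.0 then
    begin
      { curvature condition y's > 0 violated: reset H to the identity }
      for i := 0 to N - 1 do
        for j := 0 to N - 1 do
          if i = j then H[i, j] := 1.0 else H[i, j] := 0.0;
    end
    else
    begin
      rho := 1.0 / rho;
      yHy := 0.0;
      for i := 0 to N - 1 do
      begin
        Hy[i] := 0.0;
        for j := 0 to N - 1 do
          Hy[i] := Hy[i] + H[i, j] * y[j];
        yHy := yHy + y[i] * Hy[i];
      end;
      { BFGS update of the inverse Hessian:
        H := H - rho*(s*Hy' + Hy*s') + rho*(1 + rho*yHy)*s*s' }
      for i := 0 to N - 1 do
        for j := 0 to N - 1 do
          H[i, j] := H[i, j] - rho * (s[i] * Hy[j] + Hy[i] * s[j])
                     + rho * (1.0 + rho * yHy) * s[i] * s[j];
    end;

    w := wNew;  g := gNew;  f := fNew;
  end;

  WriteLn(Format('w = (%.6f, %.6f), cost = %.3e', [w[0], w[1], f]));
end.
```

In a real training application the vector w would hold all network weights and biases, Cost and Grad would be replaced by the network error and its back-propagated gradient, and the explicit N-by-N matrix H becomes the main memory cost for large networks, which is why limited-memory variants such as L-BFGS are often preferred at scale.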