Abstract. This paper presents a novel quasi-Newton method for the minimization of the error function of a feed-forward neural network. The method generalizes Battiti's well-known one-step secant (OSS) algorithm. The proposed approach aims at a significant improvement both in computational effort and in the ability to locate the global minimum of the error function. The technique described in this work is founded on the innovative concept of a "convex algorithm", intended to avoid entrapment in local minima. Convergence results as well as numerical experiments are presented.
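For context, the baseline that the paper generalizes can be sketched in a few lines. The following is a minimal illustration of Battiti's one-step secant update, which computes a memoryless BFGS search direction from only the previous step s and gradient change y; it assumes a generic objective f with gradient grad. The function name oss_minimize, the Armijo backtracking line search, and the toy Rosenbrock usage are illustrative assumptions, not the paper's method, and the paper's "convex algorithm" extension is not reproduced here.

```python
import numpy as np

def oss_minimize(f, grad, w0, max_iter=1000, tol=1e-6):
    """One-step secant (OSS) quasi-Newton minimization (after Battiti, 1992).

    Memoryless BFGS: the search direction is built only from the previous
    step s = w_k - w_{k-1} and gradient change y = g_k - g_{k-1}, so no
    Hessian approximation is stored (O(n) memory per iteration).
    """
    w = np.asarray(w0, dtype=float)
    g = grad(w)
    s = y = None
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if s is None or s @ y <= 1e-12:
            d = -g                      # first step / unsafe curvature: steepest descent
        else:
            rho = s @ y
            B = (s @ g) / rho
            A = (y @ g) / rho - (1.0 + (y @ y) / rho) * B
            d = -g + A * s + B * y      # memoryless BFGS (OSS) direction
        # backtracking line search with the Armijo sufficient-decrease test
        alpha, fw, slope = 1.0, f(w), g @ d
        while f(w + alpha * d) > fw + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        w_new = w + alpha * d
        g_new = grad(w_new)
        s, y = w_new - w, g_new - g
        w, g = w_new, g_new
    return w

# Toy usage: the 2-D Rosenbrock function stands in for a network error E(w).
rosen = lambda w: (1 - w[0])**2 + 100 * (w[1] - w[0]**2)**2
rosen_grad = lambda w: np.array([
    -2 * (1 - w[0]) - 400 * w[0] * (w[1] - w[0]**2),
    200 * (w[1] - w[0]**2),
])
print(oss_minimize(rosen, rosen_grad, [-1.2, 1.0]))  # should approach [1, 1]
```

Because the direction uses only two stored vectors, each iteration costs O(n) memory and a handful of inner products, which is what makes OSS-style methods attractive for networks with many weights.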