This paper presents a novel quasi-Newton method for the minimization of the error function of a feed-forward neural network. The method is a generalization of Battiti's well-known OSS algorithm. The aim of the proposed approach is to achieve a significant improvement both in computational effort and in the capability of locating the global minimum of the error function. The technique described in this work is founded on the innovative concept of a "convex algorithm", designed to avoid possible entrapment in local minima. Convergence results as well as numerical experiments are presented.
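For context on the OSS (one-step secant) algorithm that this abstract generalizes, the sketch below shows the standard memoryless-BFGS direction OSS is built on, d = -g + A·s + B·y, applied to a toy least-squares error function. This is a minimal illustration, not code from the paper: the helper names (oss_direction, backtracking), the toy network, and the driver loop are assumptions invented for demonstration; only the direction formula is the standard OSS update.

```python
import numpy as np

def oss_direction(g, s=None, y=None):
    """Battiti's one-step secant (memoryless BFGS) direction d = -g + A*s + B*y."""
    if s is None or y is None:
        return -g                              # first iteration: steepest descent
    sy = float(s @ y)
    if abs(sy) < 1e-12:                        # degenerate curvature: fall back
        return -g
    B = (s @ g) / sy
    A = -(1.0 + (y @ y) / sy) * B + (y @ g) / sy
    return -g + A * s + B * y

def backtracking(f, w, d, g, alpha=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking line search along direction d."""
    f0, slope = f(w), c * float(g @ d)
    while alpha > 1e-12 and f(w + alpha * d) > f0 + alpha * slope:
        alpha *= beta
    return alpha

# Toy error function: least-squares fit of a single tanh unit with a bias.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 50)
T = np.tanh(2.0 * X) + 0.1                     # targets generated from w* = (2, 0.1)

def error(w):
    return 0.5 * np.sum((np.tanh(w[0] * X) + w[1] - T) ** 2)

def grad(w):
    r = np.tanh(w[0] * X) + w[1] - T           # residuals
    return np.array([np.sum(r * (1.0 - np.tanh(w[0] * X) ** 2) * X),
                     np.sum(r)])

w = np.array([0.5, 0.0])
g, s, y = grad(w), None, None
for k in range(200):
    d = oss_direction(g, s, y)
    if float(g @ d) >= 0.0:                    # safeguard: force a descent direction
        d = -g
    a = backtracking(error, w, d, g)
    w_new = w + a * d
    g_new = grad(w_new)
    s, y = w_new - w, g_new - g                # secant pair for the next iteration
    w, g = w_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
print(f"stopped after {k + 1} iterations: w = {w}, E(w) = {error(w):.2e}")
```

The appeal of this construction for network training is that the direction is built from a handful of inner products of n-vectors, so each iteration needs only O(n) time and storage, in contrast to the O(n²) cost of maintaining a full quasi-Newton approximation of the Hessian.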
The most widely used algorithm for training multilayer feedforward networks, Error BackPropagation ...
In this paper the problem of neural network training is formulated as the unconstrained minimization...
Incorporating curvature information in stochastic methods has been a challenging task. This paper pr...
This paper presents a novel Quasi-Newton method for the minimization of the error function...
In this paper, we present a new class of quasi-Newton methods for the effective learning in large mu...
The quasi-Newton training method is the most effective method for feed-forward neural netw...
Interest in algorithms which dynamically construct neural networks has been growing in recent years....
This paper presents some numerical experiments related to a new global "pseudo-backpropagation" algo...
Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-for...
In this work the authors implement in a Multi-Layer Perceptron (MLP) environment a new class of quas...
Neural Network Learning algorithms based on Conjugate Gradient Techniques and Quasi-Newton Technique...
We analyse the dynamics of a number of second order on-line learning algorithms training multi-layer...
30th AAAI Conference on Artificial Intelligence, AAAI 2016, Phoenix, US, 12-17 February 2016. The rest...
The restricted Boltzmann machine (RBM) has been used as a building block for many successful deep lea...