This paper presents numerical experiments with a new global "pseudo-backpropagation" algorithm for the optimal learning of feedforward neural networks. The proposed method is founded on a new concept, called "non-suspiciousness", which can be seen as a generalisation of convexity. The algorithm follows several adaptive strategies in order to avoid entrapment in local minima; in many cases the global minimum of the error function can be successfully computed. The paper also provides a useful comparison between the proposed method and a well-known deterministic global optimisation algorithm from the literature.
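The abstract does not spell out the "pseudo-backpropagation" procedure or the "non-suspiciousness" criterion, so no attempt is made to reproduce them here. The sketch below only illustrates, under stated assumptions, the generic idea the abstract alludes to: combining local gradient descent with an adaptive restart strategy so that training is not trapped in the first local minimum it reaches. The objective `f`, its gradient, and the function names are all hypothetical stand-ins for a network's error function, not the paper's method.

```python
# Minimal sketch (not the paper's algorithm): plain gradient descent versus a
# multi-start loop that keeps the best minimiser found over random restarts.

import random

def f(x):
    # A simple non-convex stand-in for a training error: it has a shallow local
    # minimum near x ~ 0.95 and a deeper (global) minimum near x ~ -1.05.
    return x**4 - 2*x**2 + 0.5*x

def grad_f(x, h=1e-6):
    # Numerical gradient via central differences.
    return (f(x + h) - f(x - h)) / (2 * h)

def gradient_descent(x0, lr=0.01, steps=2000):
    # Plain local descent: converges to whichever minimum basin x0 lies in.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x, f(x)

def multi_start_descent(n_starts=20, spread=3.0, seed=0):
    # Simple adaptive element: random restarts drawn from a fixed interval,
    # retaining the best (lowest-error) solution seen so far.
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_starts):
        x, val = gradient_descent(rng.uniform(-spread, spread))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    # A single run started at x0 = 2.0 stops in the shallow local minimum,
    # while the restart loop typically reaches the global one.
    print("single start:", gradient_descent(2.0))
    print("multi start :", multi_start_descent())
```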