In this paper the authors describe some useful strategies for nonconvex optimisation aimed at determining the global minimum of the error function of a Multi-Layer Perceptron. The proposed approach is founded on a new concept, called "non-suspiciousness", which can be seen as a generalisation of convexity. Relations both with classical unconstrained optimisation results and with recent contributions in the field of supervised neural networks are examined. Preliminary numerical experiments show that the ideas behind the proposed algorithm are promising, although they require further investigation.
We present trust-region methods for the general unconstrained minimization problem. Trust-region alg...
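For readers unfamiliar with the idea, here is a minimal sketch of one classical trust-region iteration: a quadratic model is minimized along the steepest-descent direction (the Cauchy point), and the radius is adapted from the ratio of actual to predicted reduction. This is generic textbook material under illustrative assumptions, not the specific algorithms of the paper; the test function and all parameters are arbitrary choices.

```python
# A minimal Cauchy-point trust-region scheme for unconstrained minimization.
import numpy as np

def trust_region_cauchy(f, grad, hess, x, radius=1.0, max_radius=10.0,
                        eta=0.1, tol=1e-8, max_iter=500):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        gBg = g @ B @ g
        # Cauchy point: minimizer of the model along -g, clipped to the region.
        tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g)**3 / (radius * gBg))
        p = -(tau * radius / np.linalg.norm(g)) * g
        predicted = -(g @ p + 0.5 * p @ B @ p)   # model reduction (> 0 here)
        rho = (f(x) - f(x + p)) / predicted      # agreement of model and function
        if rho < 0.25:                           # poor model: shrink the region
            radius *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
            radius = min(2.0 * radius, max_radius)  # good model at boundary: grow
        if rho > eta:                            # accept only genuinely good steps
            x = x + p
    return x

# Illustrative assumption: an ill-conditioned quadratic test function.
f = lambda x: x[0]**2 + 10.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
hess = lambda x: np.diag([2.0, 20.0])
print(trust_region_cauchy(f, grad, hess, np.array([5.0, 1.0])))  # -> ~[0, 0]
```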
This thesis addresses the issue of applying a "globally" convergent optimization scheme to the train...
The paper presents an overview of global issues in optimization methods for Supervised Learning (SL...
The effectiveness of connectionist models in emulating intelligent behaviour is strictly related to ...
This paper presents some numerical experiments related to a new global "pseudo-backpropagation" algo...
Solving large scale optimization problems, such as neural network training, can present many challe...
For nonconvex optimization in machine learning, this a...
The Nelder--Mead simplex algorithm (J. A. Nelder and R. Mead, Computer Journal, vol 7, pages 308-- ...
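As a concrete illustration of the simplex method in practice, the following is a minimal usage sketch via SciPy's independent implementation; the test function, starting point, and tolerances are illustrative choices, not taken from the paper under discussion.

```python
# Minimizing the Rosenbrock function with SciPy's Nelder--Mead implementation.
import numpy as np
from scipy.optimize import minimize

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2  # Rosenbrock
res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
print(res.x)  # approaches the minimizer [1., 1.]
```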
In this paper the problem of neural network training is formulated as the unconstrained minimization...
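To make that formulation concrete, here is a minimal sketch, assuming a one-hidden-layer network with tanh units and a squared-error objective minimized by plain gradient descent over all weights; the data, architecture, and step size are illustrative assumptions, and this is not the training method proposed in the paper.

```python
# Training cast as unconstrained minimization of a squared-error function E(w).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # inputs
y = np.sin(X[:, :1]) + 0.1 * X[:, 1:]         # targets

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ w2 + b2
    err = out - y
    # E(w) = (1/2N) * sum of squared errors; backpropagate its gradient
    g_out = err / len(X)
    g_w2 = h.T @ g_out
    g_b2 = g_out.sum(0)
    g_h = g_out @ w2.T * (1 - h**2)
    g_W1 = X.T @ g_h
    g_b1 = g_h.sum(0)
    # gradient-descent step on the unconstrained variables
    W1 -= lr * g_W1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

print("final mean squared error:", float((err**2).mean()))
```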
This paper proposes a set of new error criteria and a learning approach, called Adaptive Normalized ...
Presented on February 11, 2019 at 11:00 a.m. as part of the ARC12 Distinguished Lecture in the Klaus...
A fast algorithm is proposed for optimal supervised learning in multiple-layer neural networks. The ...
We introduce a generic scheme to solve non-convex optimization problems using ...