We introduce the notion of suspect families of loading problems in an attempt to formalize situations in which classical learning algorithms based on local optimization are likely to fail (because of local minima or numerical precision problems). We show that any loading problem belonging to a non-suspect family can be solved with optimal complexity by a canonical form of gradient descent with forced dynamics (i.e., for this class of problems no algorithm exhibits better computational complexity than a slightly modified form of backpropagation). The analyses of this paper suggest intriguing links between the shape of the error surface attached to parametrical learning systems (like neural networks) and the computational complexity of th...
This paper deals with the computational aspects of neural networks. Specifically, it is suggested th...
We deal with the problem of efficient learning of feedforward neural networks. First, we con...
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weig...
This paper deals with optimal learning and provides a unified viewpoint of most significant results ...
In this paper we study the time complexity of learning in terms of continuous-parametric representat...
The effectiveness of connectionist models in emulating intelligent behaviour is strictly related to ...
This paper presents some numerical experiments related to a new global "pseudo-backpropagation" algo...
DasGupta B, Hammer B. Hardness of approximation of the loading problem for multi-layered feedforward...
The effectiveness of connectionist models in emulating intelligent behaviour and solving significant...
In artificial neural networks, learning from data is a computationally demanding task in which a lar...
We deal with computational issues of loading a fixed-architecture neural network with a set of posit...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...