We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs. We show that it is NP-complete to decide whether there exist weights and thresholds for the three nodes of this network so that it will produce output consistent with a given set of training examples. We extend the result to other simple networks. This result suggests that those looking for perfect training algorithms cannot escape inherent computational difficulties just by considering only simple or very regular networks. It also suggests the importance, given a training problem, of finding an appropriate network and input encoding for that problem. It is left as an open problem to extend our result to nodes with non-line...
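The abstract above concerns a 2-layer, 3-node network of linear threshold units and the problem of deciding whether weights and thresholds consistent with a training set exist. A minimal sketch of that architecture (all names here are illustrative, not from the paper) is below; note the hardness result is about *finding* such weights for an arbitrary training set, whereas this example simply fixes weights by hand that realize XOR:

```python
def threshold_unit(w, b, x):
    """Linear threshold function: output 1 iff sum_i w_i * x_i >= b."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= b)

def three_node_net(W1, b1, w2, b2, x):
    """2-layer, 3-node network: two hidden threshold units feed one output unit."""
    h = [threshold_unit(W1[i], b1[i], x) for i in range(2)]
    return threshold_unit(w2, b2, h)

def consistent(W1, b1, w2, b2, examples):
    """True iff the chosen weights and thresholds reproduce every labeled example."""
    return all(three_node_net(W1, b1, w2, b2, x) == y for x, y in examples)

# XOR on two inputs is realizable by this architecture as AND(OR, NAND):
W1 = [[1, 1], [-1, -1]]   # hidden units: OR and NAND of the inputs
b1 = [1, -1]              # OR fires when x1 + x2 >= 1; NAND when -(x1 + x2) >= -1
w2 = [1, 1]
b2 = 2                    # output fires only when both hidden units fire (AND)

xor_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(consistent(W1, b1, w2, b2, xor_examples))  # → True
```

The decision problem asks whether any such weight setting exists for a given example set; for n inputs, no efficient procedure is known, and the paper shows the question is NP-complete.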
It is well-known that neural networks are computationally hard to train. On the other hand, in pract...
In this paper we discuss training of three-layer neural network classifiers by solving inequalities....
We demonstrate that the problem of training neural networks with small (average) squared error is co...
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weig...
The back-propagation learning algorithm for multi-layered neural networks, which is often successful...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...
We consider the computational complexity of learning by neural nets. We are interested in how hard...
We consider the problem of learning in multilayer feed-forward networks of linear threshold units. W...
We deal with computational issues of loading a fixed-architecture neural network with a set of posit...
This paper deals with learnability of concept classes defined by neural networks, showing the hardne...
We investigate the complexity of the reachability problem for (deep) neural networks: does it compute...
Ellerbrock TM. Multilayer neural networks: learnability, network generation, and network simplifica...
Abstract: We consider the problem of efficiently learning in two-layer neural networks. We investigate...