Abstract: We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs. We show that it is NP-complete to decide whether there exist weights and thresholds for this network so that it produces output consistent with a given set of training examples. We extend the result to other simple networks. We also present a network for which training is hard but where switching to a more powerful representation makes training easier. These results suggest that those looking for perfect training algorithms cannot escape inherent computational difficulties just by considering only simple or very regular networks. They also suggest the importance, given a training problem, of finding an appropriate network...
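To make the setting concrete, here is a minimal Python sketch (not from the paper) of the architecture the abstract describes: two hidden linear threshold units over the n inputs feeding a single output threshold unit, plus a consistency check against labelled examples. The function names and toy data are illustrative assumptions. Note that verifying a candidate setting of weights and thresholds is fast, which places the decision problem in NP; the result above is that finding such a setting is NP-complete.

```python
# Illustrative sketch of a 2-layer, 3-node, n-input linear threshold network.
# All names and the toy data below are assumptions for illustration only.

def linear_threshold(weights, threshold, x):
    """Output 1 if the weighted sum of the inputs meets the threshold, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0

def three_node_net(params, x):
    """Two hidden threshold units over x, then one output threshold unit over their outputs."""
    (w1, t1), (w2, t2), (w_out, t_out) = params
    hidden = (linear_threshold(w1, t1, x), linear_threshold(w2, t2, x))
    return linear_threshold(w_out, t_out, hidden)

def is_consistent(params, examples):
    """Check a candidate setting of weights/thresholds against labelled examples.

    This verification is linear in the number of examples; the hardness result
    concerns *finding* a consistent setting, not checking one.
    """
    return all(three_node_net(params, x) == y for x, y in examples)

# Toy usage: a hand-picked parameter setting checked against two labelled inputs.
examples = [((0, 1, 1), 1), ((1, 0, 0), 0)]
params = (((1.0, 1.0, 1.0), 2.0),   # hidden unit 1
          ((-1.0, 1.0, 1.0), 1.0),  # hidden unit 2
          ((1.0, 1.0), 2.0))        # output unit fires only if both hidden units fire
print(is_consistent(params, examples))  # True for this toy data
```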
We present a novel training algorithm for a feed forward neural network with a single hidden layer o...
It is well-known that neural networks are computationally hard to train. On the other hand, in pract...
Linear threshold elements are the basic building blocks of artificial neural networks. A linear thre...
We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions...
The back-propagation learning algorithm for multi-layered neural networks, which is often successful...
We consider the problem of learning in multilayer feed-forward networks of linear threshold units. W...
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weig...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...
We propose a novel learning algorithm to train networks with multi-layer linear-threshold ...
We deal with computational issues of loading a fixed-architecture neural network with a set of posit...
Ellerbrock TM. Multilayer neural networks: learnability, network generation, and network simplifica...
We consider the problem of efficiently learning in two-layer neural networks. We investigate...
We consider the computational complexity of learning by neural nets. We are interested in how hard...
This paper proves that the task of computing near-optimal weights for sigmoidal nodes under the L1 ...
This paper deals with learnability of concept classes defined by neural networks, showing the hardne...