We deal with computational issues of loading a fixed-architecture neural network with a set of positive and negative examples. This is the first result on the hardness of loading networks that do not consist of binary-threshold neurons but instead use a particular continuous activation function commonly found in the neural-network literature. We observe that the loading problem is solvable in polynomial time when the input dimension is constant; otherwise, however, any possible learning algorithm based on a particular fixed architecture faces severe computational barriers. Similar theorems were proved earlier by Megiddo and by Blum and Rivest, but only for the case of binary-threshold networks. Our theoretical results lend further justification t...
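To make the loading problem concrete, here is a minimal sketch (the function names, network sizes, and training parameters are our own illustration, not the paper's construction): given a fixed two-layer architecture with a continuous sigmoid activation and a labeled sample, "loading" asks whether weights exist that classify every example correctly. The sketch searches for such weights by gradient descent on squared error; the hardness results above say that no heuristic of this kind can be both efficient and reliable for all fixed architectures as the input dimension grows, and indeed this one may fail from a bad initialization.

    # Minimal sketch of the "loading" problem for a fixed two-layer
    # sigmoid architecture (illustrative only; sizes/names are assumptions).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def load_by_gradient_descent(X, y, hidden=2, steps=20000, lr=1.0, seed=0):
        """Try to find weights consistent with the sample (X, y).

        Exact loading is the decision problem proved hard above;
        gradient descent is only a heuristic stand-in here.
        """
        rng = np.random.default_rng(seed)
        n = X.shape[1]
        W1 = rng.normal(size=(n, hidden)); b1 = np.zeros(hidden)
        w2 = rng.normal(size=hidden);      b2 = 0.0
        for _ in range(steps):
            h = sigmoid(X @ W1 + b1)             # hidden activations
            out = sigmoid(h @ w2 + b2)           # network output in (0, 1)
            g_out = (out - y) * out * (1 - out)  # squared-error gradient
            w2 -= lr * h.T @ g_out / len(y)
            b2 -= lr * g_out.mean()
            g_h = np.outer(g_out, w2) * h * (1 - h)
            W1 -= lr * X.T @ g_h / len(y)
            b1 -= lr * g_h.mean(axis=0)
        out = sigmoid(sigmoid(X @ W1 + b1) @ w2 + b2)
        return np.all((out > 0.5) == (y > 0.5))  # consistent with all examples?

    # XOR: a classic sample no single threshold unit can load,
    # but a two-hidden-unit architecture can.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    print("loaded consistently:", load_by_gradient_descent(X, y))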
The computational power of neural networks depends on properties of the real numbers used as weights...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...
The back-propagation learning algorithm for multi-layered neural networks, which is often successful...
This paper deals with learnability of concept classes defined by neural networks, showing the hardne...
This paper reviews some of the recent results in applying the theory of Probably Approximately Corre...
It is well-known that neural networks are computationally hard to train. On the other hand, in pract...
It is shown that high-order feedforward neural nets of constant depth with piecewise-polyn...
This paper shows that neural networks which use continuous activation functions have VC dimension at...
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weig...
We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions...
We survey some relationships between computational complexity and neural network theory. Here, only ...
We formalize a notion of loading information into connectionist networks that characterizes ...
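For reference, the 2-layer, 3-node, n-input architecture from the Blum and Rivest abstract above can be sketched as two hidden linear-threshold units feeding one output threshold unit. This is a minimal illustration under our own assumptions about weight names and values; their NP-completeness result concerns deciding whether consistent weights exist for a given sample, not evaluating the net.

    # Sketch of a 2-layer, 3-node, n-input linear-threshold network:
    # two hidden threshold units feed one output threshold unit.
    # Weight/threshold values below are illustrative assumptions.
    import numpy as np

    def threshold(z):
        return (z >= 0).astype(int)  # linear threshold activation

    def three_node_net(x, W_hidden, t_hidden, w_out, t_out):
        """Forward pass: x is an n-vector of inputs."""
        h = threshold(W_hidden @ x - t_hidden)    # two hidden units
        return int(threshold(w_out @ h - t_out))  # single output unit

    # Example: weights realizing XOR on 2 inputs with this architecture.
    W_hidden = np.array([[1.0, 1.0], [-1.0, -1.0]])  # h1: x1+x2 >= 1; h2: x1+x2 <= 1
    t_hidden = np.array([1.0, -1.0])
    w_out = np.array([1.0, 1.0]); t_out = 2.0        # output fires iff h1 AND h2
    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, "->", three_node_net(np.array(x, dtype=float),
                                      W_hidden, t_hidden, w_out, t_out))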