It is well-known that neural networks are computationally hard to train. On the other hand, in practice, modern-day neural networks are trained efficiently using SGD and a variety of tricks that include different activation functions (e.g., ReLU), over-specification (i.e., training networks that are larger than needed), and regularization. In this paper we revisit the computational complexity of training neural networks from a modern perspective. We provide both positive and negative results, some of which yield new provably efficient and practical algorithms for training certain types of neural networks.
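To make the practical recipe the abstract alludes to concrete, the following is a minimal sketch of SGD on an over-specified two-layer ReLU network with squared loss. It is illustrative only, not code from the paper: the teacher/student data setup, the network width, and all hyperparameters (learning rate, batch size, epoch count) are assumptions chosen for the example.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): train an
# over-specified two-layer ReLU network with plain mini-batch SGD.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: targets come from a small "teacher" ReLU network.
n, d, k_teacher = 512, 10, 5
X = rng.normal(size=(n, d))
Wt = rng.normal(size=(d, k_teacher))
y = np.maximum(X @ Wt, 0.0).sum(axis=1, keepdims=True)

# Over-specified "student": far more hidden units than the teacher needs.
k = 100
W1 = rng.normal(size=(d, k)) / np.sqrt(d)
b1 = np.zeros(k)
W2 = rng.normal(size=(k, 1)) / np.sqrt(k)

lr, epochs, batch = 1e-2, 200, 32
for epoch in range(epochs):
    perm = rng.permutation(n)
    for i in range(0, n, batch):
        idx = perm[i:i + batch]
        xb, yb = X[idx], y[idx]
        # Forward pass.
        z = xb @ W1 + b1           # pre-activations
        h = np.maximum(z, 0.0)     # ReLU
        err = h @ W2 - yb          # residual of the squared loss
        # Backward pass (chain rule by hand).
        gW2 = h.T @ err / len(idx)
        gz = (err @ W2.T) * (z > 0)  # ReLU derivative gates the gradient
        gW1 = xb.T @ gz / len(idx)
        gb1 = gz.mean(axis=0)
        # SGD step.
        W1 -= lr * gW1
        b1 -= lr * gb1
        W2 -= lr * gW2

mse = np.mean((np.maximum(X @ W1 + b1, 0.0) @ W2 - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

Over-specification here means the student has 100 hidden units while the data is generated by a 5-unit teacher; in this regime, plain SGD on the non-convex objective typically drives the training error down quickly, which is the empirical phenomenon the paper's complexity-theoretic analysis sets out to explain.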