We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully connected neural network to fit a given set of data points. This problem is known as empirical risk minimization in the machine learning community. We show that the problem is $\exists\mathbb{R}$-complete. This complexity class can be defined as the set of algorithmic problems that are polynomial-time equivalent to finding real roots of a polynomial with integer coefficients. Furthermore, we show that arbitrary algebraic numbers are required as weights to be able to train some instances to optimality, even if all data points are rational. Our results hold even if the following restrictions are all added simultaneously:
$\bullet$ There are e...
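For concreteness, the two decision problems behind this statement can be sketched as follows. The first is the canonical $\exists\mathbb{R}$-complete problem described in the abstract; the second is one natural formalization of the training problem, assuming a sum-of-squared-errors objective with threshold $\gamma$, weight matrices $W_1, W_2$, and bias vectors $b_1, b_2$ (this notation is ours, not the paper's, and the paper's formal encoding may differ).
\[
\textsc{ETR}:\ \text{given } p \in \mathbb{Z}[z_1, \dots, z_n],\ \text{decide whether } \exists\, z \in \mathbb{R}^n \text{ with } p(z) = 0.
\]
\[
\textsc{Train}:\ \text{given data } (x_i, y_i)_{i=1}^{m} \text{ and } \gamma \in \mathbb{Q},\ \text{decide whether } \min_{W_1, b_1, W_2, b_2}\ \sum_{i=1}^{m} \bigl\lVert W_2\,\sigma(W_1 x_i + b_1) + b_2 - y_i \bigr\rVert_2^2 \;\le\; \gamma,
\]
where $\sigma$ is the activation function applied componentwise, e.g. $\sigma(t) = \max(0, t)$ for ReLU. Exact fitting corresponds to the special case $\gamma = 0$.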
We deal with the problem of efficient learning of feedforward neural networks. First, we con...
It is well-known that neural networks are computationally hard to train. On the other hand, in pract...
Understanding the computational complexity of training simple neural networks with rectified linear ...
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weig...
We consider the computational complexity of learning by neural nets. We are interested in how hard...
We consider the problem of efficiently learning in two-layer neural networks. We investigate...
We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions...
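For reference, a linear threshold node with weight vector $w \in \mathbb{R}^n$ and threshold $\theta \in \mathbb{R}$ (notation ours) computes the Boolean-valued function
\[
f(x) = \begin{cases} 1 & \text{if } w \cdot x \ge \theta, \\ 0 & \text{otherwise,} \end{cases}
\]
so training such a network means choosing $w$ and $\theta$ at every node so that the network's outputs match the given labels.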
In artificial neural networks, learning from data is a computationally demanding task in which a lar...
We demonstrate that the problem of training neural networks with small (average) squared error is co...
We investigate the complexity of the reachability problem for (deep) neural networks: does it compute...
We deal with computational issues of loading a fixed-architecture neural network with a set of posit...
Neural networks (NNs) have seen a surge in popularity due to their unprecedented practical success i...
The computational power of neural networks depends on properties of the real numbers used as weights...