Abstract

We deal with the problem of efficient learning of feedforward neural networks. First, we consider the objective of maximizing the ratio of correctly classified points to the size of the training set. We show that it is NP-hard to approximate this ratio within some constant relative error for architectures with varying input dimension, one hidden layer, and two hidden neurons, where either the activation function in the hidden layer is the sigmoid function and epsilon-separation is assumed, or the activation function is the semilinear function. For single-hidden-layer threshold networks with varying input dimension and n hidden neurons, approximation within a relative error depending on n is NP-hard even...
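For concreteness, the following is a minimal sketch (not part of the paper) of the objective referred to above: the fraction of training points classified correctly by a one-hidden-layer network with two sigmoid hidden neurons. The function name success_ratio, the array shapes, and the 0.5 decision threshold are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def success_ratio(W1, b1, w2, b2, X, y):
    """Fraction of training points classified correctly by a
    one-hidden-layer network with two sigmoid hidden neurons.

    W1: (2, d) hidden-layer weights, b1: (2,) hidden biases,
    w2: (2,) output weights,         b2: scalar output bias,
    X:  (m, d) training inputs,      y:  (m,) labels in {0, 1}.
    """
    hidden = sigmoid(X @ W1.T + b1)          # (m, 2) hidden activations
    out = sigmoid(hidden @ w2 + b2)          # (m,) network outputs
    predictions = (out >= 0.5).astype(int)   # illustrative 0.5 threshold
    return np.mean(predictions == y)

In this notation, the first hardness result says that, unless P = NP, no polynomial-time algorithm can approximate the maximum of success_ratio over the weights and biases within some constant relative error when the input dimension d is part of the input.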