Abstract. We deal with the problem of efficient learning of feedforward neural networks. First, we consider the objective of maximizing the fraction of training points that are classified correctly. We show that it is NP-hard to approximate this ratio within some constant relative error for architectures with varying input dimension, one hidden layer, and two hidden neurons, where either the activation function in the hidden layer is the sigmoid function and epsilon-separation is assumed, or the activation function is the semilinear function. For single-hidden-layer threshold networks with varying input dimension and n hidden neurons, approximation within a relative error depending on n is NP-hard even...
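The objective in this abstract is the fraction of training points a fixed network classifies correctly. As a minimal sketch of the architecture the hardness result concerns (one hidden layer, two sigmoid units, thresholded output), the following computes that success ratio on a hypothetical toy dataset; all weights, names, and the data itself are illustrative assumptions, not part of the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_unit_net(X, W1, b1, w2, b2):
    """One hidden layer with two sigmoid units, thresholded linear output."""
    H = sigmoid(X @ W1 + b1)          # hidden activations, shape (n_samples, 2)
    return (H @ w2 + b2 > 0).astype(int)

def success_ratio(X, y, params):
    """Fraction of training points classified correctly -- the objective
    whose constant-factor approximation the abstract shows is NP-hard."""
    preds = two_unit_net(X, *params)
    return float(np.mean(preds == y))

# Hypothetical toy data: 2-D points labeled by whether x0 + x1 > 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=(200, 2))
y = (X.sum(axis=1) > 1).astype(int)

# Hand-picked weights realizing this separation: both hidden units read
# x0 + x1, with opposite signs, so the output is positive iff x0 + x1 > 1.
W1 = np.array([[4.0, -4.0],
               [4.0, -4.0]])
b1 = np.array([-4.0, 4.0])
w2 = np.array([1.0, -1.0])
b2 = 0.0

ratio = success_ratio(X, y, (W1, b1, w2, b2))
```

For this separable toy set a perfect ratio is attainable by hand; the hardness result says that *finding* near-optimal weights is intractable in general, not that evaluating the ratio is.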
It has been shown that, when used for pattern recognition with supervised lear...
tput) is very commonly used to approximate unknown mappings. If the output layer is linear, such a n...
We study the number of hidden layers required by a multilayer neural network with threshold units to...
The back propagation algorithm caused a tremendous breakthrough in the application of multilayer per...
DasGupta B, Hammer B. On approximate learning by multi-layered feedforward circuits. Theoretical Com...
DasGupta B, Hammer B. On Approximate Learning by Multi-layered Feedforward Circuits. In: Arimura H, ...
Abstract. We consider the problem of efficiently learning in two-layer neural networks. We investigate...
We consider the problem of learning in multilayer feed-forward networks of linear threshold units. W...
Abstract. We propose a novel learning algorithm to train networks with multi-layer linear-threshold ...
The back-propagation learning algorithm for multi-layered neural networks, which is often successful...
Abstract. Approximation properties of the MLP (multilayer feedforward perceptron) model of neural netw...
In this paper, we present a review of some recent works on approximation by feedforward neural netwo...
Abstract. It has been known for some years that the uniform-density problem for forward neural networ...
Abstract. It is shown that the general approximation property of feed-forward multilayer perceptron ne...