This paper uses the entropy of a dataset (i.e., its description length in bits) to prove tight bounds on the size of neural networks solving a classification problem. First, through a sequence of geometrical steps, the authors constructively derive an upper bound of O(mn) on the number of bits required to describe a given dataset, where m is the number of examples and n is the input dimension (i.e., examples lie in R^n). This result is then used nonconstructively to bound the size of neural networks that correctly classify that dataset.
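As a rough illustration only (this is not the paper's geometric construction), even a naive fixed-precision encoding of a labeled dataset already yields a bit count that grows as O(mn) for fixed precision p; the constructive argument in the paper is what makes the constant and precision explicit:

```python
# Naive description-length sketch, assuming each coordinate of each example
# is stored at a fixed precision of p bits, plus one label bit per example.
# This is an assumption for illustration, not the paper's entropy bound.

def naive_bit_bound(m: int, n: int, p: int = 32) -> int:
    """Bits to store m examples in R^n (p bits per coordinate) plus m binary labels."""
    return m * n * p + m  # coordinate storage + one label bit per example

# For fixed p, doubling either m or n doubles the dominant term,
# so the bound scales as O(mn).
print(naive_bit_bound(100, 10))  # 100*10*32 + 100 = 32100
```

The hypothetical `naive_bit_bound` helper is only meant to make the O(mn) scaling concrete; the paper's bound is tighter because the geometric steps avoid storing raw coordinates at full precision.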
The choice of dictionaries of computational units suitable for efficient computation of binary class...
Abstract: How does the connectivity of a neural network (number of synapses per neuron) relate to the ...
This paper starts by overviewing results dealing with the approximation capabilities of neural netwo...
In this paper the authors prove two new lower bounds for the number of bits required by neural netwo...
This paper presents a constructive approach to estimating the size of a neural network necessary to ...
In recent years, multilayer feedforward neural networks (NN) have been shown to be very effective to...
This paper addresses the relationship between the number of hidden layer nodes in a neural network, ...
We study the sample complexity of learning neural networks by providing new bounds on their Rademach...
Sample complexity results from computational learning theory, when applied to neural network learnin...
Neural networks (NNs) have been experimentally shown to be quite effective in many applications. Thi...
For classes of concepts defined by certain classes of analytic functions depending on n parameters,...
In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly...
This paper shows that neural networks which use continuous activation functions have VC dimension at...
Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Klosterwie...
Abstract: This paper considers the classification capabilities of neural networks which incorporate a ...