We consider the problem of learning in multilayer feed-forward networks of linear threshold units. We show that the Vapnik-Chervonenkis dimension of the class of functions that can be computed by a two-layer threshold network with real inputs is at least proportional to the number of weights in the network. This result also holds for a large class of two-layer networks with binary inputs, and a large class of three-layer networks with real inputs. In Valiant's probably approximately correct learning framework, this implies that the number of examples necessary for learning in these networks is at least linear in the number of weights. This bound is within a log factor of the upper bound.

1 INTRODUCTION

Neural networks have been widely...
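The model of computation in the abstract above — a two-layer feed-forward network of linear threshold units — can be sketched concretely. The following is a minimal illustration, not taken from the paper: the weight values are hypothetical, chosen so that two hidden threshold units and one output threshold unit realize XOR, a function no single threshold unit can compute.

```python
import numpy as np

def threshold(z):
    """Linear threshold activation: 1 where net input >= 0, else 0."""
    return (np.asarray(z) >= 0).astype(int)

def two_layer_threshold_net(x, W1, b1, w2, b2):
    """Two-layer network of linear threshold units:
    a hidden layer of threshold units feeding one output threshold unit."""
    h = threshold(W1 @ x + b1)          # hidden-layer activations (0/1 vector)
    return int(threshold(w2 @ h + b2))  # single threshold output unit

# Hypothetical weights realizing XOR with n = 2 inputs and 2 hidden units.
W1 = np.array([[1.0, 1.0],   # hidden unit 1 fires when x1 + x2 >= 1 (OR)
               [1.0, 1.0]])  # hidden unit 2 fires when x1 + x2 >= 2 (AND)
b1 = np.array([-1.0, -2.0])
w2 = np.array([1.0, -2.0])   # output fires for "OR and not AND"
b2 = -0.5
```

The "number of weights" that the lower bound is measured against counts the entries of `W1`, `b1`, `w2`, and `b2` — here 4 + 2 + 2 + 1 = 9 adjustable parameters.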
The Vapnik-Chervonenkis dimension (VC-dim) characterizes the sample learning complexity of a classif...
We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions...
This paper shows that neural networks which use continuous activation functions have VC dimension at...
A general relationship is developed between the VC-dimension and the statistical lower epsilon-capac...
Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedfor...
Abstract: We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold...
The Vapnik-Chervonenkis dimension VC-dimension(N) of a neural net N with n input nodes is defined as...
AbstractMost of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on...
It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedf...
This paper applies the theory of probably approximately correct (PAC) learning to multiple-output fe...
AbstractWe deal with the problem of efficient learning of feedforward neural networks. First, we con...
The Vapnik-Chervonenkis dimension has proven to be of great use in the theoretical study of generali...
A product unit is a formal neuron that multiplies its input values instead of summing them. Further...
AbstractThis paper applies the theory of Probably Approximately Correct (PAC) learning to multiple o...
This paper applies the theory of Probably Approximately Correct (PAC) learning to multiple output fe...