We describe a series of careful numerical experiments which measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test whether average generalization performance can surpass the worst-case bounds obtained from formal learning theory using the Vapnik-Chervonenkis dimension (Blumer et al., 1989). We indeed find that, in some cases, the average generalization is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound. In other cases, we do find the 1/m behavior of the VC bound, and in these cases, the numerical prefactor is closely related to prefactor c...
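For concreteness, the two scaling regimes contrasted above can be written schematically as $\epsilon_{\mathrm{VC}}(m) \lesssim c\, d_{\mathrm{VC}}/m$ for the worst-case VC-style bound, versus $\epsilon_{\mathrm{avg}}(m) \sim e^{-m/m_0}$ for the exponential average-case decay reported in some experiments; here $c$ and $m_0$ are illustrative constants, not values given in the truncated abstract.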
The Vapnik-Chervonenkis dimension (VC-dim) characterizes the sample learning complexity of a classif...
Two learning algorithms, Neural Nets and Function Decomposition, are tested on a set of real-world p...
The Vapnik-Chervonenkis dimension has proven to be of great use in the theoretical study of generali...
By making assumptions on the probability distribution of the potentials in a feed-forward neural net...
This thesis presents a new theory of generalization in neural network types of learning machines. Th...
In a changing environment, forgetting old samples is an effective method to improve the adaptability...
We present a unified framework for a number of different ways of failing to generalize properly. Du...
We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorit...
We derive an improvement to the Vapnik-Chervonenkis bounds on generalization error which applies to ...
For classes of concepts defined by certain classes of analytic functions depending on n parameters,...
This paper shows that neural networks which use continuous activation functions have VC dime...