In this paper we analyse the effect of introducing structure into the input distribution on the generalization ability of a simple perceptron. The simple case of two clusters of input data and a linearly separable rule is considered. We find that the generalization ability improves with the separation between the clusters and is bounded from below by the result for the unstructured case, which is recovered as the separation between the clusters vanishes. The asymptotic behaviour for large training sets, however, is the same for structured and unstructured input distributions. For small training sets, the dependence of the generalization error on the number of examples is observed to be non-monotonic for certain values of the model parameters.
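A minimal numerical sketch of the setup described above is given below. It assumes a concrete choice of geometry and training rule that the abstract does not specify: inputs are drawn from two Gaussian clusters centred at ±d·u along a random direction u, labels come from a fixed teacher perceptron w_star (the linearly separable rule), and the student is trained with the classical perceptron learning rule rather than the statistical-mechanics (e.g. Gibbs) treatment the paper itself analyses. The parameters N, d, and P are illustrative; the sketch only demonstrates measuring generalization error as a function of cluster separation, not the paper's analytical results.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50    # input dimension (assumed)
d = 2.0   # cluster separation (assumed)
P = 200   # number of training examples (assumed)

# Teacher: a fixed linearly separable rule, y = sign(w_star . x)
w_star = rng.standard_normal(N)
w_star /= np.linalg.norm(w_star)

# Structured inputs: two Gaussian clusters centred at +/- d*u
u = rng.standard_normal(N)
u /= np.linalg.norm(u)

def sample_inputs(n):
    """Draw n inputs, each from one of two unit-variance clusters at +/- d*u."""
    signs = rng.choice([-1.0, 1.0], size=(n, 1))
    return signs * d * u + rng.standard_normal((n, N))

X = sample_inputs(P)
y = np.sign(X @ w_star)

# Student: plain perceptron learning rule on the training set
w = np.zeros(N)
for _ in range(100):  # sweeps over the training data
    for x_mu, y_mu in zip(X, y):
        if np.sign(w @ x_mu) != y_mu:
            w += y_mu * x_mu

# Generalization error estimated on fresh examples from the same distribution
X_test = sample_inputs(10_000)
y_test = np.sign(X_test @ w_star)
eps = np.mean(np.sign(X_test @ w) != y_test)
print(f"separation d={d}: estimated generalization error = {eps:.3f}")
```

Sweeping d from 0 upwards in this sketch (with P and N fixed) gives a rough empirical counterpart to the abstract's claim: d = 0 reproduces the unstructured case, and larger separations should not degrade the measured generalization ability.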