We analyze the generalization ability of a simple perceptron acting on a structured input distribution for the simple case of two clusters of input data and a linearly separable rule. The generalization ability, computed for three learning scenarios (maximal stability, Gibbs, and optimal learning), is found to improve with the separation between the clusters and is bounded from below by the result for the unstructured case, which is recovered as the separation between the clusters vanishes. The asymptotic behavior for large training sets is the same for structured and unstructured input distributions. For small training sets, the generalization error of the maximally stable perceptron exhibits a nonmonotonic dependence on the number of examples for certain...
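To make the setting concrete, the following is a minimal numerical sketch of the scenario described in the abstract: inputs drawn from two symmetric Gaussian clusters, labels given by a fixed linearly separable teacher rule, and the generalization error of a trained student estimated by Monte Carlo. All names, parameter values, and the plain perceptron training rule are illustrative assumptions; this is not the paper's replica calculation, nor an implementation of the maximal-stability, Gibbs, or optimal learning scenarios it analyzes.

import numpy as np

rng = np.random.default_rng(0)

N = 100    # input dimension (assumed value)
P = 200    # number of training examples (assumed value)
sep = 2.0  # separation between the two cluster centres; sep = 0 gives the unstructured case

# Teacher: a fixed linearly separable rule defined by a random unit vector.
w_teacher = rng.standard_normal(N)
w_teacher /= np.linalg.norm(w_teacher)

# Structured input distribution: two symmetric Gaussian clusters along a random axis.
cluster_axis = rng.standard_normal(N)
cluster_axis /= np.linalg.norm(cluster_axis)

def sample_inputs(n):
    # Each example sits in one of the two clusters, centred at +/- (sep/2) * cluster_axis.
    signs = rng.choice([-1.0, 1.0], size=(n, 1))
    return rng.standard_normal((n, N)) + signs * (sep / 2) * cluster_axis

def labels(x):
    return np.sign(x @ w_teacher)

# Student: plain perceptron learning on the training set (a simple stand-in rule).
x_train = sample_inputs(P)
y_train = labels(x_train)
w_student = np.zeros(N)
for _ in range(100):                       # a few sweeps over the training set
    for x, y in zip(x_train, y_train):
        if y * (x @ w_student) <= 0:       # misclassified or on the boundary
            w_student += y * x

# Generalization error: probability of disagreeing with the teacher on fresh inputs.
x_test = sample_inputs(20000)
eps_g = np.mean(labels(x_test) != np.sign(x_test @ w_student))
print(f"estimated generalization error: {eps_g:.3f}")

Sweeping sep from 0 upward in this sketch gives a numerical baseline for the qualitative statement above, with sep = 0 playing the role of the unstructured case; since the student rule here is not one of the three learning scenarios treated in the paper, only the setup, not the quantitative results, should be read from it.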