A number of results have bounded generalization of a classifier in terms of its margin on the training points. There has been some debate about whether the minimum margin is the best measure of the distribution of training set margin values with which to estimate the generalization. Freund and Schapire [8] have shown how a different function of the margin distribution can be used to bound the number of mistakes of an on-line learning algorithm for a perceptron, as well as an expected error bound. We show that a slight generalization of their construction can be used to give a PAC-style bound on the tail of the distribution of the generalization errors that arise from a given sample size. Algorithms arising from the approach are related to those ...
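The distinction drawn above, between the minimum margin and the full margin distribution, can be illustrated with a short sketch. This example is not from any of the cited papers; it simply computes the geometric margins of training points under a hypothetical linear classifier `w`, whose minimum is the quantity typical SVM bounds depend on, while the whole vector of values is the margin distribution that the alternative bounds use:

```python
import numpy as np

def margins(w, X, y):
    """Geometric margin y * (w . x) / ||w|| of each labeled point (x, y)."""
    return y * (X @ w) / np.linalg.norm(w)

# Illustrative data: labels chosen consistent with w, so all margins are >= 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w = np.array([1.0, -0.5, 2.0])   # hypothetical linear classifier
y = np.sign(X @ w)

m = margins(w, X, y)
print(m.min())               # the minimum margin
print(np.quantile(m, 0.1))   # one summary of the margin distribution
```

A bound based only on `m.min()` can be loose when a single training point lies close to the separating hyperplane, whereas a functional of the whole distribution `m` can remain informative.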
We present an improvement of Novikoff's perceptron convergence theorem. Reinterpreting this mis...
We study generalization properties of linear learning algorithms and develop a data dependent approa...
In many classification procedures, the classification function is obtained (or trained) by minimizi...
A number of results have bounded generalization of a classifier in terms of its margin on the training...
A number of results have bounded generalization error of a classifier in terms of its margin on the ...
A number of results have bounded generalization of a classifier in terms of its margin on the traini...
Typical bounds on generalization of Support Vector Machines are based on the minimum distance betwee...
Generalization bounds depending on the margin of a classifier are a relatively recent development. T...
We present a bound on the generalisation error of linear classifiers in terms of a refined margin qu...
Recent theoretical results have shown that improved bounds on generalization error of classifiers ca...
We derive new margin-based inequalities for the probability of error of classifiers. The main featur...
In this paper we show how to extract a hypothesis with small risk from the ensemble of hypotheses ge...
We present distribution independent bounds on the generalization misclassification performance of a ...