Recent theoretical results have shown that the generalization performance of thresholded convex combinations of base classifiers is greatly improved if the underlying convex combination has large margins on the training data (i.e., correct examples are classified well away from the decision boundary). Neural network algorithms and AdaBoost have been shown to implicitly maximize margins, thus providing some theoretical justification for their remarkably good generalization performance. In this paper we are concerned with maximizing the margin explicitly. In particular, we prove a theorem bounding the generalization performance of convex combinations in terms of general cost functions of the margin, in contrast to previous results, which were...
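To make the quantities discussed above concrete, here is a minimal sketch (not the paper's own implementation) of the margin of a convex combination of base classifiers and of a training-sample average of a cost function of the margin. The base-classifier predictions, the weights, and the particular sigmoidal cost below are illustrative assumptions; any cost function satisfying the conditions of such a bound could be substituted.

```python
import numpy as np

def margins(base_preds, alphas, y):
    """Margins of a convex combination of base classifiers.

    base_preds: (n_classifiers, n_samples) array of {-1, +1} predictions.
    alphas:     (n_classifiers,) non-negative weights (normalised below).
    y:          (n_samples,) true labels in {-1, +1}.

    The margin of example i is y_i * sum_t alpha_t * h_t(x_i); it lies in
    [-1, 1] and is positive exactly when the voted classifier is correct.
    """
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()              # make it a convex combination
    f = alphas @ np.asarray(base_preds, float)  # combined real-valued output
    return np.asarray(y, float) * f

def margin_cost(m, theta=0.1, scale=20.0):
    """Illustrative smooth, monotonically decreasing cost of the margin,
    approximating the step cost 1[m <= theta]."""
    return 1.0 / (1.0 + np.exp(scale * (m - theta)))

# Toy example: three base classifiers voting on five training points.
H = np.array([[+1, +1, -1, +1, -1],
              [+1, -1, -1, +1, +1],
              [-1, +1, +1, +1, -1]])
y = np.array([+1, +1, -1, +1, -1])
alphas = np.array([0.5, 0.3, 0.2])

m = margins(H, alphas, y)
print("margins:", m)
print("average margin cost:", margin_cost(m).mean())
print("training error:", np.mean(m <= 0))
```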
[Figure: Cumulative training margin distributions for AdaBoost versus our "Direct Optimization Of Margins" algorithm.]
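The figure above plots cumulative training margin distributions: for each threshold theta, the fraction of training examples whose margin is at most theta. As a hedged illustration of how such a curve is computed (with purely synthetic margins standing in for a real classifier's outputs), the value of the curve at theta = 0 is exactly the training error:

```python
import numpy as np

def cumulative_margin_distribution(margins, thetas):
    """For each threshold theta, the fraction of training examples whose
    margin y_i * f(x_i) is at most theta."""
    margins = np.asarray(margins, dtype=float)
    return np.array([(margins <= t).mean() for t in thetas])

# Purely synthetic margins standing in for a voted classifier's outputs on a
# training set; a real curve would use margins computed as in the sketch above.
rng = np.random.default_rng(0)
synthetic_margins = np.clip(rng.normal(loc=0.2, scale=0.25, size=1000), -1.0, 1.0)

thetas = np.linspace(-1.0, 1.0, 201)
cdf = cumulative_margin_distribution(synthetic_margins, thetas)

# The curve's value at theta = 0 is the training error.
print("training error:", np.mean(synthetic_margins <= 0.0))
print("fraction of examples with margin <= 0.1:", np.mean(synthetic_margins <= 0.1))
```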