We study the leave-one-out and generalization errors of voting combinations of learning machines. A special case considered is a variant of bagging. We analyze in detail combinations of kernel machines, such as support vector machines, and present theoretical estimates of their leave-one-out error. We also derive novel bounds on the stability of combinations of any classifiers. These bounds can be used to show formally that, for example, bagging increases the stability of unstable learning machines. We report experiments supporting the theoretical findings.
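As a concrete illustration of the quantities discussed above (not the paper's analytical estimator), the following sketch measures the empirical leave-one-out error of a voting combination of support vector machines obtained by bagging; the dataset, kernel, and ensemble size are arbitrary choices made for the example.

```python
# Hedged sketch: brute-force leave-one-out error of a bagged SVM ensemble.
# All data and hyperparameters here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Synthetic binary classification data (placeholder for a real dataset).
X, y = make_classification(n_samples=60, n_features=10, random_state=0)

# Voting combination: each base SVM is trained on a bootstrap subsample,
# and the ensemble classifies by majority vote over the base machines.
ensemble = BaggingClassifier(
    SVC(kernel="rbf", C=1.0),
    n_estimators=25,
    max_samples=0.8,
    random_state=0,
)

# Leave-one-out: retrain on n-1 points and test on the single held-out point,
# once per point; the error is the fraction of held-out points misclassified.
loo_accuracy = cross_val_score(ensemble, X, y, cv=LeaveOneOut())
print(f"empirical leave-one-out error: {1.0 - loo_accuracy.mean():.3f}")
```

This brute-force computation requires retraining the whole ensemble n times; the theoretical estimates studied in the paper aim to approximate or bound this quantity without that cost.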