We establish a mistake bound for an ensemble classification method based on maximizing the entropy of the voting weights subject to margin constraints. The bound matches a general bound proved for the Weighted Majority Algorithm and is similar to bounds known for other variants of Winnow. We then prove a more refined bound that yields a nearly optimal algorithm for learning disjunctions, again based on the maximum entropy principle. Finally, we describe a simplification of the on-line maximum entropy method in which, after each iteration, the margin constraints are replaced with a single linear inequality. The simplified algorithm, which takes a form similar to Winnow, achieves the same mistake bounds.
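The abstract does not spell out the update rule, but since the simplified algorithm is said to take a form similar to Winnow, a minimal sketch of standard Winnow (Littlestone's multiplicative-update learner for monotone disjunctions) may help fix the form being referred to. The function name `winnow`, the threshold `theta = n`, and the update factor `alpha = 2` are conventional illustrative choices, not details taken from the paper.

```python
import numpy as np

def winnow(stream, n, theta=None, alpha=2.0):
    """Standard Winnow for monotone disjunctions over n Boolean attributes.

    stream: iterable of (x, y), where x is a 0/1 vector of length n
            and y is the 0/1 label.
    Illustrative sketch of the multiplicative-update form only; this is
    not claimed to be the paper's exact algorithm.
    """
    if theta is None:
        theta = float(n)            # Littlestone's usual threshold choice
    w = np.ones(n)                  # uniform start: the max-entropy weights
    mistakes = 0
    for x, y in stream:
        x = np.asarray(x)
        y_hat = 1 if w @ x >= theta else 0
        if y_hat != y:
            mistakes += 1
            if y == 1:              # false negative: promote active attributes
                w[x == 1] *= alpha
            else:                   # false positive: demote active attributes
                w[x == 1] /= alpha
    return w, mistakes

# Example: learn the disjunction x1 OR x3 over n = 5 Boolean attributes.
rng = np.random.default_rng(0)
n = 5
examples = [(x, int(x[0] or x[2]))
            for x in (rng.integers(0, 2, size=n) for _ in range(500))]
w, mistakes = winnow(examples, n)
print(w, mistakes)
```

Note that the uniform initial weight vector is itself the maximum-entropy starting point, which is one way to see the connection the abstract draws between the multiplicative-update form and the entropy-maximization view.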