It has been recently shown that, under the margin (or low-noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, i.e., rates faster than $n^{-1/2}$. The works on this subject suggested the following two conjectures: (i) the best achievable fast rate is of the order $n^{-1}$, and (ii) plug-in classifiers generally converge more slowly than classifiers based on empirical risk minimization. We show that both conjectures are false. In particular, we construct plug-in classifiers that can achieve not only fast, but also {\it super-fast} rates, i.e., rates faster than $n^{-1}$. We establish minimax lower bounds showing that the obtained rates cannot be improved.
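For context, the mechanism behind such rates can be sketched from standard definitions (a generic outline with generic notation; $\hat\eta_n$ below stands in for an arbitrary nonparametric estimator and is not the paper's specific construction). A plug-in classifier estimates the regression function $\eta(x)=\mathbb{P}(Y=1\mid X=x)$ and thresholds it at $1/2$, while the margin assumption with exponent $\alpha>0$ bounds the mass near the decision boundary:
\[
\hat f_n(x)=\mathbf{1}\{\hat\eta_n(x)\ge 1/2\},
\qquad
\mathbb{P}\bigl(0<|\eta(X)-\tfrac12|\le t\bigr)\le C\,t^{\alpha}\quad\text{for all }t>0.
\]
If $\|\hat\eta_n-\eta\|_\infty\le\delta$, the plug-in rule can disagree with the Bayes classifier $f^{*}$ only where $|\eta(X)-\tfrac12|\le\delta$, so
\[
R(\hat f_n)-R^{*}
=\mathbb{E}\bigl[\,|2\eta(X)-1|\,\mathbf{1}\{\hat f_n(X)\ne f^{*}(X)\}\bigr]
\le 2\delta\,\mathbb{P}\bigl(0<|\eta(X)-\tfrac12|\le\delta\bigr)
\le 2C\,\delta^{1+\alpha},
\]
which is how a large margin exponent $\alpha$ can push the excess risk below $n^{-1}$ even when $\delta$ itself decays at a slower nonparametric rate.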
We construct a classifier which attains the rate of convergence $\log n/n$ under sparsity and margin...
To appear in the Annals of Statistics. We consider the problem of adaptation to the margin a...
We develop minimax optimal risk bounds for the general learning task consisting in predicting as wel...
While it is now well-known in the standard binary classification setup that, ...
The speed with which a learning algorithm converges as it is presented with more data is a central p...
We present new excess risk bounds for general unbounded loss functions including log loss and square...
In the context of density level set estimation, we study the convergence of general plug-in methods ...
A recent line of works, initiated by Russo and Xu, has shown that the generalization error of a lear...
The effect of measurement errors in discriminant analysis is investigated. Given observations $Z=X+\...
The effect of errors in variables in empirical minimization is investigated. Given a loss $l$ and a ...
Let $\mathcal{F}$ be a set of $M$ classification procedures with values in $[-1,1]$. Given a loss fu...