This paper reports on a family of computationally practical classifiers that converge to the Bayes error at near-minimax optimal rates for a variety of distributions. The classifiers are based on dyadic classification trees (DCTs), which involve adaptively pruned partitions of the feature space. A key aspect of DCTs is their spatial adaptivity, which enables local (rather than global) fitting of the decision boundary. Our risk analysis involves a spatial decomposition of the usual concentration inequalities, leading to a spatially adaptive, data-dependent pruning criterion. For any distribution on $[0,1]^d$ whose Bayes decision boundary behaves locally like a Lipschitz smooth function, we show that the DCT error converges to the Bayes error...
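The construction described above can be sketched in a few lines: grow a dyadic partition of $[0,1]^d$ by midpoint splits that cycle through the coordinates, then prune bottom-up with a complexity penalty. This is a minimal, assumption-laden illustration, not the paper's implementation; in particular, the flat per-split `penalty` below is a stand-in for the spatially adaptive, data-dependent criterion derived in the paper, and all function names are hypothetical.

```python
import numpy as np

def grow_dct(X, y, lo, hi, depth, max_depth):
    """Grow a dyadic classification tree over the cell [lo, hi].

    Each internal node splits its cell in half at the midpoint of one
    coordinate, cycling through coordinates with depth; each node stores
    its majority label and empirical misclassification count.
    """
    n, n1 = len(y), int(y.sum()) if len(y) else 0
    label = int(n1 > n - n1)            # majority vote (ties -> 0)
    err = min(n1, n - n1)               # errors if this node were a leaf
    if depth == max_depth or n <= 1:
        return {"leaf": True, "label": label, "n": n, "err": err}
    axis = depth % len(lo)              # cycle through coordinates
    mid = 0.5 * (lo[axis] + hi[axis])   # dyadic (midpoint) split
    left = X[:, axis] < mid
    hi_l, lo_r = hi.copy(), lo.copy()
    hi_l[axis], lo_r[axis] = mid, mid
    return {"leaf": False, "axis": axis, "mid": mid,
            "label": label, "n": n, "err": err,
            "left": grow_dct(X[left], y[left], lo, hi_l, depth + 1, max_depth),
            "right": grow_dct(X[~left], y[~left], lo_r, hi, depth + 1, max_depth)}

def subtree_err(node):
    """Total empirical errors over the leaves of a subtree."""
    if node["leaf"]:
        return node["err"]
    return subtree_err(node["left"]) + subtree_err(node["right"])

def prune(node, penalty):
    """Bottom-up pruning: collapse a split unless it reduces the
    empirical error count by more than `penalty` (a crude stand-in
    for the paper's spatially adaptive pruning criterion)."""
    if node["leaf"]:
        return node
    node["left"] = prune(node["left"], penalty)
    node["right"] = prune(node["right"], penalty)
    if node["err"] - subtree_err(node) <= penalty:
        return {"leaf": True, "label": node["label"],
                "n": node["n"], "err": node["err"]}
    return node

def predict(node, x):
    """Route a point down the pruned partition to its leaf label."""
    while not node["leaf"]:
        node = node["left"] if x[node["axis"]] < node["mid"] else node["right"]
    return node["label"]
```

For example, on points labeled by the half-space $x_1 > 0.5$, a depth-3 tree pruned with zero penalty collapses back to the single dyadic split at $x_1 = 0.5$, illustrating the local fitting: cells are refined only where the boundary actually passes.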
We derive a new asymptotic expansion for the global excess risk of a local-k-nearest neighbour class...
Risk bounds for Classification and Regression Trees (CART, Breiman et al., 1984) classifiers are obt...
We study how closely the optimal Bayes error rate can be approached using a classificatio...
This paper reports on a family of computationally practical classifiers that converge to the Bayes e...
We present an algorithm for exact Bayes optimal classification from a hypothesis space of decision t...
We propose a new nonparametric learning method based on multivariate dyadic regression trees (MDRTs)...
In this paper, we propose an adaptive kNN method for classification, in which different k are select...
We construct a classifier which attains the rate of convergence $\log n/n$ under sparsity and margin...
A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, an...
Margin adaptive risk bounds for Classification and Regression Trees (CART, Breiman et al., 1984) cla...
Classifiers can be either linear (e.g., the Naive Bayes classifier) or non-linear (e.g., decision trees). In ...
A complexity based pruning procedure for classification trees is described, and bounds on its finite...
A regularized boosting method is introduced, for which regularization is obtained through a penaliza...
This paper proposes a novel method to adapt the block-sparsity structure to th...
<p>We propose a decision tree classification algorithm based on post-pruning with Bayes minimum risk.</p>