We show two novel concentration inequalities for suprema of empirical processes when sampling without replacement, both of which take the variance of the functions into account. While these inequalities may have broad applications in learning theory in general, we exemplify their significance by studying the transductive setting of learning theory, for which we provide the first excess risk bounds based on the localized complexity of the hypothesis class; these bounds can yield fast rates of convergence also in the transductive learning setting. We give a preliminary analysis of the localized complexities for the prominent case of kernel classes.
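As context for the first contribution, a minimal sketch of the standard sampling-without-replacement setup follows; the notation (\(\mathcal{X}\), \(\mathcal{F}\), \(Q\), \(\sigma^2\)) is assumed here for illustration and is not taken verbatim from the paper.

% Standard setup (assumed notation): a finite population
% \mathcal{X} = \{x_1, \dots, x_N\} and a sample X_1, \dots, X_n drawn
% uniformly at random *without replacement* from \mathcal{X}.
% For a class \mathcal{F} of functions f : \mathcal{X} \to [-1, 1],
% the quantity of interest is the supremum of the empirical process
\[
  Q \;=\; \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \bigl( f(X_i) - \bar{f} \bigr),
  \qquad
  \bar{f} \;=\; \frac{1}{N} \sum_{i=1}^{N} f(x_i).
\]
% A variance-sensitive (Talagrand/Bousquet-type) inequality controls the
% deviation Q - \mathbb{E} Q in terms of \mathbb{E} Q and the weak variance
\[
  \sigma^2 \;=\; \sup_{f \in \mathcal{F}} \frac{1}{N} \sum_{i=1}^{N}
  \bigl( f(x_i) - \bar{f} \bigr)^2,
\]
% rather than only the worst-case range of the functions. This regime is
% exactly the one arising in transductive learning, where the labeled
% training points form a subset drawn without replacement from the full
% set of labeled and unlabeled points.

This sketch only records the standard quantities involved; the paper's actual inequalities and constants are stated in the main text.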