We study properties of algorithms that minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers converges to zero in probability. Consequently, as the number of samples grows, it becomes unlikely that adding a point (or several points) to the training set will cause a large jump (in L2 distance) to a new hypothesis. We also show that, under some conditions, the expected errors of the almost-minimizers become close at a rate faster than n^{-1/2}.
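A minimal LaTeX sketch of the diameter claim, under assumed notation (the function class F, distribution P, empirical measure P_n, and tolerance sequence \xi_n are illustrative choices, not fixed by the abstract):

  % Set of \xi_n-almost-minimizers of empirical error over F (assumed notation)
  \[ \mathcal{M}_n = \bigl\{ f \in F : P_n f \le \inf_{g \in F} P_n g + \xi_n \bigr\} \]
  % Claim: the L_2(P)-diameter of this set vanishes in probability as n grows
  \[ \operatorname{diam}_{L_2(P)}(\mathcal{M}_n) = \sup_{f,g \in \mathcal{M}_n} \| f - g \|_{L_2(P)} \xrightarrow{\;P\;} 0 \qquad (n \to \infty) \]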
We study sample-based estimates of the expectation of the function produced by the empirical minimiz...
In a wide range of statistical learning problems such as ranking, clustering o...
We investigate to which extent one can recover class probabilities within the empirical risk minimiz...
We study some stability properties of algorithms which minimize (or almost-minimize) empirical error...
We present sharp bounds on the risk of the empirical minimization algorithm under mild assumptions o...
We present an argument based on the multidimensional and the uniform central limit theorems, proving...
In this correspondence, we present a simple argument that proves that under mild geometric assumptio...
We study the interaction between input distributions, learning algorithms and finite sample sizes in...
We investigate the behavior of the empirical minimization algorithm using various methods. We first ...
Abstract The generalization ability of minimizers of the empirical risk in the context of binary cla...
Empirical risk minimization offers well-known learning guarantees when training and test data come f...