This paper investigates robust versions of the general empirical risk minimization algorithm, one of the core techniques underlying modern statistical methods. The success of empirical risk minimization rests on the fact that, for a "well-behaved" stochastic process $\left\{ f(X), \ f\in \mathcal F\right\}$ indexed by a class of functions $\mathcal F$, the averages $\frac{1}{N}\sum_{j=1}^N f(X_j)$ evaluated over a sample $X_1,\ldots,X_N$ of i.i.d. copies of $X$ provide a good approximation to the expectations $\mathbb E f(X)$ uniformly over large classes $\mathcal F$. However, this may no longer be true if the marginal distributions of the process are heavy-tailed or if the sample contains outliers. We propose a version of empirical...
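As a rough illustration of the failure mode described above, and not of the estimator proposed in the paper, the following Python sketch compares the plain empirical average $\frac{1}{N}\sum_{j=1}^N f(X_j)$ with a standard median-of-means estimate on a heavy-tailed sample; the block count and the Pareto distribution are illustrative assumptions.

```python
# Hedged sketch (not the paper's algorithm): the plain empirical mean can be
# badly skewed by heavy-tailed samples, while a median-of-means estimate
# remains stable. Block count and distribution are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def empirical_mean(x):
    """Plain average (1/N) * sum_j x_j."""
    return x.mean()

def median_of_means(x, n_blocks=10):
    """Split the sample into blocks, average each block, take the median.

    A standard robust alternative to the empirical mean, shown only to
    illustrate why robust versions of ERM are of interest.
    """
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])

# Heavy-tailed sample: Pareto draws with tail index 1.5 have a finite mean
# but infinite variance, so the empirical mean fluctuates wildly.
sample = rng.pareto(1.5, size=2000)
print("empirical mean :", empirical_mean(sample))
print("median of means:", median_of_means(sample))
```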
We consider learning methods based on the regularization of a convex empirical risk by a squared Hil...
We consider the problem of stochastic convex optimization with exp-concave losses using Empirical Ri...
The effect of errors in variables in empirical minimization is investigated. Given a loss $l$ and a ...
The purpose of this paper is to discuss empirical risk minimization when the losses are not necessar...
In a wide range of statistical learning problems such as ranking, clustering o...
We study the performance of empirical risk minimization on the $p$-norm linear regression problem fo...
We consider the estimation of a bounded regression function with nonparametric heteroscedastic n...
We present an argument based on the multidimensional and the uniform central limit theorems, proving...
We present new excess risk bounds for general unbounded loss functions including log loss and square...
We study some stability properties of algorithms which minimize (or almost-minimize) empirical error...
We develop minimax optimal risk bounds for the general learning task consisting in predicting as wel...
In statistics and learning theory, it is common to assume that samples are independently and identic...
We consider the random design regression with square loss. We propose a method that aggregates empir...
In statistics and learning theory, it is common to assume that samples are independently and identic...
Many datasets are collected automatically, and are thus easily contaminated by outliers. In order to...