This article considers penalized empirical loss minimization of convex loss functions with unknown target functions. Using the elastic net penalty, of which the Least Absolute Shrinkage and Selection Operator (Lasso) is a special case, we establish a finite sample oracle inequality which bounds the loss of our estimator from above with high probability. If the unknown target is linear, this inequality also provides an upper bound of the estimation error of the estimated parameter vector. Next, we use the non-asymptotic results to show that the excess loss of our estimator is asymptotically of the same order as that of the oracle. If the target is linear, we give sufficient conditions for consistency of the estimated parameter vector. We bri...
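The elastic net penalty described above combines an ℓ1 term (the Lasso part) with a squared ℓ2 term. A minimal sketch of penalized empirical loss minimization under this penalty, using plain proximal-gradient (ISTA) descent on the least-squares loss — the function name, parameterization, and solver choice here are illustrative assumptions, not taken from any of the papers listed:

```python
import numpy as np

def elastic_net(X, y, lam=0.1, alpha=0.5, lr=None, n_iter=500):
    """Proximal-gradient (ISTA) sketch of the elastic-net estimator:
    minimize (1/2n)||y - X b||^2 + lam*(alpha*||b||_1 + 0.5*(1-alpha)*||b||_2^2).
    alpha=1 recovers the Lasso; alpha=0 recovers ridge regression."""
    n, p = X.shape
    if lr is None:
        # step size 1/L, where L bounds the Lipschitz constant of the smooth part
        L = np.linalg.norm(X, 2) ** 2 / n + lam * (1 - alpha)
        lr = 1.0 / L
    b = np.zeros(p)
    for _ in range(n_iter):
        # gradient step on the smooth part (squared loss + ridge term)
        grad = X.T @ (X @ b - y) / n + lam * (1 - alpha) * b
        z = b - lr * grad
        # soft-thresholding: proximal operator of the l1 part
        b = np.sign(z) * np.maximum(np.abs(z) - lr * lam * alpha, 0.0)
    return b
```

With `alpha=1` and `lam` above the critical value `max|X.T y| / n`, the estimate is exactly zero, which is the usual starting point of the Lasso regularization path.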
This paper studies oracle properties of ℓ1-penalized least squares in nonparametric regress...
We establish a general oracle inequality for clipped approximate minimizers of regularized empirical...
We build penalized least-squares estimators using the slope heuristic and resa...
In this paper we investigate error bounds for convex loss functions for the Lasso in linear models, ...
This paper considers the penalized least squares estimators with convex penalties or regularisation ...
We show that empirical risk minimization procedures and regularized empirical ...
In this paper we investigate the impact of choosing different loss functions from the viewpoint ...
In this letter, we investigate the impact of choosing different loss functions from the viewpoint of...
In this paper, we consider a high-dimensional statistical estimation problem i...
We study the distributions of the LASSO, SCAD, and thresholding estimators, in finite sample...
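The LASSO, SCAD, and hard/soft thresholding estimators mentioned above all reduce, in the orthonormal-design case, to coordinatewise thresholding rules. A short illustrative sketch of the soft-thresholding (Lasso) rule and the standard SCAD thresholding rule of Fan and Li (2001), with the conventional choice a = 3.7 — the function names are assumptions for this example:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding: the Lasso estimator under orthonormal design."""
    z = np.asarray(z, dtype=float)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding rule (Fan & Li, 2001) applied coordinatewise:
    soft-thresholding near zero, a linear transition, and no shrinkage
    for large |z|, which removes the Lasso's bias on large coefficients."""
    z = np.asarray(z, dtype=float)
    return np.where(
        np.abs(z) <= 2 * lam,
        soft_threshold(z, lam),                              # Lasso region
        np.where(
            np.abs(z) <= a * lam,
            ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),  # transition region
            z,                                               # unbiased region
        ),
    )
```

The difference in the large-|z| region is exactly what drives the distinct finite-sample distributions of these estimators: soft thresholding shrinks every coordinate by `lam`, while SCAD leaves large coordinates untouched.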
We propose a general family of algorithms for regression estimation with quadr...
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dim...
The dissertation can be broadly classified into four projects. They are presented in four different ...