We present upper and lower bounds for the prediction error of the Lasso. For the case of random Gaussian design, we show that under mild conditions the prediction error of the Lasso is, up to smaller order terms, dominated by the prediction error of its noiseless counterpart. We then provide exact expressions for the prediction error of the latter, in terms of compatibility constants. Here, we assume that the active components of the underlying regression function satisfy a "betamin" condition. For the case of fixed design, we provide upper and lower bounds, again in terms of compatibility constants. As an example, we give a bound for the least squares estimator with total variation penalty that is tight up to a logarithmic term.
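For orientation, the quantities named in this abstract can be written in standard Lasso notation; the normalization and tuning parameter $\lambda$ below are conventions assumed for illustration and need not match the paper's own. With data $Y = X\beta^0 + \epsilon$, design $X \in \mathbb{R}^{n \times p}$, and noiseless signal $f^0 := X\beta^0$,
$$
\hat\beta := \arg\min_{\beta \in \mathbb{R}^p}\Big\{\tfrac1n\|Y - X\beta\|_2^2 + 2\lambda\|\beta\|_1\Big\},
\qquad
\hat\beta_{\mathrm{noiseless}} := \arg\min_{\beta \in \mathbb{R}^p}\Big\{\tfrac1n\|f^0 - X\beta\|_2^2 + 2\lambda\|\beta\|_1\Big\},
$$
and the prediction error under study is $\|X(\hat\beta - \beta^0)\|_2^2/n$ (with $\hat\beta_{\mathrm{noiseless}}$ in place of $\hat\beta$ for the noiseless counterpart).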
Performing statistical inference in high-dimensional models is an outstanding challenge. A major so...
In recent years, extensive research has focused on the $\ell_1$ penalized least squares (Lasso) esti...
In this paper, we investigate the degrees of freedom (df) of penalized $\ell_1$ minimization (also known a...
The lasso procedure is an estimator-shrinkage and variable selection method. This paper shows that t...
The Lasso is a popular regression method for high-dimensional problems in which t...
In regression settings where explanatory variables have very low correlations and where there are re...
Sparse regression is an efficient statistical modelling technique which is of major relevance for hi...
The Lasso achieves variance reduction and variable selection by solving an $\ell_1$-regularized least sq...
In this paper we investigate error bounds for convex loss functions for the Lasso in linear models, ...
A well-known drawback of $\ell_1$-penalized estimators is the systematic shrinkage of the la...
We study how correlations in the design matrix influence Lasso prediction. First, we argue that the ...
The Lasso is an attractive regularisation method for high-dimensional regression. It combines variab...
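Several of the excerpts above describe the Lasso as an $\ell_1$-regularized least squares problem that combines variance reduction with variable selection. The sketch below is a minimal illustration of that selection behaviour on synthetic data, using the generic scikit-learn Lasso estimator; the data-generating setup and parameter values are illustrative choices and are not taken from any of the cited papers.

# Illustrative sketch only: generic Lasso on synthetic data, not the setup of any cited paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                     # high-dimensional setting: p > n, s active coefficients
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:s] = 3.0                           # "betamin"-style signal: active coefficients well above the noise level
y = X @ beta0 + rng.standard_normal(n)

for alpha in (0.01, 0.1, 0.5):            # regularization strength (the tuning parameter lambda in the abstracts)
    fit = Lasso(alpha=alpha).fit(X, y)
    # in-sample prediction error ||X(beta_hat - beta0)||^2 / n
    pred_err = np.mean((X @ (fit.coef_ - beta0)) ** 2)
    print(f"alpha={alpha}: {np.count_nonzero(fit.coef_)} nonzero coefficients, prediction error {pred_err:.3f}")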