We derive the $\ell_\infty$ convergence rate simultaneously for the Lasso and Dantzig estimators in a high-dimensional linear regression model, under a mutual coherence assumption on the Gram matrix of the design and two different assumptions on the noise: Gaussian noise and general noise with finite variance. We then prove that, with a proper choice of the threshold, the thresholded Lasso and Dantzig estimators simultaneously enjoy a sign concentration property, provided that the non-zero components of the target vector are not too small.
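A minimal sketch of the thresholding step this abstract describes, assuming a Gaussian random design and Gaussian noise; the penalty and threshold values below (`alpha`, `tau`) are illustrative choices of the right order, not the paper's constants.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, sigma = 200, 500, 5, 1.0            # samples, dimension, sparsity, noise sd

X = rng.standard_normal((n, p))               # low-coherence random design
beta = np.zeros(p)
beta[:s] = 2.0                                # non-zero components well separated from 0
y = X @ beta + sigma * rng.standard_normal(n)

# Penalty of order sigma * sqrt(log(p) / n); scikit-learn's Lasso minimizes
# (1 / 2n) * ||y - X b||^2 + alpha * ||b||_1.
alpha = sigma * np.sqrt(2 * np.log(p) / n)
b_lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_

# Threshold the Lasso estimate; components below tau are set to zero, so the
# sign pattern of b_thr should match that of beta when the non-zero
# components of beta are not too small.
tau = 2 * alpha                               # hypothetical threshold of the same order
b_thr = np.where(np.abs(b_lasso) > tau, b_lasso, 0.0)
print("signs recovered:", np.array_equal(np.sign(b_thr), np.sign(beta)))
```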
We consider the least-square linear regression problem with regularization by the $\ell^1$-norm, a problem...
The Lasso is a method for high-dimensional regression, which is now commonly used when the number of...
We propose a generalized version of the Dantzig selector. We show that it satisfies sparsity oracle ...
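For background on the (non-generalized) Dantzig selector this line builds on: it solves $\min \|b\|_1$ subject to $\|X^T(y - Xb)\|_\infty \le \lambda$, which becomes a linear program after the split $b = u - v$ with $u, v \ge 0$. A minimal sketch with scipy; the choice of `lam` is left to the user (classically of order $\sigma\sqrt{2\log p}$ for unit-norm columns).

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    n, p = X.shape
    G = X.T @ X                               # Gram matrix
    c = X.T @ y
    # Objective: sum(u) + sum(v), which equals ||b||_1 at the optimum.
    obj = np.ones(2 * p)
    # ||c - G(u - v)||_inf <= lam, written as two one-sided inequalities:
    #   G u - G v <= lam + c   and   -G u + G v <= lam - c
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([lam + c, lam - c])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    uv = res.x
    return uv[:p] - uv[p:]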
During the last few years, a great deal of attention has been focused on the Lasso and the Dantzig selector in ...
In this paper we study post-penalized estimators which apply ordinary, unpenalized linear regression...
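A minimal sketch of such a post-penalized estimator, assuming the common "post-Lasso" form: the Lasso selects the model, then ordinary least squares is refit, unpenalized, on the selected coordinates to remove the shrinkage bias.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def post_lasso(X, y, alpha):
    # Step 1: Lasso as a model selector.
    lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
    support = np.flatnonzero(lasso.coef_)     # coordinates selected by the Lasso
    # Step 2: ordinary, unpenalized least squares on the selected model.
    beta = np.zeros(X.shape[1])
    if support.size:
        ols = LinearRegression(fit_intercept=False).fit(X[:, support], y)
        beta[support] = ols.coef_
    return beta
```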
We consider the estimation of regression coefficients in a high-dimensional linear model. A lower bo...
Basis Pursuit (BP), Basis Pursuit DeNoising (BPDN), and LASSO are popular meth...
We consider the least-square linear regression problem with regularization by the $\ell^1$-norm, a p...
In regression settings where explanatory variables have very low correlations and there are relative...
We study the distribution of hard-, soft-, and adaptive soft-thresholding estimators within a linear...
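For reference, the three thresholding rules named in this abstract, written componentwise at level $t > 0$; the adaptive soft rule below is the usual nonnegative-garrote form, which may differ in detail from the paper's exact definition.

```python
import numpy as np

def hard_threshold(x, t):
    # Keep components above t unchanged, kill the rest.
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    # Shrink every surviving component toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adaptive_soft_threshold(x, t):
    # Shrinkage factor 1 - (t/|x|)^2: large components are nearly unbiased.
    ax = np.abs(x)
    shrink = 1.0 - (t / np.maximum(ax, t)) ** 2   # equals 0 exactly when |x| <= t
    return np.where(ax > t, x * shrink, 0.0)
```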
Transductive methods are useful in prediction problems when the training dataset is composed of a la...
In regression settings where explanatory variables have very low correlations and where there are re...
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regul...
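One well-known pivotal estimator is the square-root Lasso, whose un-squared loss makes the optimal penalty level free of the unknown noise level; a minimal sketch assuming that form (cvxpy is used here only for convenience, and may not be what the paper uses).

```python
import cvxpy as cp
import numpy as np

def sqrt_lasso(X, y, lam):
    n, p = X.shape
    beta = cp.Variable(p)
    # ||y - X b||_2 / sqrt(n) + lam * ||b||_1: the un-squared loss makes the
    # choice lam ~ sqrt(log(p) / n) valid independently of the noise level.
    objective = cp.norm(y - X @ beta, 2) / np.sqrt(n) + lam * cp.norm(beta, 1)
    cp.Problem(cp.Minimize(objective)).solve()
    return beta.value
```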
In this paper we develop inference for high dimensional linear models, with serially correlated erro...
The Lasso is an attractive technique for regularization and variable selection for high-dimensional ...