In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level. The canonical pivotal estimator is the square-root Lasso, formulated along with its derivatives as a "non-smooth + non-smooth" optimization problem. Modern techniques to solve these include smoothing the data-fitting term, to benefit from fast and efficient proximal algorithms. In this work we show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators. Thanks to our theoretical analysis, we provide some guidelines on how to set the smoothing hyperparameter, and illustrate on synthetic data the interest of such guidelines.
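The smoothing idea described above can be illustrated with a minimal sketch: replace the non-smooth data-fitting term $\|y - X\beta\|_2 / \sqrt{n}$ of the square-root Lasso by its Moreau envelope (quadratic below a threshold $\sigma$, linear above), which makes the gradient $1/\sigma$-Lipschitz and lets plain proximal gradient descent (ISTA) apply. This is an assumed illustration of the general technique, not the paper's actual algorithm; the function names and the choice of envelope smoothing are ours.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smoothed_sqrt_lasso(X, y, lam, sigma=0.5, n_iter=2000):
    """Proximal gradient (ISTA) on a smoothed square-root Lasso objective:

        min_beta  env_sigma(y - X beta) / sqrt(n) + lam * ||beta||_1,

    where env_sigma is the Moreau envelope of the Euclidean norm:
    ||r||_2 - sigma/2 when ||r||_2 >= sigma, ||r||_2^2 / (2 sigma) below.
    Illustrative sketch only; sigma trades smoothness against bias.
    """
    n, p = X.shape
    # Lipschitz constant of the smoothed data-fit gradient
    L = np.linalg.norm(X, ord=2) ** 2 / (sigma * np.sqrt(n))
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # gradient of the Moreau envelope of ||.||_2 at r
        grad_env = r / max(np.linalg.norm(r), sigma)
        grad = -X.T @ grad_env / np.sqrt(n)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

Note the pivotal property in this sketch: the data-fit term already scales like a norm of the residual, so a good `lam` need not be rescaled by the (unknown) noise level, unlike the plain Lasso.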
We derive $l_{\infty}$ convergence rate simultaneously for Lasso and Dantzig estimators in a high-di...
Sparsity promoting regularization is an important technique for signal reconstruction and several ot...
We propose a Bayesian framework for learning the optimal regularization parameter in the L1-norm pen...
We propose a pivotal method for estimating high-dimensional sparse linear regression models, where t...
We propose a self-tuning √Lasso method that simultaneously resolves three important practical proble...
In this paper we study post-penalized estimators which apply ordinary, unpenalized linear regression...
We consider the least-square linear regression problem with regularization by the $\ell^1$-norm, a p...
In this paper, we study sparse group Lasso for high-dimensional double sparse linear regression, whe...
We consider the linear regression problem. We propose the S-Lasso procedure to estimate the unknown ...
Sparsity promoting norms are frequently used in high dimensional regression. A limitation of such La...
Non-smooth regularized convex optimization procedures have emerged as a powerful tool to recover str...
In high dimensional settings, sparse structures are crucial for efficiency, bo...