This paper proposes a new robust regression interpretation of sparse penalties such as the elastic net and the group-lasso. Beyond providing a new viewpoint on these penalization schemes, our approach results in a unified optimization strategy. Our evaluation experiments demonstrate that this strategy, implemented on the elastic net, is computationally extremely efficient for small- to medium-size problems. Our accompanying software solves problems at machine precision in the time required to get a rough estimate with competing state-of-the-art algorithms.
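The elastic-net problem the abstract refers to combines an l1 and a squared l2 penalty on the regression weights. As a minimal sketch of that problem, the snippet below solves it with a generic proximal-gradient (ISTA) iteration; this is not the paper's own algorithm, and all function names are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_ista(X, y, lam1, lam2, n_iter=500):
    """Proximal-gradient sketch for the elastic-net problem
    min_w (1/2n)||y - Xw||^2 + lam1*||w||_1 + (lam2/2)*||w||^2."""
    n, p = X.shape
    # Lipschitz constant of the smooth part (squared loss + ridge term).
    L = np.linalg.norm(X, 2) ** 2 / n + lam2
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + lam2 * w   # gradient of the smooth part
        w = soft_threshold(w - grad / L, lam1 / L)  # prox step on the l1 term
    return w
```

The l1 term drives small coefficients exactly to zero (variable selection), while the l2 term stabilizes the solution when covariates are correlated; this is the trade-off that dedicated elastic-net solvers, such as the one described in the abstract, are designed to exploit efficiently.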
The least absolute shrinkage and selection operator (lasso) and ridge regression usually produce dif...
In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neu...
Sparse machine learning has recently emerged as a powerful tool to obtain models of high-dimensional d...
We consider a linear regression problem in a high dimensional setting where the number of covariates...
The main intention of the thesis is to present several types of penalization techniques and to apply...
We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with ...
Regularization technique has become a principled tool for statistics and machine learning research a...
This paper presents the FOS algorithm as first outlined by Lim and Lederer in 2016, and describes it...
University of Minnesota Ph.D. dissertation. September 2015. Major: Statistics. Advisor: Hui Zou. 1 co...
In recent years, methods for sparse approximation have gained considerable attention and have been s...
Low-rank approximation. Abstract: Advances of modern science and engineering lead to unpreceden...
Recent work has focused on the problem of conducting linear regression when the number of covariates...
Huber's criterion is a useful method for robust regression. The adaptive least absolute shrinkag...