Composite quantile regression (CQR) was introduced by Zou and Yuan [Ann. Statist. 36 (2008) 1108--1126] as a robust regression method for linear models with heavy-tailed errors that nevertheless achieves high efficiency. Its penalized counterpart for high-dimensional sparse models was recently studied by Gu and Zou [IEEE Trans. Inf. Theory 66 (2020) 7132--7154], together with a specialized optimization algorithm based on the alternating direction method of multipliers (ADMM). Compared with the various first-order algorithms available for penalized least squares, ADMM-based algorithms are not well suited to large-scale problems. To overcome this computational hardness, in this paper we apply a convolution smoothing technique to CQR, complemented with iteratively re...
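For concreteness, here is a minimal sketch of the objective described above; the notation is assumed for illustration rather than taken from the truncated abstract. With quantile levels $\tau_1 < \cdots < \tau_K$, the CQR estimator minimizes the pooled check loss over a common slope vector and level-specific intercepts,
$$
\widehat{\boldsymbol{\beta}} \in \operatorname*{arg\,min}_{b_1,\dots,b_K,\,\boldsymbol{\beta}} \ \sum_{k=1}^{K} \sum_{i=1}^{n} \rho_{\tau_k}\!\bigl(y_i - b_k - \mathbf{x}_i^{\top}\boldsymbol{\beta}\bigr), \qquad \rho_{\tau}(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr).
$$
Convolution smoothing replaces the non-differentiable check function $\rho_{\tau}$ by its convolution with a kernel of bandwidth $h$,
$$
\widehat{\rho}_{\tau,h}(u) = (\rho_{\tau} * K_h)(u) = \int_{-\infty}^{\infty} \rho_{\tau}(v)\, K_h(v - u)\, dv, \qquad K_h(u) = \tfrac{1}{h}\, K\!\bigl(\tfrac{u}{h}\bigr),
$$
where $K$ is a symmetric kernel density and $h > 0$ a bandwidth, yielding a smooth surrogate loss amenable to gradient-based methods; in the high-dimensional setting a sparsity-inducing penalty on $\boldsymbol{\beta}$ (for instance, a weighted $\ell_1$ norm) is added to the smoothed loss.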
We introduce the notion of self-concordant smoothing for minimizing the sum of two convex functions:...
Nesterov's accelerated gradient (AG) is a popular technique to optimize objective functions comprisi...
As a prevalent distributed learning paradigm, Federated Learning (FL) trains a global model on a mas...
University of Minnesota Ph.D. dissertation. June 2017. Major: Statistics. Advisor: Hui Zou. 1 comput...
Data subject to heavy-tailed errors are commonly encountered in various scientific fields. To addres...
High-dimensional data can often display heterogeneity due to heteroscedastic variance or inhomogeneo...
Robust methods, though ubiquitous in practice, are yet to be fully understood in the context of regu...
We consider median regression and, more generally, a possibly infinite collection of quantile regres...
Data subject to heavy-tailed errors are commonly encountered in various scientific fields, especial...
High-dimensional data have commonly emerged in diverse fields, such as economics, finance, genetics,...
Ultra-high dimensional data often display heterogeneity due to either heteroscedastic variance or ot...
The composite quantile estimator is a robust and efficient alternative to the least-squares estimato...
We consider both $\ell _{0}$-penalized and $\ell _{0}$-constrained quantile regression estimators. F...
The era of machine learning features large datasets that have a high dimension of features. This leads...