We congratulate Professors Fan and Lv on a thought-provoking paper, which provides deep insight into variable selection in the ultrahigh-dimensional setting. We offer our comments as follows. The important work of Breiman (1996) and Tibshirani (1996) demonstrated clearly that shrinkage estimation is a promising approach to variable selection. The first asymptotic results for the lasso appeared in Knight and Fu (2000). However, the important question of whether those shrinkage methods are consistent in model selection (Shao, 1997) remained open. In a seminal paper, Fan and Li (2001) developed the SCAD penalty and, more importantly, introduced a general theoretical framework for understanding the asymptotic behavior...