Abstract The optimization of an information criterion in a variable selection procedure leads to an additional bias, which can be substantial in sparse, high-dimensional data. The bias can be compensated by applying shrinkage while estimating within the selected models. This paper presents modified information criteria for use in variable selection and estimation without shrinkage. The analysis motivating the modified criteria follows two routes. The first, explored for signal-plus-noise observations only, proceeds by comparing estimators with and without shrinkage. The second, discussed for general regression models, describes the optimization or selection bias as a double-sided effect, named a mirror effect in the paper: among the numerou...
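The selection bias this abstract describes, and its compensation by shrinkage, can be illustrated with a minimal signal-plus-noise simulation (an illustrative sketch, not the paper's construction; the cutoff `t` and the soft-threshold shrinkage rule are assumptions chosen for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
mu = np.zeros(n)                    # sparse truth: every signal is zero here
y = mu + rng.standard_normal(n)     # signal-plus-noise observations

t = 2.0                             # selection threshold (illustrative cutoff)
sel = np.abs(y) > t                 # coordinates picked by the selection rule

plain = y[sel]                                    # estimate without shrinkage
shrunk = np.sign(y[sel]) * (np.abs(y[sel]) - t)   # soft-threshold shrinkage

# Selection inflates the plain estimates: only large noise survives the cut,
# so shrinking the survivors back toward zero reduces post-selection error.
mse_plain = np.mean((plain - mu[sel]) ** 2)
mse_shrunk = np.mean((shrunk - mu[sel]) ** 2)
```

Here the shrunken estimates have markedly lower mean squared error on the selected coordinates, which is the effect shrinkage is compensating for.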
Model selection is difficult to analyse yet theoretically and empirically important, especially for ...
The adaptive Lasso is a commonly applied penalty for variable selection in regression modeli...
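The adaptive Lasso penalty mentioned above reweights the Lasso penalty by an initial estimate, so large coefficients are penalized less. A minimal sketch (assuming a ridge initial estimate and a proximal-gradient solver; the function name and tuning constants are hypothetical, not from any of the cited papers):

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam=0.1, gamma=1.0, n_iter=500):
    n, p = X.shape
    # Stage 1: initial estimate (ridge here; OLS is also common).
    beta_init = np.linalg.solve(X.T @ X + 1e-3 * np.eye(p), X.T @ y)
    # Adaptive weights: small initial coefficients get large penalties.
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-12)
    # Stage 2: weighted Lasso solved by proximal gradient (ISTA).
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam * w / L)
    return beta

# Usage on a sparse toy problem: two active variables out of ten.
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[3] = 3.0, 2.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = adaptive_lasso(X, y)
```

The heavy weights on the inactive coordinates drive them exactly to zero, while the active coefficients are only lightly shrunk.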
Regularized M-estimators are widely used due to their ability to recover a low-dimensional model ...
Contemporary statistical research frequently deals with problems involving a diverging number of par...
In sparse high-dimensional data, the selection of a model can lead to an overestimation of the numbe...
In high-dimensional data settings where p ≫ n, many penalized regularization approaches were studied...
The analyses of correlated, repeated measures, or multilevel data with a Gaussian response are often...
Penalization methods have been shown to yield both consistent variable selection and oracle paramete...
Abstract An exhaustive search as required for traditional variable selection methods is impractical i...
We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrin...
We congratulate Professors Fan and Lv for a thought-provoking paper, which provides us deep understa...