Regularized M-estimators are widely used due to their ability to recover a low-dimensional model in high-dimensional scenarios. Recent efforts on this subject have focused on creating a unified framework for establishing oracle bounds and deriving conditions for support recovery. Under this same framework, we propose a new Generalized Information Criterion (GIC) that takes into consideration the sparsity pattern one wishes to recover. We obtain non-asymptotic model selection bounds and sufficient conditions for model selection consistency of the GIC. Furthermore, we show that the GIC can also be used for selecting the regularization parameter within a regularized $M$-estimation framework, which allows practical use of the GIC for model s...
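As a minimal sketch of the idea in this abstract: an information criterion of the form loss plus a penalty on the recovered sparsity can rank the solutions along a lasso regularization path, turning the choice of the regularization parameter into a model selection problem. The snippet below assumes an orthonormal (sequence-model) design, where the lasso solution is coordinatewise soft-thresholding; the BIC-like weight log(n), the grid of lambdas, and the toy data are illustrative choices, not the paper's actual criterion.

```python
import math
import random

def soft_threshold(z, lam):
    # Closed-form lasso solution for one coordinate under an orthonormal design.
    return math.copysign(max(abs(z) - lam, 0.0), z)

def gic(y, theta_hat, a_n):
    # Illustrative GIC: n*log(RSS/n) + a_n * df, with df = number of nonzeros
    # (a standard degrees-of-freedom proxy for the lasso).
    n = len(y)
    rss = sum((yi - ti) ** 2 for yi, ti in zip(y, theta_hat))
    df = sum(t != 0.0 for t in theta_hat)
    return n * math.log(rss / n + 1e-12) + a_n * df

def select_lambda(y, lam_grid):
    # Pick the regularization parameter minimizing the GIC along the path.
    a_n = math.log(len(y))  # BIC-like penalty weight; an assumption here
    scores = {lam: gic(y, [soft_threshold(yi, lam) for yi in y], a_n)
              for lam in lam_grid}
    return min(scores, key=scores.get)

# Toy data: a sparse mean vector observed in Gaussian noise.
random.seed(0)
theta = [3.0, -2.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
y = [t + random.gauss(0, 0.5) for t in theta]
best = select_lambda(y, [0.1 * k for k in range(1, 30)])
support = [j for j, yj in enumerate(y) if soft_threshold(yj, best) != 0.0]
print("selected lambda:", best, "estimated support:", support)
```

The selected lambda trades fit against the size of the recovered support, which is the sense in which an information criterion can replace cross-validation for tuning the regularization parameter.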
We present a data dependent generalization bound for a large class of regularized algorithms which i...
The analyses of correlated, repeated measures, or multilevel data with a Gaussian response are often...
A structured variable selection problem is considered in which the covariates, divided into predefin...
Regularized M-estimators are widely used in science, due to their ability to fit a simpler, low-dim...
Non-quadratic regularizers, in particular the ℓ1 norm regularizer can yield sparse solutions th...
We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrin...
Nowadays an increasing amount of data is available and we have to deal with models in high dimension...
The classical model selection criteria, such as the Bayesian Information Criterion (BIC) or Akaike i...
Model selection is an indispensable part of data analysis dealing very frequently with fitting and p...
We consider Bayesian model selection in generalized linear models that are high-dimensiona...
We present a Group Lasso procedure for generalized linear models (GLMs) and we study the properties ...
The optimization of an information criterion in a variable selection procedure leads to an ...