Nonquadratic regularizers, in particular the ℓ1 norm regularizer, can yield sparse solutions that generalize well. In this work we propose the generalized subspace information criterion (GSIC), which allows us to predict the generalization error for this useful family of regularizers. We show that, under some technical assumptions, GSIC is an asymptotically unbiased estimator of the generalization error. GSIC is demonstrated to perform well in experiments with the ℓ1 norm regularizer, where we compare it with the network information criterion (NIC) and cross-validation in relatively large-sample cases. However, in the small-sample case, GSIC tends to fail to identify the optimal model due to its large variance. Therefore, also a bias...
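The abstract compares GSIC against cross-validation for choosing among ℓ1-regularized models. GSIC itself is not specified here, so as a point of reference the sketch below shows only the cross-validation baseline: selecting the ℓ1 penalty of a least-squares regressor by K-fold cross-validation on a synthetic sparse-regression problem. All data sizes, penalty grids, and parameter names are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch only: this is the cross-validation baseline mentioned
# in the abstract, NOT the paper's GSIC. All problem parameters are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, d = 100, 30
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5)               # sparse ground truth: 5 active coefficients
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)     # noisy linear observations

alphas = np.logspace(-3, 0, 20)               # candidate l1 penalty strengths
cv_err = []
for alpha in alphas:
    fold_err = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = Lasso(alpha=alpha).fit(X[train], y[train])
        fold_err.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
    cv_err.append(np.mean(fold_err))          # estimated generalization error per alpha

best = alphas[int(np.argmin(cv_err))]
print(f"CV-selected l1 penalty: {best:.4f}")
```

A criterion such as GSIC would replace the inner K-fold loop with a closed-form estimate of the generalization error computed from a single fit, which is what makes it attractive when refitting K times is costly.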