In this paper we consider a regularization approach to variable selection when the regression function depends nonlinearly on only a few of the input variables. The proposed method is based on a regularized least-squares estimator that penalizes large values of the partial derivatives. An efficient iterative procedure for solving the underlying variational problem is proposed, and its convergence is proved. The empirical properties of the resulting estimator are tested for both prediction and variable selection. The algorithm compares favorably with more standard ridge regression and L1 regularization schemes.
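The abstract does not specify the estimator or the iterative scheme, so the following is only a hedged sketch of the general idea: fit a least-squares model while penalizing the empirical norms of its partial derivatives, so that derivatives with respect to irrelevant variables are driven to zero. Everything here is an assumption for illustration — the quadratic-in-each-variable model, the smoothed penalty, the plain gradient-descent solver, and the name `fit_derivative_penalized` are not the authors' method.

```python
import numpy as np

def fit_derivative_penalized(X, y, lam=0.1, lr=0.01, n_iter=8000, eps=1e-8):
    """Least squares with a penalty on empirical partial-derivative norms.

    Illustrative model (an assumption, not the paper's estimator):
        f(x) = sum_j (w_j * x_j + v_j * x_j**2),  so  df/dx_j = w_j + 2*v_j*x_j.
    Objective:
        (1/2n) * ||f(X) - y||^2
          + lam * sum_j sqrt( mean_i (df/dx_j(x_i))**2 + eps )
    The group-norm penalty shrinks the derivative with respect to
    irrelevant variables toward zero, removing them from the fit;
    eps smooths the norm so plain gradient descent applies.
    """
    n, d = X.shape
    X2 = X ** 2
    w = np.zeros(d)
    v = np.zeros(d)
    for _ in range(n_iter):
        resid = X @ w + X2 @ v - y
        # gradient of the data-fit term (1/2n)*||f(X) - y||^2
        gw = X.T @ resid / n
        gv = X2.T @ resid / n
        # partial derivatives df/dx_j evaluated at every sample, shape (n, d)
        D = w[None, :] + 2.0 * X * v[None, :]
        norms = np.sqrt((D ** 2).mean(axis=0) + eps)
        # gradient of the smoothed derivative penalty
        gw += lam * D.mean(axis=0) / norms
        gv += lam * 2.0 * (D * X).mean(axis=0) / norms
        w -= lr * gw
        v -= lr * gv
    # empirical derivative norm per variable: a variable-importance score
    D = w[None, :] + 2.0 * X * v[None, :]
    scores = np.sqrt((D ** 2).mean(axis=0))
    return w, v, scores
```

On data generated as `y = 2*x0 + x1**2`, the returned `scores` are large for the two active variables and near zero for the rest, so thresholding them performs the variable selection.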
The lasso algorithm for variable selection in linear models, introduced by Tibshirani, works by impo...
Abstract: High dimensional data are nowadays encountered in various branches of science. Variable se...
... In this article, penalized likelihood approaches are proposed to handle these kinds of problems....
We investigate structured sparsity methods for variable selection in regression problems where the t...
Applying nonparametric variable selection criteria in nonlinear regression models generally requires...
In this work we are interested in the problems of supervised learning and variable selection when th...
The Global COE Program Mathematics-for-Industry Education & Research Hub...
The aim of variable selection is the identification of the most important predictors that define the...
We investigate the problem of model selection for learning algorithms depending on a continuous para...
This diploma thesis focuses on regularization and variable selection in regression models. Basics ...
Additive varying coefficient models are a natural extension of multiple linear...