We begin with a few historical remarks about what might be called the regularization class of statistical model-building methods, which includes penalized likelihood, support vector machines, robust and quantile nonparametric regression, and so on, and about the problem of tuning them, spending perhaps a little too much time on methods related to Generalized Cross Validation (GCV). We then discuss an approach to variable and pattern selection for very large attribute vectors, based on the LASSO (that is, l1 penalties). This approach differs from most others in that it is a mostly global, rather than sequential or greedy, algorithm for finding the patterns in the data that most influence an outcome.
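As a concrete, much-simplified illustration of the idea in the abstract, the sketch below fits an l1-penalized (LASSO) regression and tunes the penalty parameter by ordinary cross validation. The synthetic data, scikit-learn's `LassoCV`, and the use of cross validation in place of GCV are all illustrative assumptions, not the paper's own procedure.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic data: 100 cases, 20 attributes, only the first 3 influential.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

# LassoCV fits the l1-penalized model over a grid of penalty values and
# selects the penalty by cross validation; with an l1 penalty, many
# coefficients are driven exactly to zero, performing variable selection
# globally rather than by a stepwise/greedy search.
model = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("chosen penalty:", model.alpha_)
print("selected attributes:", selected)
```

With a strong signal like this, the three truly influential attributes should survive the shrinkage while most of the noise attributes are zeroed out.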
This paper tackles the problem of model complexity in the context of additive ...
This paper investigates two types of results that support the use of Generalized Cross Validation (G...
We describe a simple, efficient, permutation-based procedure for selecting the penalty parameter in ...
Regression with L1-regularization, Lasso, is a popular algorithm for recovering the sparsity pattern...
This paper is a selective review of the regularization methods scattered in statistics literature. W...
The lasso algorithm for variable selection in linear models, introduced by Tibshirani, works by im...
This diploma thesis focuses on regularization and variable selection in regression models. Basics ...
Regularization methods allow one to handle a variety of inferential problems where there are more co...
Regression models are a form of supervised learning methods that are important for machine learning,...
Sparsity or parsimony of statistical models is crucial for their proper interpretations, as in scie...
Several methods for variable selection have been proposed in model-based clust...
The main goal of this Thesis is to describe numerous statistical techniques that deal with high-dime...
We present a new family of model selection algorithms based on the resampling heuristics. It can be ...
MI: Global COE Program Education-and-Research Hub for Mathematics-for-Industry ...