Variable selection and model choice are of major concern in many statistical applications, especially in high-dimensional regression models. Boosting is a convenient statistical method that combines model fitting with intrinsic model selection. We investigate the impact of base-learner specification on the performance of boosting as a model selection procedure. We show that variable selection may be biased if the covariates are of different nature. Important examples are models combining continuous and categorical covariates, especially if the number of categories is large. In this case, least squares base-learners offer increased flexibility for the categorical covariate and lead to a preference even if the categorical covariate is non-inf...
As dimensions of datasets in predictive modelling continue to grow, feature selection becomes increa...
The performances of penalized least squares approaches profoundly depend on the selection of the tun...
Model selection strategies for machine learning algorithms typically involve the numerical optimisa...
This publication is freely accessible with permission of the rights owner (Sage). We present a new pr...
Model choice and variable selection are issues of major concern in practical regression analyses. We...
We present a statistical perspective on boosting. Special emphasis is given to estimating potentiall...
University of Minnesota Ph.D. dissertation. July 2017. Major: Biostatistics. Advisors: Julian Wolfso...
Boosting algorithms were originally developed for machine learning but were later adapted to estimat...
We present a new variable selection method based on model-based gradient boosting and randomly permu...
Background: Statistical boosting is a computational approach to select and estimate interpre...
The use of generalized additive models in statistical data analysis suffers from the restriction to ...
Selecting a learning algorithm to implement for a particular application on the basis of performance...
Summary: The R add-on package mboost implements functional gradient descent algorithms (boosting) fo...