The greedy search approach to variable selection in regression trees with constant fits is considered. At each node, the method typically compares the maximally selected statistic associated with each variable and splits on the variable with the largest value. This method is shown to suffer from selection bias when the predictor variables have different numbers of missing values; the bias can be corrected by comparing the corresponding P-values instead. Methods related to change-point problems are used to compute the P-values, and their performance is studied.
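To make the contrast concrete, the following is a minimal Python sketch, not the procedure from the paper: each candidate variable at a node is scored either by its maximally selected two-sample statistic or by a P-value for that maximum. A permutation P-value is used purely as a stand-in for the change-point-based approximations mentioned above, and all function names, the permutation scheme, and the handling of missing values are illustrative assumptions.

```python
# Hedged sketch: split-variable selection at a regression-tree node, comparing
# raw maximally selected statistics with P-values. The permutation P-value is an
# illustrative stand-in for the change-point approximations in the abstract.
import numpy as np

def max_selected_stat(x, y):
    """Largest two-sample t-type statistic over all candidate split points of x,
    computed on the cases where x is observed."""
    obs = ~np.isnan(x)
    x, y = x[obs], y[obs]
    y = y[np.argsort(x)]
    n = len(y)
    best = 0.0
    for k in range(1, n):                      # split after the k-th smallest x
        left, right = y[:k], y[k:]
        resid = np.concatenate([left - left.mean(), right - right.mean()])
        s2 = resid @ resid / (n - 2) if n > 2 else 0.0   # pooled variance
        if s2 <= 0:
            continue
        t = abs(left.mean() - right.mean()) / np.sqrt(s2 * (1.0 / k + 1.0 / (n - k)))
        best = max(best, t)
    return best

def max_stat_pvalue(x, y, n_perm=999, seed=0):
    """Permutation P-value of the maximally selected statistic. Permuting y while
    holding the missingness pattern of x fixed gives each variable a reference
    distribution that reflects its own number of observed values."""
    rng = np.random.default_rng(seed)
    observed = max_selected_stat(x, y)
    exceed = sum(max_selected_stat(x, rng.permutation(y)) >= observed
                 for _ in range(n_perm))
    return (1 + exceed) / (1 + n_perm)

def select_split_variable(X, y, by="pvalue"):
    """Choose the column of X to split on: by the raw maximal statistic
    (biased when columns differ in missingness) or by its P-value."""
    if by == "pvalue":
        return int(np.argmin([max_stat_pvalue(X[:, j], y) for j in range(X.shape[1])]))
    return int(np.argmax([max_selected_stat(X[:, j], y) for j in range(X.shape[1])]))
```

The intuition behind the correction, under these assumptions: the raw maximum tends to grow with the number of observed values it is maximized over, whereas a P-value judges each variable's maximum against a reference distribution of matching sample size, putting variables with different amounts of missing data on a comparable scale.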
Classification trees are a popular statistical tool with multiple applications. Recent advancements ...
We consider the problem of model (or variable) selection in the classical regression model using the...
We review variable selection and variable screening in high-dimensional linear models. Thereby, a ma...
The maximally selected statistic approach in building tree models is shown to be a cause of variable...
With advanced capability in data collection, applications of linear regression analysis now often in...
Within the design of a machine learning-based solution for classification or regression problems, va...
We present a new Stata program, vselect, that helps users perform variable selection after performin...
The Gini gain is one of the most common variable selection criteria in machine learning. We derive t...
The variable selection problem is one of the important problems in regression analysis. Over the years, ...
In this paper, the exhaustive search principle used in functional trees for classifying ...
This paper deals with variable selection in regression and binary classification framework...
This article proposes a variable selection method termed “subtle uprooting” for linear regression. I...
In this article, we propose a new data mining algorithm, by which one can both capture the n...
A variable selection method for constructing decision trees with rank data is proposed. ...