We consider statistical procedures for feature selection defined by a family of regularization problems with convex piecewise linear loss functions and penalties of l1 or l∞ nature. For example, quantile regression and support vector machines with l1 norm penalty fall into the category. Computationally, the regularization problems are linear programming (LP) problems indexed by a single parameter, which are known as ‘parametric cost LP’ or ‘parametric right-hand-side LP’ in the optimization theory. Their solution paths can be generated with certain simplex algorithms. This work exploits the connection between the family of regularization methods and the parametric LP theory and lays out a general simplex algorithm and its variant for ge...
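As a rough illustration of the LP structure described above, the sketch below casts l1-penalized quantile regression as a linear program and solves it at a few fixed penalty values with scipy.optimize.linprog. This is a minimal sketch under the standard check-loss formulation, not the parametric-simplex path algorithm of the paper; the function and variable names (qr_l1_lp, tau, lam) are ours.

```python
# Sketch: l1-penalized quantile regression as an LP, solved at a grid of
# penalty values rather than by tracing the exact parametric-simplex path.
import numpy as np
from scipy.optimize import linprog

def qr_l1_lp(X, y, tau, lam):
    """Solve min_beta sum_i rho_tau(y_i - x_i' beta) + lam * ||beta||_1 as an LP."""
    n, p = X.shape
    # Stack nonnegative variables as [beta+, beta-, u+, u-], with the
    # residual split  y_i - x_i' beta = u_i+ - u_i-.
    c = np.concatenate([lam * np.ones(p), lam * np.ones(p),
                        tau * np.ones(n), (1.0 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:2 * p]   # beta = beta+ - beta-

# Toy usage: coefficients shrink toward zero as the penalty grows.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, -1.0, 0.0, 0.0, 0.5]) + rng.normal(size=100)
for lam in (0.1, 1.0, 10.0):
    print(lam, np.round(qr_l1_lp(X, y, tau=0.5, lam=lam), 3))
```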
We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ+z, where X ...
Approximate dynamic programming has been used successfully in a large variety of domains, but it rel...
For a variety of regularized optimization problems in machine learning, algorithms computing the ...
The lasso algorithm for variable selection in linear models, introduced by Tibshirani, works by impo...
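The piecewise-linear coefficient path these entries refer to can be computed directly; below is a small illustration using scikit-learn's lars_path, an assumed dependency and not the specific homotopy implementations described in the entries above.

```python
# Illustration: the lasso coefficient path is piecewise linear in the
# penalty; lars_path returns its breakpoints and the coefficients there.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 0.0, 1.0, 0.0, 0.0]) + rng.normal(size=60)

# alphas are the breakpoints of the path; coefs[:, k] holds the lasso
# coefficients at alphas[k]; active lists the variables that enter.
alphas, active, coefs = lars_path(X, y, method="lasso")
print("path breakpoints:", np.round(alphas, 3))
print("order in which variables become active:", active)
```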
One of the fundamental problems in statistical machine learning is the optimization problem under th...
We introduce a path following algorithm for L1-regularized generalized linear models. The L1-r...
Recently, path-following algorithms for parametric optimization problems with piecewise linear soluti...
This thesis consists of three parts. In Chapter 1, we examine existing variable selection methods an...
Our objective is to develop formulations and algorithms for efficiently computing the feature sele...
The selection of a proper model is an essential task in statistical learning. In general, for a give...