We consider several least absolute shrinkage and selection operator (LASSO) penalized likelihood approaches to high dimensional contingency tables with hierarchical log-linear models. These include the proposal of a parametric, analytic, convex approximation to the LASSO. We compare them with "classical" stepwise search algorithms. The results show that both backward elimination and forward selection algorithms select more parsimonious (i.e. sparser) models, which are always hierarchical, unlike the competing LASSO techniques.
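The sparsity contrast drawn in this abstract can be illustrated with a minimal sketch (not the authors' method, and on a toy linear model rather than a log-linear contingency-table model): an L1 (lasso) penalty sets many coefficients exactly to zero, whereas an unpenalized fit leaves all coefficients nonzero. The design, true coefficients, and penalty level below are illustrative assumptions.

```python
# Minimal sketch: L1 penalization yields a sparse coefficient vector,
# unlike ordinary least squares. Toy data, illustrative alpha.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]            # only 3 truly active covariates
y = X @ beta + rng.standard_normal(n)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.2).fit(X, y)

n_ols = int(np.sum(np.abs(ols.coef_) > 1e-8))
n_lasso = int(np.sum(np.abs(lasso.coef_) > 1e-8))
print("nonzero OLS coefficients:  ", n_ols)
print("nonzero lasso coefficients:", n_lasso)
```

Stepwise procedures (backward elimination, forward selection) instead add or drop whole terms one at a time, which is how they can be constrained to stay within the hierarchical model class.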
In this thesis, we consider the linear regression model in the high dimensional setup. In particular...
We consider a linear regression problem in a high dimensional setting where the number of covariates...
In this paper we study the asymptotic properties of the adaptive Lasso estimate in high dimensional ...
We develop a Smooth Lasso for sparse, high dimensional, contingency tables and compare ...
In multi-dimensional contingency tables sparse data occur frequently. For example, with binary como...
In this paper we study post-penalized estimators which apply ordinary, unpenalized linear regression...
During the last few years, a great deal of attention has been focused on lasso and Dantzig selector in ...
In this paper we develop inference for high dimensional linear models, with serially correlated erro...
Penalized logistic regression is extremely useful for binary classification with a large number of ...
We consider the linear regression problem. We propose the S-Lasso procedure to estimate the unknown ...
Note: new title. Former title = Post-ℓ1-Penalized Estimators in High-Dimensional Linear Regression ...
Regression with L1-regularization, Lasso, is a popular algorithm for recovering the sparsity pattern...
The application of the lasso is espoused in high-dimensional settings where only a small number of t...
SUMMARY We propose a pivotal method for estimating high-dimensional sparse linear regression models,...
We consider the least-square linear regression problem with regularization by the $\ell^1$-norm, a p...