High-dimensional matrix data are common in modern data analysis. Simply applying Lasso after vectorizing the observations ignores essential row and column information inherent in such data, rendering variable selection results less useful. In this paper, we propose a new approach that takes advantage of the structural information. The estimate is easy to compute and possesses favorable theoretical properties. Compared with Lasso, the new estimate can recover the sparse structure in both rows and columns under weaker assumptions. Simulations demonstrate its better performance in variable selection and convergence rate, compared to methods that ignore such information. An application to a dataset in medical science shows the usefulness of the...
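The abstract does not specify the form of the estimator, so the following is only a minimal sketch of the idea it describes: vectorizing matrix covariates and running the plain Lasso penalizes entries individually, whereas a structure-aware penalty (here an assumed row-wise group penalty, solved by proximal gradient descent) can zero out whole rows jointly. All function names and the penalty form are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 10, 8                  # n samples of p-by-q matrix covariates
B_true = np.zeros((p, q))
B_true[:3, :2] = 1.0                  # signal concentrated in a few rows/columns

X = rng.standard_normal((n, p, q))
y = np.einsum("npq,pq->n", X, B_true) + 0.1 * rng.standard_normal(n)

def row_soft_threshold(B, t):
    """Shrink whole rows toward zero: prox of t * sum_r ||B_r||_2."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12)) * B

def fit_structured(X, y, lam=0.05, iters=500):
    """Proximal gradient on (1/2n)||y - <X, B>||^2 with a row-group penalty
    (an assumed form of structured sparsity, not the paper's estimator)."""
    n, p, q = X.shape
    Xmat = X.reshape(n, p * q)
    lr = n / np.linalg.norm(Xmat, 2) ** 2   # 1 / Lipschitz constant of gradient
    B = np.zeros((p, q))
    for _ in range(iters):
        grad = (Xmat.T @ (Xmat @ B.ravel() - y)).reshape(p, q) / n
        B = row_soft_threshold(B - lr * grad, lr * lam)
    return B

B_hat = fit_structured(X, y)
print("rows selected:", np.flatnonzero(np.linalg.norm(B_hat, axis=1) > 1e-8))
```

By contrast, applying the Lasso to `B.ravel()` imposes an entrywise l1 penalty that cannot favor entire zero rows or columns, which is the row/column structure the abstract says is lost under vectorization.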
Regression from high-dimensional observation vectors is particularly difficult when training data i...
Binary logistic regression with a sparsity constraint on the solution plays a vital role in many hig...
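The abstract is truncated, but l1-penalized logistic regression itself is standard. A minimal sketch using scikit-learn, which is an assumed tool here (the abstract names no software):

```python
# Sparse (L1-penalized) logistic regression: the penalty drives most
# coefficients exactly to zero, performing variable selection.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 50))
w = np.zeros(50)
w[:5] = 1.5                                     # only 5 informative features
y = (X @ w + 0.5 * rng.standard_normal(300) > 0).astype(int)

# C is the inverse regularization strength; smaller C gives sparser solutions
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(clf.coef_))
```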
Continuous variable selection using shrinkage procedures has recently been considered as favorable ...
The Lasso is an attractive technique for regularization and variable selection for high-dimensional ...
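Since the Lasso is the baseline method throughout this list, a worked sketch may help: it solves $\min_\beta \frac{1}{2n}\lVert y - X\beta\rVert_2^2 + \lambda\lVert\beta\rVert_1$, and cyclic coordinate descent with soft-thresholding is a standard solver. The helper names below are mine, not from any of the cited abstracts.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1.
    Assumes no all-zero columns in X."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                       # running residual y - X b
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = 2.0
y = X @ beta + 0.1 * rng.standard_normal(100)
print(np.round(lasso_cd(X, y, lam=0.1), 2))    # only the first 3 survive
```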
High-dimensional datasets, where the number of measured variables is larger than the sample size, ar...
We propose a new sparse regression method called the component lasso, based on a simple idea. The me...
In this paper, we discuss a parsimonious approach to estimation of high-dimensional covariance matri...
In a growing number of applications, a quantity of interest may depend on several covariates, with at leas...
Variable selection and estimation for high-dimensional data have become a topic of foremost importan...
We consider a high-dimensional linear regression problem, where the covariates (features) are ord...
The abundance of available digital big data has created new challenges in identifying relevant varia...
In this paper, we consider the Group Lasso estimator of the covariance matrix of a stochastic proces...
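The truncated abstract does not show how the Group Lasso is applied to the covariance setting, but the penalty itself, $\lambda \sum_g \lVert v_g \rVert_2$ over prespecified groups, and its proximal operator (blockwise soft-thresholding) are standard. A minimal sketch of that building block, with illustrative names:

```python
import numpy as np

def prox_group_lasso(v, groups, t):
    """Proximal operator of t * sum_g ||v_g||_2: each block is either
    shrunk toward zero or zeroed out entirely.
    `groups` is a list of index arrays partitioning the coordinates."""
    out = v.copy()
    for g in groups:
        nrm = np.linalg.norm(v[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * v[g]
    return out

v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
print(prox_group_lasso(v, groups, t=1.0))
# first block shrinks (norm 5 -> 4); second block (norm ~0.14) is zeroed
```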
For multiple index models, it has recently been shown that the sliced inverse regression (SIR) is co...
With advances in data-collection technology, repeated measurements with high-dimensional c...
Regression models are a class of supervised learning methods important in machine learning,...