We study the problem of regularization parameter selection in penalized Gaussian graphical models. When the goal is to obtain a model with good predictive power, cross-validation is the gold standard. We present a new estimator of the Kullback-Leibler loss in Gaussian graphical models which provides a computationally fast alternative to cross-validation. The estimator is obtained by approximating leave-one-out cross-validation. Our approach is demonstrated on simulated data sets for various types of graphs. The proposed formula exhibits superior performance, especially in the typical small-sample-size scenario, compared with other available alternatives to cross-validation, such as Akaike's information criterion and generalized approximate cross-validation.
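To make the setting concrete, the sketch below illustrates the baseline that the proposed estimator is meant to replace: it is not the paper's formula, but a plain K-fold cross-validation of the Gaussian negative log-likelihood, tr(S_test Theta) - log det(Theta), over a grid of graphical-lasso penalties. The function name cv_negloglik, the fold count, and the penalty grid are illustrative assumptions, and scikit-learn's graphical_lasso stands in for a generic penalized precision-matrix estimator.

# A minimal sketch, assuming scikit-learn: select the graphical-lasso penalty
# by cross-validating the held-out Gaussian negative log-likelihood, the
# out-of-sample loss that a Kullback-Leibler-type criterion estimates.
import numpy as np
from sklearn.covariance import empirical_covariance, graphical_lasso
from sklearn.model_selection import KFold

def cv_negloglik(X, alphas, n_splits=5, seed=0):
    # Average held-out negative log-likelihood (up to constants) per alpha.
    scores = np.zeros(len(alphas))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        S_train = empirical_covariance(X[train])
        S_test = empirical_covariance(X[test])
        for i, alpha in enumerate(alphas):
            # Penalized precision (inverse covariance) estimate on the training fold.
            _, theta = graphical_lasso(S_train, alpha=alpha)
            _, logdet = np.linalg.slogdet(theta)
            scores[i] += np.trace(S_test @ theta) - logdet
    return scores / n_splits

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))      # small-sample scenario: n = 60, p = 10
alphas = np.logspace(-1.5, 0, 8)       # illustrative penalty grid
best = alphas[np.argmin(cv_negloglik(X, alphas))]
print(f"selected regularization parameter: {best:.3f}")

Each fold and each candidate penalty requires solving a separate graphical-lasso problem, which is exactly the computational burden that an estimator approximating leave-one-out cross-validation by a single formula aims to avoid.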