Evaluating predictive performance is essential after fitting a model, and leave-one-out cross-validation is a standard method for doing so. However, it is often not informative for a structured model with many possible prediction tasks. As a solution, leave-group-out cross-validation is an extension in which the left-out groups adapt to different prediction tasks. In this paper, we propose an automatic group-construction procedure for leave-group-out cross-validation to estimate predictive performance when the prediction task is not specified. We also propose an efficient approximation of leave-group-out cross-validation for latent Gaussian models. We implement both procedures in the R-INLA software.
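The sketch below illustrates only the basic leave-group-out idea: each fold holds out an entire group of related observations rather than a single point, and predictive error is scored on the held-out group. It is a minimal Python illustration using scikit-learn's LeaveOneGroupOut with hand-specified groups and an arbitrary ridge model; it is not the automatic group-construction procedure or the latent-Gaussian-model approximation implemented in R-INLA.

```python
# Minimal leave-group-out cross-validation sketch (illustrative only).
# Groups are specified by hand here; the paper above instead constructs
# them automatically when the prediction task is unspecified.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_groups, n_per_group = 10, 20
groups = np.repeat(np.arange(n_groups), n_per_group)   # group label per observation
group_effect = rng.normal(scale=1.0, size=n_groups)[groups]
X = rng.normal(size=(n_groups * n_per_group, 3))
y = X @ np.array([1.0, -0.5, 0.25]) + group_effect + rng.normal(scale=0.3, size=len(groups))

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=groups):
    # Fit on all groups except one, then score predictions on the left-out group.
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    scores.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))

print(f"leave-group-out MSE: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```

Because whole groups are withheld, the score reflects prediction for new groups rather than interpolation within already-observed groups, which is why the choice of grouping matters for structured models.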
We study the problem of selecting a regularization parameter in penalized Gaussian graphical models....
The subject of this thesis is the study of a certain type of resampling algorithms grouped under...
Reliable estimation of the classification performance of learned predictive models is difficult, whe...
Cross-validation can be used to measure a model’s predictive accuracy for the purpose of model compa...
The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validati...
We describe a Monte Carlo investigation of a number of variants of cross-validation for the assessme...
We generalize fast Gaussian process leave-one-out formulae to multiple-fold cross-validation, highli...
We consider comparisons of statistical learning algorithms using multiple data sets, via leave-one-i...
Reliable estimation of the classification performance of inferred predictive models is difficult whe...