Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO-CV) is a general approach for assessing the generalizability of a model, but unfortunately it does not scale well to large datasets. We propose combining approximate inference techniques with probability-proportional-to-size (PPS) subsampling for fast LOO-CV model evaluation on large data. We provide both theoretical and empirical results showing that the method has good properties for large data.
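As a minimal illustration of the PPS idea (not the authors' full method), the sketch below estimates the total pointwise LOO predictive density by computing the expensive exact value for only a small subsample, drawn with probability proportional to a cheap per-observation proxy, and applying the Hansen–Hurwitz estimator. All names and the simulated data are hypothetical placeholders for an actual model's approximate and exact pointwise log predictive densities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n observations. elpd_approx plays the role of a cheap
# per-observation approximation (e.g., from a fast posterior approximation);
# elpd_exact stands in for the costly exact pointwise log predictive density.
n = 100_000
elpd_approx = -np.abs(rng.normal(1.0, 0.5, n))
elpd_exact = elpd_approx + rng.normal(0.0, 0.05, n)

# PPS subsampling: draw m << n indices with replacement, with probability
# proportional to the magnitude of the cheap proxy values.
m = 500
p = np.abs(elpd_approx) / np.abs(elpd_approx).sum()
idx = rng.choice(n, size=m, replace=True, p=p)

# Hansen-Hurwitz estimator of the total sum(elpd_exact): average of y_j / p_j
# over the subsample. Only the m "exact" evaluations are needed.
est = np.mean(elpd_exact[idx] / p[idx])
print(est, elpd_exact.sum())
```

Because the proxy is strongly correlated with the exact values, the ratios `y_j / p_j` are nearly constant, so the estimator has low variance even though only a tiny fraction of the exact computations are performed.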