Bayesian hierarchical models are increasingly used in many applications. In parallel, the desire to check the predictive capabilities of these models grows. However, classic Bayesian tools for model selection, such as the marginal likelihood of the models, are often unavailable analytically, and the models have to be estimated with MCMC methodology. This also renders leave-one-out cross-validation of the models infeasible for realistically sized data sets. In this thesis we therefore propose approximate cross-validation sampling schemes, based on work by Marshall and Spiegelhalter (2003), for two model classes: conjugate change point models are applied to time series, while normal linear mixed models are used to analyze longitudinal data. The qua...
The goal of this paper is to compare several widely used Bayesian model selection methods in practic...
In Bayesian statistics, the marginal likelihood, also known as the evidence, is used to evaluate mod...
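The marginal likelihood mentioned above can be made concrete on a toy conjugate model. The sketch below, which is illustrative and not taken from any of the cited works, computes the log evidence of a normal model with a normal prior on the mean in two equivalent ways: directly from the joint marginal distribution of the data, and as the sum of one-step-ahead log predictive densities. All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lik_var, prior_mean, prior_var = 5, 1.0, 0.0, 4.0
y = rng.normal(1.0, 1.0, size=n)  # toy data: y_i ~ Normal(mu, 1)

# Joint marginal of y under the prior mu ~ Normal(prior_mean, prior_var):
# y ~ Normal(prior_mean * 1, lik_var * I + prior_var * 1 1^T)
cov = lik_var * np.eye(n) + prior_var * np.ones((n, n))
resid = y - prior_mean
sign, logdet = np.linalg.slogdet(cov)
log_ev_joint = -0.5 * (n * np.log(2 * np.pi) + logdet
                       + resid @ np.linalg.solve(cov, resid))

# The same quantity via the chain of one-step-ahead predictives:
# log p(y) = sum_i log p(y_i | y_1, ..., y_{i-1})
m, v = prior_mean, prior_var
log_ev_seq = 0.0
for yi in y:
    pred_var = v + lik_var  # predictive: Normal(m, v + lik_var)
    log_ev_seq += -0.5 * (np.log(2 * np.pi * pred_var)
                          + (yi - m) ** 2 / pred_var)
    # conjugate posterior update of (m, v) after observing yi
    v_new = 1.0 / (1.0 / v + 1.0 / lik_var)
    m = v_new * (m / v + yi / lik_var)
    v = v_new
```

The two computations agree to numerical precision, which is precisely the decomposition that connects the evidence to sequential prediction; for non-conjugate hierarchical models neither side is available in closed form, which is what motivates the MCMC-based approximations discussed in these abstracts.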
We consider comparisons of statistical learning algorithms using multiple data sets, via leave-one-i...
This thesis will be concerned with the application of a cross-validation criterion to the choice and as...
Longitudinal models are commonly used for studying data collected on individuals repeatedly through ...
Cross-validation can be used to measure a model’s predictive accuracy for the purpose of model compa...
Model inference, such as model comparison, model checking, and model selection, is an important part...
Abstract: A natural method for approximating out-of-sample predictive evaluation is leave-one-out cr...
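The approximation of leave-one-out cross-validation from a single posterior sample, which several of these abstracts build on, can be sketched with plain importance sampling on a toy conjugate model. This is a minimal illustration of the generic identity, not the method of any particular cited work, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ Normal(mu, 1), with a Normal(0, 10^2) prior on mu.
y = rng.normal(loc=1.0, scale=1.0, size=20)
n = y.size
prior_var, lik_var = 100.0, 1.0

# Exact conjugate posterior for mu, standing in for MCMC draws.
post_var = 1.0 / (1.0 / prior_var + n / lik_var)
post_mean = post_var * (y.sum() / lik_var)
S = 4000
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=S)

def log_lik(y_i, mu):
    """Pointwise Gaussian log-likelihood log p(y_i | mu)."""
    return -0.5 * np.log(2 * np.pi * lik_var) - 0.5 * (y_i - mu) ** 2 / lik_var

def logsumexp(a):
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

# Importance-sampling LOO: with draws mu_s ~ p(mu | y), the identity
#   p(y_i | y_{-i}) ≈ 1 / ( (1/S) * sum_s 1 / p(y_i | mu_s) )
# gives the LOO predictive density from full-data posterior draws.
loo_elpd = 0.0
for i in range(n):
    ll = log_lik(y[i], mu_draws)            # (S,) log-likelihoods
    loo_elpd += np.log(S) - logsumexp(-ll)  # stable log harmonic mean
```

The raw importance weights used here can have heavy tails; smoothing or reweighting schemes (as in the abstracts above) exist precisely to stabilize this estimator for less well-behaved models.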
In the development of Bayesian model specification for inference and prediction we focus on the con...