How do we compare between hypotheses that are entirely consistent with observations? The marginal likelihood (aka Bayesian evidence), which represents the probability of generating our observations from a prior, provides a distinctive approach to this foundational question, automatically encoding Occam's razor. Although it has been observed that the marginal likelihood can overfit and is sensitive to prior assumptions, its limitations for hyperparameter learning and discrete model comparison have not been thoroughly investigated. We first revisit the appealing properties of the marginal likelihood for learning constraints and hypothesis testing. We then highlight the conceptual and practical issues in using the marginal likelihood as a proxy for generalization.
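For reference, the quantity under discussion is the standard Bayesian evidence, the likelihood integrated against the prior; its ratio between two models is the Bayes factor used for hypothesis comparison:

```latex
% Marginal likelihood (evidence) of model M for data D, and the Bayes factor
% between two models; these are the standard textbook definitions.
p(\mathcal{D} \mid \mathcal{M}) = \int p(\mathcal{D} \mid \theta, \mathcal{M})\, p(\theta \mid \mathcal{M})\, \mathrm{d}\theta,
\qquad
K_{12} = \frac{p(\mathcal{D} \mid \mathcal{M}_1)}{p(\mathcal{D} \mid \mathcal{M}_2)}
```

The automatic Occam's razor arises because p(D | M) is a normalized distribution over datasets: a model flexible enough to explain many datasets must assign correspondingly low evidence to any single one of them.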
We argue that human inductive generalization is best explained in a Bayesian framework, rather than ...
Recent advances in Markov chain Monte Carlo (MCMC) extend the sc...
This note describes a Bayesian model selection or optimization procedure for post hoc infere...
This is an up-to-date introduction to, and overview of, marginal likelihood computation for model se...
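To make the computational problem concrete, here is a minimal sketch of the simplest estimator, naive Monte Carlo averaging of the likelihood over prior draws; the function names and the toy Gaussian model are illustrative assumptions, not taken from the overview itself:

```python
import numpy as np

def log_evidence_mc(log_likelihood, prior_sampler, n_samples=100_000, seed=0):
    """Naive Monte Carlo estimate of log p(D | M) = log E_{theta ~ prior}[p(D | theta)]."""
    rng = np.random.default_rng(seed)
    thetas = prior_sampler(rng, n_samples)
    log_liks = np.array([log_likelihood(t) for t in thetas])
    # log-mean-exp for numerical stability
    m = log_liks.max()
    return m + np.log(np.mean(np.exp(log_liks - m)))

# Toy model: Gaussian observations with unknown mean, N(0, 1) prior, unit noise.
data = np.array([0.3, -0.1, 0.8, 0.4])
log_lik = lambda mu: -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)
prior = lambda rng, n: rng.normal(0.0, 1.0, size=n)
print(log_evidence_mc(log_lik, prior))
```

This estimator is unbiased but its variance grows badly when the posterior is much narrower than the prior, which is precisely what motivates the more sophisticated schemes such overviews survey.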
The Laplace approximation yields a tractable marginal likelihood for Bayesian neural networks. This ...
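In its standard form, this is a second-order expansion of the log joint density around a MAP estimate; the sketch below shows that generic form, which may differ in detail from the neural-network variant described here:

```latex
% Laplace approximation to the log evidence around the MAP estimate \hat{\theta};
% H is the Hessian of the negative log joint at \hat{\theta}, d = \dim(\theta).
\log p(\mathcal{D} \mid \mathcal{M})
  \approx \log p(\mathcal{D} \mid \hat{\theta}, \mathcal{M})
        + \log p(\hat{\theta} \mid \mathcal{M})
        + \tfrac{d}{2}\log 2\pi
        - \tfrac{1}{2}\log \lvert H \rvert
```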
A multi-level model allows the possibility of marginalization across levels in different ways, yield...
In a Bayesian analysis, different models can be compared on the basis of the expected or marginal lik...
Since Bayesian learning for neural networks was introduced by MacKay it was applied to real world pr...
Bayesian variable selection often assumes normality, but the effects of model misspecification ar...
Bayesian analysis methods often use some form of iterative simulation such as Monte Carlo computatio...
In Bayesian statistics, the marginal likelihood, also known as the evidence, is used to evaluate mod...
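For a self-contained illustration of evidence-based model evaluation, the Beta-Binomial model admits a closed-form marginal likelihood; the fair-coin versus Beta(1, 1) comparison below is a hypothetical example, not drawn from the cited work:

```python
from math import lgamma, comb, log

def log_beta(a, b):
    # log of the Beta function, B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_evidence_beta_binomial(k, n, a, b):
    """Exact log evidence for k successes in n trials under a Beta(a, b) prior:
    p(D | M) = C(n, k) B(k + a, n - k + b) / B(a, b)."""
    return log(comb(n, k)) + log_beta(k + a, n - k + b) - log_beta(a, b)

# Compare a fair-coin point hypothesis against a flexible Beta(1, 1) model.
k, n = 62, 100
log_ev_flex = log_evidence_beta_binomial(k, n, 1.0, 1.0)
log_ev_fair = log(comb(n, k)) + n * log(0.5)
print(log_ev_flex - log_ev_fair)  # log Bayes factor; positive favors the flexible model
```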
Modern statistical software and machine learning libraries are enabling semi-automated statistical i...
Bayesian workflows often require the introduction of nuisance parameters, yet for core science model...
Understanding how feature learning affects generalization is among the foremost goals of modern deep...
Bayesian model comparison requires the specification of a prior distribution on the parameter space...