We study generalized Bayesian inference under misspecification, i.e. when the model is ‘wrong but useful’. Generalized Bayes equips the likelihood with a learning rate η. We show that for generalized linear models (GLMs), η-generalized Bayes concentrates around the best approximation of the truth within the model for specific η ≠ 1, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We then derive MCMC samplers for generalized Bayesian lasso and logistic regression, and give examples of both simulated and real-world data in which generalized Bayes outperforms standard Bayes by a vast margin.
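The η-generalized posterior tempers the likelihood by the learning rate, π_η(θ | D) ∝ π(θ) p(D | θ)^η, so η = 1 recovers standard Bayes. A minimal sketch of a random-walk Metropolis sampler targeting this tempered posterior for logistic regression (the Gaussian prior, step size, and all variable names are illustrative assumptions, not the abstract's actual samplers):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(beta, X, y, eta, tau=10.0):
    """Log of the eta-generalized posterior: eta * log-likelihood + log-prior.
    Gaussian N(0, tau^2) prior on each coefficient (an illustrative choice)."""
    logits = X @ beta
    # Numerically stable Bernoulli log-likelihood for logistic regression
    loglik = np.sum(y * logits - np.logaddexp(0.0, logits))
    logprior = -0.5 * np.sum(beta**2) / tau**2
    return eta * loglik + logprior

def mh_sampler(X, y, eta, n_iter=5000, step=0.1):
    """Random-walk Metropolis chain targeting the eta-generalized posterior."""
    beta = np.zeros(X.shape[1])
    lp = log_posterior(beta, X, y, eta)
    samples = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.shape)
        lp_prop = log_posterior(prop, X, y, eta)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        samples.append(beta.copy())
    return np.array(samples)

# Toy data: logistic model with true coefficients [1.5, -2.0]
X = rng.standard_normal((200, 2))
true_beta = np.array([1.5, -2.0])
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

samples = mh_sampler(X, y, eta=0.5)   # eta < 1 down-weights the likelihood
print(samples[2500:].mean(axis=0))    # posterior mean after burn-in
```

Setting η < 1 flattens the likelihood relative to the prior, which is what makes the generalized posterior robust when the noise model is badly wrong.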
Due to the ease of modern data collection, applied statisticians often have access to a large set of...
Generalized linear models (GLMs) are popular for data-analysis in almost all quantitative sciences, ...
This thesis explores how a Bayesian should update their beliefs in the knowledge that any model ava...
We empirically show that Bayesian inference can be inconsistent under misspecification in simple lin...
The concept of safe Bayesian inference [4] with learning rates [5] has recently sparked a lot of r...
Bayesian model selection poses two main challenges: the specification of parameter priors for all mo...
Generalized linear models (GLMs), such as logistic regression, Poisson regression, and robust regre...
This study takes up inference in linear models with generalized error and generalized t distribution...
We describe a Bayesian learning algorithm for Robust General Linear Models (RGLMs). The noise is mod...