This paper shows how neural networks may be used to approximate the limited information posterior mean of a simulable model. Because the model is simulable, training and testing samples can be generated at sizes large enough to train a net that is itself large enough, in terms of hidden layers and neurons, to learn the limited information posterior mean with good accuracy. The output of the net can be used as an estimator of the parameter, or, following Jiang et al. (2015), as an input to subsequent classical or Bayesian indirect inference estimation. Targeting the limited information posterior mean using neural nets is simpler, faster, and more successful than targeting the full information posterior mean. Code to replicate th...
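The approach described in the abstract can be sketched end to end: simulate parameter draws from a prior, simulate data for each draw, reduce each dataset to summary statistics, and train a net with squared loss to map statistics to the parameter. Under squared loss the net approximates E[θ | statistics], the limited information posterior mean. The sketch below uses a deliberately toy model (observations N(θ, 1), θ uniform on [0, 5], sample mean and standard deviation as statistics) and a hand-rolled one-hidden-layer net; the model, statistics, and network size are illustrative assumptions, not the paper's application.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Simulate (theta, statistics) training pairs from an assumed toy model:
#    y_1..y_20 ~ N(theta, 1) with theta ~ Uniform(0, 5).
def simulate(n, rng):
    theta = rng.uniform(0.0, 5.0, size=n)
    y = rng.normal(theta[:, None], 1.0, size=(n, 20))
    stats = np.column_stack([y.mean(axis=1), y.std(axis=1)])
    return theta, stats

theta_tr, s_tr = simulate(20000, rng)

# Standardize inputs and targets for stable full-batch gradient descent.
s_mu, s_sd = s_tr.mean(0), s_tr.std(0)
t_mu, t_sd = theta_tr.mean(), theta_tr.std()
X = (s_tr - s_mu) / s_sd
T = (theta_tr - t_mu) / t_sd

# 2. One-hidden-layer tanh net trained with MSE: the minimizer of squared
#    loss is E[theta | statistics], the limited information posterior mean.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    A = np.tanh(X @ W1 + b1)              # hidden activations
    P = (A @ W2 + b2).ravel()             # net predictions
    g = 2 * (P - T) / len(T)              # dLoss/dP
    gW2 = A.T @ g[:, None]; gb2 = g.sum(keepdims=True)
    gA = g[:, None] @ W2.T * (1 - A**2)   # backprop through tanh
    gW1 = X.T @ gA; gb1 = gA.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def posterior_mean(stats):
    """Approximate limited information posterior mean given summary stats."""
    Xq = (np.atleast_2d(stats) - s_mu) / s_sd
    out = np.tanh(Xq @ W1 + b1) @ W2 + b2
    return out.ravel() * t_sd + t_mu
```

The trained `posterior_mean` can then be applied to the statistics of the observed data to give a point estimate, or used as the auxiliary statistic in a subsequent indirect inference step as the abstract suggests.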
Science makes extensive use of simulations to model the world. Statistical inference identifies whic...
Many models in neuroscience, such as networks of spiking neurons or complex biophysical models, are ...
Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult ...
Bayesian statistics is a powerful framework for modeling the world and reasoning over uncertainty. I...
Finding useful representations of data in order to facilitate scientific knowledge generation is a u...
Approximate marginal Bayesian computation and inference are developed for neural network models. The...
Neural networks can be regarded as statistical models, and can be analysed in a Bayesian framework. ...
We describe two specific examples of neural-Bayesian approaches for complex modeling tasks: ...
A fundamental task for both biological perception systems and human-engineered agents is to infer un...
Bayesian statistical inference provides a principled framework for linking mechanistic models of neu...
We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm. SNPLA i...
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2...
Embodied agents, be they animals or robots, acquire information about the world through their senses...
Accepted at the ICML 2022 Workshop on Machine Learning for Astrophysics. ...