Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either type of model can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary to each other, clarify when each is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
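To make the two parameterizations concrete, here is a minimal sketch in plain NumPy. The scalar Gaussian latent, the objective f, and the sample size are illustrative assumptions, not details taken from the paper. In the centered form the latent z is drawn directly from N(mu, sigma^2), so a gradient with respect to mu must go through the score function; in the differentiable non-centered form, z = mu + sigma * eps with eps ~ N(0, 1), and the gradient flows through the deterministic map.

```python
# Minimal sketch: centered vs. differentiable non-centered parameterization
# of a scalar Gaussian latent. The objective f and all constants are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.5, 0.8              # parameters of the latent z ~ N(mu, sigma^2)
f = lambda z: np.sin(z) + z**2    # some smooth function of z
df = lambda z: np.cos(z) + 2 * z  # its derivative

n = 100_000

# Centered parameterization (CP): z is sampled directly, so the sample has
# no differentiable dependence on mu; estimating d/dmu E[f(z)] falls back
# on the score-function (REINFORCE) estimator.
z_cp = rng.normal(mu, sigma, size=n)
grad_cp = np.mean(f(z_cp) * (z_cp - mu) / sigma**2)

# Differentiable non-centered parameterization (DNCP): z = mu + sigma * eps
# with eps ~ N(0, 1), so the gradient flows through the deterministic map
# (the reparameterization trick); here dz/dmu = 1.
eps = rng.normal(size=n)
z_ncp = mu + sigma * eps
grad_ncp = np.mean(df(z_ncp))     # pathwise gradient estimator

print(f"score-function (CP) estimate:  {grad_cp:.4f}")
print(f"pathwise (DNCP) estimate:      {grad_ncp:.4f}")
```

Both estimators target the same gradient of E[f(z)], but the pathwise estimator obtained from the non-centered form typically has much lower variance, which is what makes gradient-based inference efficient in that parameterization.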
Understanding the relationship between connectionist and probabilistic models is important for evalu...
There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes...
In the construction of a Bayesian network, it is always assumed that the variables starting ...
We propose a technique for increasing the efficiency of gradient-based inference and learning in Bay...
Bayesian statistics is a powerful framework for modeling the world and reasoning over uncertainty. I...
Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal wit...
Hamiltonian Monte Carlo is a widely used algorithm for sampling from posterior distributions of comp...
Recently, singular learning theory has been analyzed using algebraic geometry as its basis....
We propose a new variational family for Bayesian neural networks. We decompose the variational poste...
Neural networks can be regarded as statistical models, and can be analysed in a Bayesian framework. ...
Neural networks are flexible models capable of capturing complicated data relationships. However, ne...
Approximate marginal Bayesian computation and inference are developed for neural network models. The...
The last decade witnessed a growing interest in Bayesian learning. Yet, the technicality of the topi...
The main challenge in Bayesian models is to determine the posterior for the model parameters. Alread...