Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either type of model can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary, clarify when each is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are suppo...
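The distinction between the two parameterizations can be illustrated with a minimal sketch for a Gaussian latent variable (the function names and the choice of test function f(z) = z² are illustrative, not from the paper). In the centered form, z is drawn directly from its conditional distribution; in the differentiable non-centered form, auxiliary noise eps ~ N(0, 1) is drawn first and z is a deterministic, differentiable function of the parameters and the noise, which is what lets gradients pass through the sampling step in a Monte Carlo estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_centered(mu, sigma, n):
    # Centered parameterization: z is sampled directly, so the sample
    # carries no explicit differentiable dependence on (mu, sigma).
    return rng.normal(mu, sigma, size=n)

def sample_non_centered(mu, sigma, n):
    # Differentiable non-centered parameterization: draw parameter-free
    # noise, then map it deterministically: z = g(eps; mu, sigma).
    eps = rng.standard_normal(n)
    return mu + sigma * eps

def grad_estimate(mu, sigma, n=100_000):
    # Because z = mu + sigma * eps is differentiable in (mu, sigma),
    # gradients of E[f(z)] can be estimated by differentiating through
    # the samples: dz/dmu = 1 and dz/dsigma = eps.
    eps = rng.standard_normal(n)
    z = mu + sigma * eps
    f_prime = 2.0 * z  # derivative of the test function f(z) = z**2
    return f_prime.mean(), (f_prime * eps).mean()

g_mu, g_sigma = grad_estimate(1.0, 0.5)
# For f(z) = z^2 we have E[f(z)] = mu^2 + sigma^2, so the exact
# gradients are d/dmu = 2*mu = 2.0 and d/dsigma = 2*sigma = 1.0;
# the Monte Carlo estimates should be close to these values.
```

This is the same mechanism that makes the non-centered form amenable to the simple Monte Carlo estimator mentioned in the abstract: the stochasticity is moved into a parameter-free noise source, so the objective becomes an expectation of a differentiable function.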
Abstract: In the construction of a Bayesian network, it is always assumed that the variables starting ...
The last decade witnessed a growing interest in Bayesian learning. Yet, the technicality of the topi...
42 pages, 16 figures, 1 table. We consider the problem of reducing the dimensions of parameters and da...
Bayesian statistics is a powerful framework for modeling the world and reasoning over uncertainty. I...
We propose a technique for increasing the efficiency of gradient-based inference and learning in Bay...
Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal wit...
Hamiltonian Monte Carlo is a widely used algorithm for sampling from posterior distributions of comp...
Recently, singular learning theory has been analyzed using algebraic geometry as its basis....
We propose a new variational family for Bayesian neural networks. We decompose the variational poste...
The main challenge in Bayesian models is to determine the posterior for the model parameters. Alread...
Neural networks can be regarded as statistical models, and can be analysed in a Bayesian framework. ...
Approximate marginal Bayesian computation and inference are developed for neural network models. The...
Neural networks are flexible models capable of capturing complicated data relationships. However, ne...
Learning parameters of a probabilistic model is a necessary step in most machine learning modeling t...