We develop unbiased implicit variational inference (UIVI), a method that expands the applicability of variational inference by defining an expressive variational family. UIVI considers an implicit variational distribution obtained in a hierarchical manner using a simple reparameterizable distribution whose variational parameters are defined by arbitrarily flexible deep neural networks. Unlike previous works, UIVI directly optimizes the evidence lower bound (ELBO) rather than an approximation to the ELBO. We demonstrate UIVI on several models, including Bayesian multinomial logistic regression and variational autoencoders, and show that UIVI achieves both tighter ELBO and better predictive performance than existing approaches at a similar computational cost.
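To make the hierarchical construction concrete, here is a minimal Python sketch of sampling from such a semi-implicit variational distribution: mixing noise is drawn from a simple distribution, a neural network maps it to the parameters of a reparameterizable conditional, and a sample is drawn via the reparameterization trick. The two-layer network `cond_params`, the dimensions, and the Gaussian choices for both the mixing distribution and the conditional are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper).
eps_dim, hidden_dim, z_dim = 4, 32, 2

# A small random MLP standing in for the "arbitrarily flexible"
# network that maps the mixing noise eps to variational parameters.
W1 = rng.normal(scale=0.1, size=(eps_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(scale=0.1, size=(hidden_dim, 2 * z_dim))
b2 = np.zeros(2 * z_dim)

def cond_params(eps):
    """Map eps to the mean and log-std of the conditional q(z | eps)."""
    h = np.tanh(eps @ W1 + b1)
    out = h @ W2 + b2
    mu, log_std = out[..., :z_dim], out[..., z_dim:]
    return mu, log_std

def sample_q(n):
    """Draw z ~ q(z) hierarchically: eps ~ q(eps), then
    z ~ q(z | eps) via the reparameterization trick."""
    eps = rng.normal(size=(n, eps_dim))   # simple mixing distribution
    mu, log_std = cond_params(eps)        # NN-defined variational parameters
    u = rng.normal(size=(n, z_dim))       # reparameterization noise
    return mu + np.exp(log_std) * u       # differentiable in W1, b1, W2, b2

z = sample_q(1000)
print(z.mean(axis=0), z.std(axis=0))      # the marginal q(z) is implicit
```

Because `z` is a differentiable function of the network weights, ELBO gradients can flow through the sampler; the marginal q(z), obtained by integrating out eps, has no closed-form density, which is what makes the family implicit yet expressive.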
How can we perform efficient inference and learning in directed probabilistic models, in the presenc...
Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult ...
Variational inference is a scalable technique for approximate Bayesian inference. Deriving variation...
Variational inference (VI) or Variational Bayes (VB) is a popular alternative to MCMC, which doesn't...
Variational inference provides a general optimization framework to approximate the posterior distrib...
This paper introduces the $\textit{variational Rényi bound}$ (VR) that extends traditional variation...
Bayesian...
Variational inference is one of the tools that now lies at the heart of the modern data analysis lif...
Stochastic variational inference offers an attractive option as a defa...
Having access to accurate confidence levels along with the predictions allows one to determine whether m...
Implicit processes (IPs) are a generalization of Gaussian processes (GPs). IPs may lack a closed-for...
Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines...
We introduce implicit processes (IPs), stochastic processes that place implicitly defined multi...
A deep latent variable model is a powerful tool for modelling complex distributions. However, in ord...