Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAEs), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach: AVI initializes the variational parameters, and stochastic variational inference (SVI) then refines them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models wi...
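The hybrid scheme above can be sketched on a toy conjugate model. This is a hypothetical illustration, not the paper's implementation: the model, the deliberately suboptimal `encoder`, and all names are assumptions. With prior p(z) = N(0, 1) and likelihood p(x | z) = N(z, 1), the exact posterior is N(x/2, 1/2), so we can check that a few per-instance SVI gradient steps on the ELBO repair a poor amortized initialization.

```python
import math

# Toy model: p(z) = N(0, 1), p(x | z) = N(z, 1); exact posterior N(x/2, 1/2).
# Variational family: q(z) = N(mu, exp(rho)). Up to a constant,
#   ELBO(mu, rho) = -0.5*(x - mu)**2 - 0.5*mu**2 - exp(rho) + 0.5*rho.

def elbo_grads(x, mu, rho):
    """Analytic ELBO gradients for the toy model above."""
    d_mu = x - 2.0 * mu
    d_rho = 0.5 - math.exp(rho)
    return d_mu, d_rho

def encoder(x):
    """Deliberately suboptimal amortized initialization (the AVI step)."""
    return 0.4 * x, 0.0  # exact optimum would be mu = 0.5 * x, rho = log 0.5

def refine(x, steps=300, lr=0.1):
    """Instance-specific SVI refinement: gradient ascent on the local ELBO."""
    mu, rho = encoder(x)
    for _ in range(steps):
        d_mu, d_rho = elbo_grads(x, mu, rho)
        mu, rho = mu + lr * d_mu, rho + lr * d_rho
    return mu, rho

mu, rho = refine(x=2.0)
# mu approaches x/2 = 1.0 and exp(rho) approaches 0.5, the exact posterior.
```

In the actual semi-amortized setting the inner loop runs for only a handful of steps and is unrolled, so gradients flow through the refinement back into the encoder; here the loop is run to convergence just to show that SVI recovers what the amortized initialization missed.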
Variational Message Passing facilitates automated variational inference in factorized probabilistic ...
© ICLR 2016: San Juan, Puerto Rico. All Rights Reserved. We develop a scalable deep non-parametric g...
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning. They enable scala...
A key advance in learning generative models is the use of amortized inference distributions that are...
A deep latent variable model is a powerful tool for modelling complex distributions. However, in ord...
Inference models are a key component in scaling variational inference to deep latent variable models...
Variational autoencoders (VAEs) are a strong family of deep generative models based on variational in...
Variational inference provides a general optimization framework to approximate the posterior distrib...
In the past few years, generative models have become an interesting topic in the field of Machine Lea...
Deep generative models allow us to learn hidden representations of data and generate new examples. T...
Despite the recent success of probabilistic modeling and its applications, generative models train...
Given some observed data and a probabilistic generative model, Bayesian inference aims at obtaining ...
The core principle of Variational Inference (VI) is to convert the statistical inference problem of ...
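The conversion of inference into optimization can be made concrete with a minimal, hypothetical demo (not drawn from any of the abstracts above). We take the "posterior" to be a known Gaussian N(m, v) purely for illustration, and fit q(z) = N(mu, exp(rho)) by gradient descent on the closed-form KL(q || p), which is exactly the divergence VI minimizes.

```python
import math

def kl_grads(mu, rho, m, v):
    """Gradients of KL(N(mu, exp(rho)) || N(m, v)) w.r.t. mu and rho.

    KL = 0.5*log(v) - 0.5*rho + (exp(rho) + (mu - m)**2) / (2*v) - 0.5
    """
    s2 = math.exp(rho)
    d_mu = (mu - m) / v
    d_rho = 0.5 * (s2 / v - 1.0)
    return d_mu, d_rho

def fit(m, v, steps=400, lr=0.1):
    """Inference as optimization: descend KL until q matches the target."""
    mu, rho = 0.0, 0.0  # arbitrary starting point for q
    for _ in range(steps):
        d_mu, d_rho = kl_grads(mu, rho, m, v)
        mu, rho = mu - lr * d_mu, rho - lr * d_rho
    return mu, rho

mu_hat, rho_hat = fit(m=1.5, v=0.25)
# mu_hat approaches 1.5 and exp(rho_hat) approaches 0.25: within this
# variational family, q recovers the target distribution exactly.
```

In a realistic setting the posterior is intractable, so one maximizes the ELBO (equal to the log evidence minus this same KL) instead of the KL directly; the optimization-based structure of the problem is identical.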
Amortized variational inference, whereby the inferred latent variable posterior distributions are pa...
We introduce the variational filtering EM algorithm, a simple, general-purpose method for performing...