Despite recent successes in probabilistic modeling and its applications, generative models trained using traditional inference techniques struggle to adapt to new distributions, even when the target distribution is closely related to those seen during training. In this work, we present a doubly-amortized variational inference procedure to address this challenge. By sharing computation not only across a set of query inputs but also across a set of different, related probabilistic models, we learn transferable latent representations that generalize across several related distributions. In particular, given a set of distributions over images, we find that the learned representations transfer to different data transformations. We em...
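A minimal sketch, assuming PyTorch and hypothetical module names (not the paper's released code), of what sharing computation across a set of related models can look like: the encoder conditions on both the query input x and a permutation-invariant summary of a context set drawn from the current distribution, so one network amortizes inference over many related distributions.

    import torch
    import torch.nn as nn

    class DoublyAmortizedEncoder(nn.Module):
        def __init__(self, x_dim=784, c_dim=32, h_dim=64, z_dim=16):
            super().__init__()
            # mean-pooled summary of a context set {x_1, ..., x_n}
            self.summarize = nn.Sequential(nn.Linear(x_dim, c_dim), nn.ReLU())
            self.trunk = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU())
            self.mu = nn.Linear(h_dim, z_dim)
            self.log_var = nn.Linear(h_dim, z_dim)

        def forward(self, x, context):
            c = self.summarize(context).mean(dim=0)    # distribution-level code
            c = c.expand(x.size(0), -1)                # broadcast to the batch
            h = self.trunk(torch.cat([x, c], dim=-1))
            return self.mu(h), self.log_var(h)         # parameters of q(z | x, D)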
Variational inference is one of the tools that now lies at the heart of the modern data analysis lif...
In this paper, we introduce a new form of amortized variational inference by using the forward KL di...
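A minimal sketch of the forward-KL idea under simple assumptions (Gaussian q; the encoder, prior, and likelihood interfaces are hypothetical): averaged over the model's marginal p(x), KL(p(z|x) || q(z|x)) equals -E_{p(z,x)}[log q(z|x)] plus a constant, so the inference network can be trained purely on pairs simulated from the generative model, much like the sleep phase of wake-sleep.

    import torch

    def forward_kl_loss(encoder, prior, likelihood, n=256):
        z = prior.sample((n,))                 # z ~ p(z)
        x = likelihood(z).sample()             # x ~ p(x | z)
        mu, log_var = encoder(x)               # q(z | x) = N(mu, diag(exp(log_var)))
        q = torch.distributions.Normal(mu, (0.5 * log_var).exp())
        return -q.log_prob(z).sum(-1).mean()   # = E_x[KL(p(z|x) || q(z|x))] + const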
Many methods for machine learning rely on approximate inference from intractable probability distrib...
We propose a novel amortized variational inference scheme for an empirical Bay...
This paper introduces a new framework for data efficient and versatile learning. Specifically: 1) We...
We introduce a new, rigorously-formulated Bayesian meta-learning algorithm that learns a probability...
Amortized variational inference (AVI) replaces instance-sp...
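A minimal sketch of the contrast drawn here, with assumed interfaces (elbo returns a scalar ELBO for a single instance; encoder maps inputs to Gaussian posterior parameters): classic stochastic VI refits variational parameters per instance in an inner optimization loop, whereas AVI replaces that loop with a single forward pass through a shared network.

    import torch

    def local_vi(x, elbo, z_dim, steps=100, lr=1e-2):
        # instance-specific: optimize this x's own (mu, log_var) from scratch
        mu = torch.zeros(z_dim, requires_grad=True)
        log_var = torch.zeros(z_dim, requires_grad=True)
        opt = torch.optim.Adam([mu, log_var], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = -elbo(x, mu, log_var)       # maximize the per-instance ELBO
            loss.backward()
            opt.step()
        return mu, log_var

    def amortized_vi(x, encoder):
        # global: one shared network maps any x straight to (mu, log_var)
        return encoder(x)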
Variational inference provides a general optimization framework to approximate the posterior distrib...
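For reference, the objective behind this optimization framework is the evidence lower bound (ELBO); since the gap is a KL divergence, maximizing the ELBO over q minimizes the (reverse) KL divergence from q to the exact posterior:

    \log p(x) = \mathrm{ELBO}(q) + \mathrm{KL}\!\left(q(z)\,\|\,p(z \mid x)\right),
    \qquad
    \mathrm{ELBO}(q) = \mathbb{E}_{q(z)}\!\left[\log p(x, z) - \log q(z)\right].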
We propose probabilistic task modelling – a generative probabilistic model for collections of tasks ...
A key advance in learning generative models is the use of amortized inference distributions that are...
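A minimal sketch of that joint training (the standard VAE recipe with the reparameterization trick; the encoder and decoder interfaces are assumed, with decoder(z) returning a torch distribution over x): the amortized inference distribution and the generative model are updated together through one reparameterized ELBO.

    import torch

    def vae_elbo(x, encoder, decoder):
        mu, log_var = encoder(x)                                # q(z | x) parameters
        z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)   # reparameterize
        rec = decoder(z).log_prob(x).sum(-1)                    # E_q[log p(x | z)]
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)  # KL(q || N(0, I))
        return (rec - kl).mean()                                # one objective, both networks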
Given some observed data and a probabilistic generative model, Bayesian inference aims at obtaining ...
We introduce kernels with random Fourier features in the meta-learning framework for few-shot learni...
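A minimal sketch of the underlying random Fourier feature construction (Rahimi & Recht), in plain NumPy rather than any paper-specific code: the explicit map phi makes phi(x) . phi(y) an unbiased approximation of the RBF kernel exp(-gamma * ||x - y||^2).

    import numpy as np

    def rff_features(X, n_features=256, gamma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # frequencies drawn from the RBF kernel's spectral density
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # phi(x) @ phi(y) ~= exp(-gamma * ||x - y||^2), improving with n_features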
Amortized variational inference, whereby the inferred latent variable posterior distributions are pa...