Many methods for machine learning rely on approximate inference from intractable probability distributions. Variational inference approximates such distributions by tractable models that can be subsequently used for approximate inference. Learning sufficiently accurate approximations requires a rich model family and careful exploration of the relevant modes of the target distribution. We propose a method for learning accurate GMM approximations of intractable probability distributions based on insights from policy search by using information-geometric trust regions for principled exploration. For efficient improvement of the GMM approximation, we derive a lower bound on the corresponding optimization objective enabling us to update the comp...
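A minimal sketch of the kind of objective this describes, assuming an unnormalized target $\tilde{p}(x)$ and a GMM approximation with weights $w_o$, means $\mu_o$, and covariances $\Sigma_o$ (illustrative symbols, not necessarily the paper's notation):
\[ q(x) = \sum_o w_o\, \mathcal{N}(x \mid \mu_o, \Sigma_o), \qquad J(q) = \mathbb{E}_{q(x)}\big[\log \tilde{p}(x) - \log q(x)\big], \]
\[ \text{maximize } J(q) \quad \text{s.t.} \quad \mathrm{KL}\big(q_{\text{new}}(x \mid o)\,\|\,q_{\text{old}}(x \mid o)\big) \le \epsilon \ \text{ for each component } o, \]
where the per-component KL bound plays the role of an information-geometric trust region, limiting how far each component may move in a single update.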
Variational inference provides a general optimization framework to approximate the posterior distrib...
Computing the partition function of a graphical model is a fundamental task in probabilistic inferen...
Variational methods for model comparison have become popular in the neural computing/machine learni...
Inference from complex distributions is a common problem in machine learning needed for many Bayesia...
Learning in variational inference can broadly be seen as first estimating the class assignment va...
Variational inference is one of the tools that now lies at the heart of the modern data analysis lif...
One of the core problems of modern statistics is to approximate difficult-to-compute probability ...
Automatic decision making and pattern recognition under uncertainty are difficult tasks that are ubi...
Approximating probability densities is a core problem in Bayesian statistics, where the inference in...
We propose a new class of learning algorithms that combines variational approximation and Markov cha...
Mixtures of linear mixed models (MLMMs) are useful for clustering grouped data and can be es...
The Bayesian framework for machine learning allows for the incorporation of prior knowledge in a coh...
Boosting variational inference (BVI) approximates Bayesian posterior distributions by iteratively bu...
Variational inference approximates the posterior distribution of a probabilistic model with a parame...