Bayesian inference problems require sampling or approximating high-dimensional probability distributions. The focus of this paper is on the recently introduced Stein variational gradient descent methodology, a class of algorithms that rely on iterated steepest descent steps with respect to a reproducing kernel Hilbert space norm. This construction leads to interacting particle systems, the mean-field limit of which is a gradient flow on the space of probability distributions equipped with a certain geometrical structure. We leverage this viewpoint to shed some light on the convergence properties of the algorithm, in particular addressing the problem of choosing a suitable positive definite kernel function. Our analysis leads us to consideri...
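The "iterated steepest descent steps with respect to a reproducing kernel Hilbert space norm" described above correspond, at the particle level, to the standard SVGD update of Liu and Wang. Below is a minimal sketch of that update, not the authors' implementation: it assumes an RBF kernel with fixed bandwidth and a standard Gaussian target, and the names `rbf_kernel` and `svgd_step` are illustrative.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / h) together with
    grad_K[i] = sum_j grad_{x_j} k(x_j, x_i), the repulsive term of SVGD."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / h)
    grad_K = 2.0 * (X * K.sum(axis=1, keepdims=True) - K @ X) / h
    return K, grad_K

def svgd_step(X, grad_log_p, step_size=0.1, h=1.0):
    """One SVGD step: move each particle along the kernelized descent direction
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    phi = (K @ grad_log_p(X) + grad_K) / n
    return X + step_size * phi

# Toy usage: target p = N(0, I) in 2D, so grad log p(x) = -x.
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, size=(200, 2))   # particles, deliberately offset from the target
for _ in range(500):
    X = svgd_step(X, lambda x: -x)
print(X.mean(axis=0), X.std(axis=0))     # mean should move toward 0, spread toward 1
```

The first term in `phi` drags particles toward high-density regions of the target, while the kernel-gradient term keeps them spread out; the mean-field limit of this interacting particle system is the gradient flow referred to in the abstract.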
Gradient information on the sampling distribution can be used to reduce the variance of Monte Carlo ...
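As a small illustration of this claim (a hedged sketch, not the method of the paper quoted above), the score grad log p of a standard normal can be turned into a zero-mean Stein control variate that reduces the variance of a plain Monte Carlo estimate of E[X^2]:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)          # samples from p = N(0, 1); score: grad log p(x) = -x

f = x**2                              # integrand, true value E[f] = 1
# Stein identity: E[ g'(x) + g(x) * grad log p(x) ] = 0 for smooth g.
# With g(x) = x this gives the zero-mean control variate h(x) = 1 - x^2.
h = 1.0 + x * (-x)
cov = np.cov(f, h)
c = cov[0, 1] / cov[1, 1]             # variance-minimizing coefficient
print(f.mean(), (f - c * h).mean())   # in this toy case the corrected estimate is exact
```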
The rapid progress in machine learning in recent years has been based on a highly productive connect...
We study mean-field variational inference in a Bayesian linear model when the sample size n is compa...
Particle-based variational inference (VI) minimizes the KL divergence between model samples and the ...
Approximate Bayesian inference estimates descriptors of an intractable target distribution - in esse...
Stein Variational Gradient Descent (SVGD) is a popular sampling algorithm used in various machine le...
We provide the first finite-particle convergence rate for Stein variational gradient descent (SVGD),...
In Bayesian inference, the posterior distributions are difficult to obtain analytically for complex ...
Stein Variational Gradient Descent (SVGD) is an algorithm for sampling from a target density which i...
This paper introduces a novel variational inference (VI) method with Bayesian and gradient descent t...
Sampling a probability distribution with an unknown normalization constant is a fundamental problem ...
Stein discrepancies have emerged as a powerful statistical tool, being applied to fundamental statis...
Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI) has emerged as a cent...