Stochastic gradient Markov chain Monte Carlo algorithms are popular samplers for approximate inference, but they are generally biased. We show that many recent versions of these methods (e.g., Chen et al. (2014)) cannot be corrected using Metropolis-Hastings rejection sampling, because their acceptance probability is always zero. This can be fixed by employing a sampler with realizable backwards trajectories, such as Gradient-Guided Monte Carlo (Horowitz, 1991), which generalizes both stochastic gradient Langevin dynamics (Welling and Teh, 2011) and Hamiltonian Monte Carlo. We show that this sampler can be used with stochastic gradients, yielding nonzero acceptance probabilities that can be computed even across multiple steps.
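To make the SGLD building block mentioned above concrete, here is a minimal sketch of a stochastic gradient Langevin dynamics update on a toy target (a 1-D Gaussian mean with a flat prior). The target, the function names, and all parameter values are illustrative assumptions, not details from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: infer the mean of a 1-D Gaussian with known unit variance
# and a flat prior, from N observations. Gradients use minibatches.
N = 1000
data = rng.normal(2.0, 1.0, size=N)

def stoch_grad_log_post(theta, batch):
    # Minibatch estimate of the log-posterior gradient:
    # (N / |batch|) * sum_i (x_i - theta), flat prior assumed.
    return (N / len(batch)) * np.sum(batch - theta)

def sgld(theta0, eps, n_steps, batch_size):
    theta = theta0
    samples = []
    for _ in range(n_steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        grad = stoch_grad_log_post(theta, batch)
        # SGLD update: half-step along the noisy gradient plus
        # injected Gaussian noise with variance eps.
        theta = theta + 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
        samples.append(theta)
    return np.array(samples)

samples = sgld(theta0=0.0, eps=1e-4, n_steps=5000, batch_size=32)
```

Because no Metropolis-Hastings correction is applied, the chain is biased for any finite step size; the abstract above is about restoring a usable accept/reject step for exactly this family of samplers.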
Despite the powerful advantages of Bayesian inference such as quantifying uncertainty, accurate av...
Particle Markov Chain Monte Carlo (PMCMC) samplers allow for routine inference of parameters and sta...
In this paper we propose a new framework for learning from large scale datasets based on iterative l...
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally e...
We introduce a novel and efficient algorithm called the stochastic approximate gradient descent (SAG...
International audienceStochastic Gradient Langevin Dynamics (SGLD) has emerged as a key MCMC algorit...
Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals w...
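The distant-proposal mechanism HMC refers to is simulated Hamiltonian dynamics followed by a Metropolis-Hastings accept/reject step. A minimal leapfrog-based sketch on an assumed toy target (a standard 2-D Gaussian; all names and parameter values are illustrative, not from the cited abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_p(q):
    # Toy target: standard 2-D Gaussian, log p(q) = -0.5 * q.q
    return -q

def leapfrog(q, p, eps, n_leap):
    # Leapfrog integrator: simulates Hamiltonian dynamics so the proposal
    # can travel far from the current state with small energy error.
    p = p + 0.5 * eps * grad_log_p(q)
    for _ in range(n_leap - 1):
        q = q + eps * p
        p = p + eps * grad_log_p(q)
    q = q + eps * p
    p = p + 0.5 * eps * grad_log_p(q)
    return q, p

def hmc_step(q, eps=0.1, n_leap=20):
    p = rng.normal(size=q.shape)                 # resample momentum
    q_new, p_new = leapfrog(q, p, eps, n_leap)
    # Metropolis-Hastings accept/reject on the joint (q, p) energy.
    h_old = 0.5 * q @ q + 0.5 * p @ p
    h_new = 0.5 * q_new @ q_new + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

q = np.zeros(2)
samples = []
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

The accept/reject step is what keeps HMC exact; the stochastic-gradient variants surveyed above drop or break this step, which is the issue the first abstract addresses.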
Stochastic gradient MCMC methods, such as stochastic gradient Langevin dynamics (SGLD), employ fast ...
Traditional algorithms for Bayesian posterior inference require processing the entire dataset in eac...
In applications of Gaussian processes where quantification of uncertainty is of primary interest, it...