We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as optimization on the space of probability measures, with the Kullback-Leibler (KL) divergence as the objective functional. We show that an underdamped form of the Langevin algorithm performs accelerated gradient descent in this metric. To characterize the algorithm's convergence, we construct a Lyapunov functional and exploit the hypocoercivity of the underdamped Langevin dynamics. As an application, we show that accelerated rates can be obtained for a class of nonconvex functions with the Langevin algorithm.
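The underdamped Langevin dynamics referred to above augment the position with a velocity variable subject to friction and noise. As a minimal sketch (not taken from the paper), the following simulates a semi-implicit Euler discretization for an illustrative standard Gaussian target U(x) = x^2/2; the step size `h`, friction `gamma`, and target are assumed choices for demonstration only.

```python
import numpy as np

def grad_U(x):
    return x  # gradient of the illustrative potential U(x) = x^2 / 2

def underdamped_langevin(n_steps=50_000, h=0.05, gamma=2.0, seed=0):
    """Semi-implicit Euler discretization of underdamped Langevin dynamics."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        # velocity update: friction drag, potential gradient, injected noise
        v = v - h * gamma * v - h * grad_U(x) \
            + np.sqrt(2.0 * gamma * h) * rng.standard_normal()
        # position update uses the freshly updated velocity
        x = x + h * v
        samples[k] = x
    return samples

samples = underdamped_langevin()
# After burn-in, the marginal of x should approximate N(0, 1).
print(samples[10_000:].mean(), samples[10_000:].var())
```

The discretization incurs an O(h) bias, so the empirical moments only approach the target's as `h` shrinks; the friction parameter `gamma` controls the trade-off between the accelerated (low-friction) and overdamped (high-friction) regimes.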
Stochastic gradient Markov Chain Monte Carlo algorithms are popular samplers for approximate inferen...
A new methodology is presented for the construction of control variates to reduce the variance of ad...
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally e...
In this paper, we explore a general Aggregated Gradient Langevin Dynamics framework (AGLD) for the M...
This thesis focuses on the analysis and design of Markov chain Monte Carlo (MCMC) methods used in hi...
We study the connections between optimization and sampling. In one direction, we study sampling algo...
We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC)...
We introduce new Gaussian proposals to improve the efficiency of the standard Hastings-Metropolis al...
Nesterov's Accelerated Gradient (NAG) for optimization has better performance than its continuous ti...
Sampling from probability distributions is a problem of significant importance in Statistics and Mac...
Langevin dynamics-based sampling algorithms are arguably among the most widely used Markov Chain Mont...
We consider a family of unadjusted HMC samplers, which includes standard position HMC samplers and d...
Stochastic gradient MCMC methods, such as stochastic gradient Langevin dynamics (SGLD), employ fast ...