We consider a family of unadjusted HMC samplers, which includes standard position HMC samplers and discretizations of the underdamped Langevin process. A detailed analysis and optimization of the parameters is conducted in the Gaussian case. We then consider a stochastic gradient version of these samplers, for which dimension-free convergence rates are established for smooth log-concave targets, gathering previous results on both processes in a unified framework. Both results indicate that partial refreshments of the velocity are more efficient than standard full refreshments.
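For reference, here is a minimal sketch of one transition of such a sampler (the leapfrog discretization, the function names, and the parameter values are illustrative assumptions, not the exact scheme analyzed in the paper). The refreshment parameter eta interpolates between a full refreshment (eta = 0, recovering standard position HMC) and partial refreshments (0 < eta < 1); no Metropolis correction is applied, which is what makes the chain unadjusted.

    import numpy as np

    def unadjusted_hmc_step(x, v, grad_U, h, n_leapfrog, eta, rng):
        # Partial velocity refreshment: damp the previous velocity and inject
        # fresh Gaussian noise so the N(0, I) velocity marginal is preserved.
        # eta = 0 gives the standard full refreshment of position HMC.
        v = eta * v + np.sqrt(1.0 - eta**2) * rng.standard_normal(x.shape)
        # Leapfrog integration of the Hamiltonian flow; there is no
        # accept/reject step, so the discretization bias is uncorrected.
        v = v - 0.5 * h * grad_U(x)
        for _ in range(n_leapfrog - 1):
            x = x + h * v
            v = v - h * grad_U(x)
        x = x + h * v
        v = v - 0.5 * h * grad_U(x)
        return x, v

    # Example on a standard Gaussian target, U(x) = |x|^2 / 2:
    rng = np.random.default_rng(0)
    x, v = np.ones(10), rng.standard_normal(10)
    for _ in range(1000):
        x, v = unadjusted_hmc_step(x, v, lambda x: x, h=0.1,
                                   n_leapfrog=5, eta=0.7, rng=rng)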
We study the connections between optimization and sampling. In one direction, we study sampling algo...
We implement the simple method to accelerate the convergence speed to the steady state and enhance t...
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally e...
Sampling from probability distributions is a problem of significant importance in Statistics and Mac...
We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as optimization on the space of...
The stochastic gradient Langevin Dynamics is one of the most fundamental algorithms to solve samplin...
Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used ...
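For concreteness, a minimal sketch of this recipe, assuming the unadjusted Langevin algorithm with step size h on a potential U (names and values are illustrative): each update is a gradient descent step on U plus Gaussian noise of variance 2h, and the stochastic gradient variant mentioned above simply replaces grad_U with an unbiased minibatch estimate.

    import numpy as np

    def langevin_step(x, grad_U, h, rng):
        # Gradient descent step augmented with additive Gaussian noise:
        # x_{k+1} = x_k - h * grad U(x_k) + sqrt(2h) * xi,  xi ~ N(0, I).
        return x - h * grad_U(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)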
We obtain quantitative bounds on the mixing properties of the Hamiltonian Monte Carlo (HMC) algorith...
In this paper, we explore a general Aggregated Gradient Langevin Dynamics framework (AGLD) for the M...
This thesis focuses on the analysis and design of Markov chain Monte Carlo (MCMC) methods used in hi...
We extend the Langevin Monte Carlo (LMC) algorithm to compactly supported meas...
Stochastic gradient Markov Chain Monte Carlo algorithms are popular samplers for approximate inferen...
Recent studies on diffusion-based sampling methods have shown that Langevin Mo...
A well-known first-order method for sampling from log-concave probability distributions is the Unadj...