Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for sampling problems and for the non-convex optimization problems that arise in many machine learning applications. In particular, its variance-reduced versions have recently gained considerable attention. In this paper, we study two variants of this kind, namely, Stochastic Variance Reduced Gradient Langevin Dynamics (SVRG-LD) and Stochastic Recursive Gradient Langevin Dynamics (SARAH-LD). We prove their convergence to the objective distribution in terms of KL divergence under the sole assumptions of smoothness and a log-Sobolev inequality, which are weaker conditions than those used in prior works on these algorithms. With the batch size and the inner loop length set to $\sqrt{n}$, th...
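For orientation, the following is a minimal sketch of the SVRG-LD update described above, assuming the target density factorizes as $\pi(x) \propto \exp(-F(x))$ with $F(x) = \frac{1}{n}\sum_i f_i(x)$. The interface (the names `svrg_ld` and `grad_fi`, and the parameters `eta` and `n_outer`) is hypothetical and not taken from the paper; only the estimator and the choice of batch size and inner-loop length equal to $\sqrt{n}$ follow the abstract.

```python
import numpy as np

def svrg_ld(grad_fi, n, x0, eta, n_outer, rng=None):
    """Sketch of SVRG Langevin dynamics for sampling from
    pi(x) ~ exp(-F(x)), where F(x) = (1/n) * sum_i f_i(x).

    grad_fi(x, idx): hypothetical helper returning the average
    gradient of {f_i : i in idx} at x.
    """
    rng = rng or np.random.default_rng()
    b = m = max(1, int(np.sqrt(n)))       # batch size = inner length = sqrt(n)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        w = x.copy()                      # snapshot (reference) point
        mu = grad_fi(w, np.arange(n))     # full gradient at the snapshot
        for _ in range(m):
            idx = rng.choice(n, size=b, replace=False)
            # variance-reduced estimate, unbiased for grad F(x)
            v = grad_fi(x, idx) - grad_fi(w, idx) + mu
            # Langevin step: gradient move plus Gaussian noise
            x = x - eta * v + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
    return x
```

SARAH-LD differs only in the gradient estimator: instead of anchoring to a fixed snapshot, it updates recursively, $v_k = \frac{1}{b}\sum_{i \in B_k}(\nabla f_i(x_k) - \nabla f_i(x_{k-1})) + v_{k-1}$, while the Langevin step itself is unchanged.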
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally e...
Stochastic gradient optimization is a class of widely used algorithms for training machine learning ...
We study the connections between optimization and sampling. In one direction, we study sampling algo...
We establish a sharp uniform-in-time error estimate for the Stochastic Gradient Langevin Dynamics (S...
This thesis focuses on adaptive Stochastic Gradient Langevin Dynamics (SGLD) algorithms to solve opt...
We implement a simple method to accelerate convergence to the steady state and enhance t...
We introduce a novel and efficient algorithm called the stochastic approximate gradient descent (SAG...
Stochastic Gradient Langevin Dynamics (SGLD) has emerged as a key MCMC algorit...
Sampling from probability distributions is a problem of significant importance in Statistics and Mac...
In this paper, we propose a novel reinforcement-learning algorithm consisting ...
Year after year, the amount of data that we continuously generate is increasing. When this situatio...
We establish generalization error bounds for stochastic gradient Langevin dynamics (SGLD) with const...
Modern machine learning models are complex, hierarchical, and large-scale and are trained using non-...
This thesis focuses on the problem of sampling in high dimension and is based on the unadjusted Lang...