Stochastic Gradient Langevin Dynamics (SGLD) has emerged as a key MCMC algorithm for Bayesian learning from large-scale datasets. While SGLD with decreasing step sizes converges weakly to the posterior distribution, the algorithm is often used with a constant step size in practice and has demonstrated success in machine learning tasks. The current practice is to set the step size inversely proportional to N, where N is the number of training samples. We show that, as N becomes large, the SGLD algorithm has an invariant probability measure which significantly departs from the target posterior and behaves like Stochastic Gradient Descent (SGD). This difference is inherently due to the high variance of the stochastic gradients…
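The behaviour described in this abstract can be read off directly from the form of the SGLD update. Below is a minimal, hypothetical sketch (not the paper's code) of constant-step-size SGLD with the step size h = gamma/N discussed above; the helper names grad_log_prior and grad_log_lik and the parameters gamma and batch_size are illustrative assumptions, not taken from the source.

```python
import numpy as np

def sgld(theta0, data, grad_log_prior, grad_log_lik,
         gamma=1.0, batch_size=32, n_iters=10_000, rng=None):
    """Constant-step-size SGLD sketch (illustrative, not the paper's code).

    grad_log_prior(theta)       -> gradient of the log prior at theta
    grad_log_lik(theta, batch)  -> sum of per-sample log-likelihood
                                   gradients over the minibatch
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(data)
    h = gamma / N  # step size inversely proportional to N, as in common practice
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_iters):
        idx = rng.choice(N, size=batch_size, replace=False)
        # Unbiased estimate of the gradient of the log posterior:
        # prior term plus the minibatch likelihood term rescaled by N / batch_size.
        grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, data[idx])
        # Langevin step: drift plus injected Gaussian noise of std sqrt(h).
        theta = theta + 0.5 * h * grad + np.sqrt(h) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)
```

With h = gamma/N, the injected Gaussian noise has standard deviation sqrt(gamma/N), which vanishes as N grows, whereas the noise contributed by the rescaled minibatch gradient in the drift term stays of constant order. For large N the iterates are therefore dominated by gradient noise rather than Langevin noise, which is one way to see why the chain behaves like SGD rather than sampling the posterior.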
We propose an adaptively weighted stochastic gradient Langevin dynamics algorithm (SGLD), so-called ...
We introduce a novel and efficient algorithm called the stochastic approximate gradient descent (SAG...
Year after year, the amount of data we continuously generate is increasing. When this situatio...
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally e...
One way to avoid overfitting in machine learning is to use mod...
It is well known that Markov chain Monte Carlo (MCMC) methods scale poorly with dataset size. A popu...
Despite the powerful advantages of Bayesian inference such as quantifying uncertainty, accurate av...
In this paper we propose a new framework for learning from large scale datasets based on iterative l...