Stochastic gradient MCMC methods, such as stochastic gradient Langevin dynamics (SGLD), employ fast but noisy gradient estimates to enable large-scale posterior sampling. Although we can easily extend SGLD to distributed settings, it suffers from two issues when applied to federated non-IID data. First, the variance of these estimates increases significantly. Second, delaying communication causes the Markov chains to diverge from the true posterior even for very simple models. To alleviate both these problems, we propose conducive gradients, a simple mechanism that combines local likelihood approximations to correct gradient updates. Notably, conducive gradients are easy to compute, and since we only calculate the approximations once, they ...
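For context, a minimal sketch of the vanilla SGLD update that the conducive-gradient mechanism corrects may help. Everything below is illustrative, not code from the paper: the function and argument names (sgld_step, grad_log_prior, grad_log_lik) are hypothetical, and the conducive-gradient correction itself is not shown; in the paper's scheme, a correction term built from the once-computed local likelihood approximations would be added to the stochastic gradient estimate.

    import numpy as np

    def sgld_step(theta, grad_log_prior, grad_log_lik, data, step_size,
                  batch_size, rng):
        """One vanilla SGLD update (illustrative sketch).

        The minibatch gradient is rescaled by N/n so that it remains an
        unbiased estimate of the full-data log-likelihood gradient.
        """
        n_total = len(data)
        idx = rng.choice(n_total, size=batch_size, replace=False)
        # Unbiased stochastic estimate of the gradient of log p(theta | data).
        grad_est = grad_log_prior(theta) + (n_total / batch_size) * sum(
            grad_log_lik(theta, data[i]) for i in idx
        )
        # Langevin drift plus injected Gaussian noise with variance matched
        # to the step size; no Metropolis-Hastings correction is applied.
        noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
        return theta + 0.5 * step_size * grad_est + noise

A distributed variant would run this update on each client against its local data shard and periodically exchange chain states; the conducive-gradient correction targets exactly the bias that such delayed communication introduces under non-IID data.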
One way to avoid overfitting in machine learning is to use mod...
In this paper, we explore a general Aggregated Gradient Langevin Dynamics framework (AGLD) for the M...
Despite the powerful advantages of Bayesian inference such as quantifying uncertainty, accurate av...
Stochastic Gradient Langevin Dynamics (SGLD) has emerged as a key MCMC algorit...
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally e...
We propose an adaptively weighted stochastic gradient Langevin dynamics (SGLD) algorithm, so-called ...
Year after year, the amount of data that we continuously generate is increasing. When this situatio...