Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC by constructing an unbiased estimate of the gradient of the log-posterior from a small, uniformly weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit high variance, degrading sampler performance. The problem of variance control has traditionally been addressed by constructing a better stochastic gradient estimator, often using control variates. We propose to use a discrete, nonuniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the a...
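The nonuniform-subsampling idea above can be sketched in a few lines. The following is a minimal, illustrative SGLD sampler for the posterior mean of a Gaussian with known unit variance and a standard normal prior; the function name `sgld_nonuniform`, the toy model, and the choice of sampling weights are all assumptions for illustration, not the authors' actual algorithm. The key point is the reweighting by `1/(n * probs[i])`, which keeps the stochastic gradient unbiased under any strictly positive subsampling distribution.

```python
import numpy as np

def sgld_nonuniform(data, probs, step=1e-3, n_iters=2000, batch=32, rng=None):
    """Toy SGLD for the mean of N(theta, 1) data under a N(0, 1) prior,
    subsampling data points from the nonuniform distribution `probs`.

    Per-point gradients are reweighted by 1/(n * probs[i]) so that the
    minibatch gradient remains an unbiased estimate of the full-data
    gradient of the log-likelihood.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(data)
    theta = 0.0
    samples = []
    for _ in range(n_iters):
        # Draw a minibatch with nonuniform inclusion probabilities.
        idx = rng.choice(n, size=batch, p=probs)
        # Gradient of log N(x_i | theta, 1) w.r.t. theta is (x_i - theta);
        # importance weights 1/(n * probs) correct for the nonuniform draw.
        grad_lik = n * np.mean((data[idx] - theta) / (n * probs[idx]))
        grad_prior = -theta  # gradient of log N(theta | 0, 1)
        # Langevin update: drift plus injected Gaussian noise.
        theta = (theta + 0.5 * step * (grad_prior + grad_lik)
                 + np.sqrt(step) * rng.normal())
        samples.append(theta)
    return np.array(samples)
```

In practice the subsampling distribution would be chosen to downweight low-influence points, e.g. proportional to a bound on each point's gradient norm; the uniform choice `probs = np.full(n, 1/n)` recovers standard SGLD.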
We propose an adaptively weighted stochastic gradient Langevin dynamics algorithm (SGLD), so-called ...
Stochastic gradient Markov chain Monte Carlo (SGMCMC) is a popular class of algorithms for scalable ...
The steplength selection is a crucial issue for the effectiveness of the stochastic gradient methods...
It is well known that Markov chain Monte Carlo (MCMC) methods scale poorly with dataset size. A popu...
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally e...
Year after year, the amount of data we continuously generate keeps increasing. When this situatio...
Stochastic gradient MCMC methods, such as stochastic gradient Langevin dynamics (SGLD), employ fast ...
Stochastic Gradient Langevin Dynamics (SGLD) has emerged as a key MCMC algorit...
Despite the powerful advantages of Bayesian inference such as quantifying uncertainty, accurate av...
Stochastic gradient optimization is a class of widely used algorithms for training machine learning ...
We introduce a novel and efficient algorithm called the stochastic approximate gradient descent (SAG...