Stochastic variational inference is a promising method for fitting large-scale probabilistic models with hidden structure. Unlike traditional stochastic learning, stochastic variational inference uses the natural gradient, which is especially efficient for optimizing over probability distributions. A key practical issue in stochastic variational inference is setting an appropriate learning rate. Inspired by a recent approach to setting learning rates for stochastic learning (Schaul et al., 2012), we present a strategy for setting the learning rate in stochastic variational inference and demonstrate that it is effective for learning large-scale, complex models.
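The abstract gives only the high-level idea and does not state the concrete update rule. As an illustrative sketch only (not the paper's algorithm), the following implements a generic moving-average adaptive step size in the spirit of Schaul et al. (2012): the step size is the squared norm of an averaged gradient divided by an averaged squared gradient norm, so it shrinks automatically once gradient noise dominates the signal. All names (`adaptive_sgd`, `noisy_grad`) and constants here are hypothetical choices for the demonstration.

```python
import numpy as np

def adaptive_sgd(grad_fn, x0, steps=2000, tau0=10.0, seed=0):
    """Toy adaptive-learning-rate stochastic gradient descent.

    Maintains exponential moving averages of the stochastic gradient
    (g_bar) and of its squared norm (h_bar). The step size
    rho = ||g_bar||^2 / h_bar lies in (0, 1] and decays as the
    gradient signal-to-noise ratio drops near an optimum.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    g_bar = grad_fn(x, rng)                      # initialize from one sample
    h_bar = float(g_bar @ g_bar) + 1e-12
    tau = tau0                                   # memory window of the averages
    for _ in range(steps):
        g = grad_fn(x, rng)
        alpha = 1.0 / tau
        g_bar = (1 - alpha) * g_bar + alpha * g
        h_bar = (1 - alpha) * h_bar + alpha * float(g @ g)
        rho = float(g_bar @ g_bar) / h_bar       # adaptive step size
        tau = tau * (1 - rho) + 1.0              # lengthen memory as noise grows
        x = x - rho * g
    return x

# Usage: noisy gradients of the quadratic f(x) = 0.5 * ||x - 3||^2,
# whose minimizer is x = 3.
def noisy_grad(x, rng):
    return (x - 3.0) + rng.normal(scale=1.0, size=x.shape)

x_hat = adaptive_sgd(noisy_grad, np.array([0.0]))
```

By convexity of the squared norm, `h_bar` always upper-bounds `||g_bar||**2`, so `rho` never exceeds 1; early on, when the averaged gradient is large relative to its noise, `rho` is close to 1 and progress is fast.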