Bayesian inference tells us how to incorporate information from the data into the model parameters. In practice, this can be carried out using Markov Chain Monte Carlo (MCMC) methods, which draw approximate samples from the posterior distribution, but applying them to complex models such as neural networks remains challenging. The most commonly used methods in these settings are Stochastic Gradient Markov Chain Monte Carlo (SGMCMC) methods based on mini-batches. This thesis presents improvements to this family of algorithms. We focus on the specific algorithm of Stochastic Gradient Riemannian Langevin Dynamics (SGRLD). Its core idea is to perform sampling on a suitably defined Riemannian manifold characterized by a Riemannian metric, which allo...
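To make the family of updates referred to above concrete, the sketch below shows a single mini-batch Langevin step with an optional diagonal preconditioner standing in for the Riemannian metric. It is a minimal illustration, not the thesis's algorithm: the function names (grad_log_prior, grad_log_lik) are hypothetical, and the curvature-correction term of full SGRLD is omitted, as is common in diagonal preconditioned variants.

```python
import numpy as np

def sgld_step(theta, minibatch, grad_log_prior, grad_log_lik, N,
              step_size, precond=None, rng=np.random.default_rng()):
    """One (preconditioned) stochastic gradient Langevin dynamics update.

    grad_log_prior(theta)      -> gradient of the log-prior at theta
    grad_log_lik(theta, batch) -> average log-likelihood gradient over the mini-batch
    N                          -> total number of data points (rescales the mini-batch gradient)
    precond                    -> optional diagonal of G(theta)^{-1}; identity metric if None
    """
    # Unbiased mini-batch estimate of the full-data log-posterior gradient.
    grad = grad_log_prior(theta) + N * grad_log_lik(theta, minibatch)
    if precond is None:
        precond = np.ones_like(theta)
    # Injected Gaussian noise with covariance step_size * G(theta)^{-1}.
    noise = np.sqrt(step_size * precond) * rng.standard_normal(theta.shape)
    # Euler-Maruyama discretisation of the (preconditioned) Langevin SDE.
    return theta + 0.5 * step_size * precond * grad + noise
```

With precond=None this reduces to plain SGLD; supplying a data-dependent diagonal metric gives a simple preconditioned variant of the Riemannian scheme the abstract describes.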