This paper addresses the problem of iterative optimization of the Kullback-Leibler (KL) divergence on discrete (finite) probability spaces. Traditionally, the problem is formulated in the constrained optimization framework and is tackled by gradient-like methods. Here, it is shown that performing the KL optimization in a Riemannian space equipped with the Fisher metric provides three major advantages over the standard methods: 1. The Fisher metric turns the original constrained problem into an unconstrained one; 2. The optimization using the Fisher metric behaves asymptotically as a Newton method and shows very fast convergence near the optimum; 3. The Fisher metric is an intrinsic property of the space of probability di...
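A minimal sketch of the idea behind points 1 and 2, assuming a softmax (logit) parameterisation of the finite simplex and a fixed target distribution q; the function names (softmax, kl, natural_gradient_kl) and the closed-form natural-gradient expression log p - log q for the categorical family are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np


def softmax(theta):
    """Map unconstrained logits to a point on the probability simplex."""
    z = np.exp(theta - np.max(theta))
    return z / z.sum()


def kl(p, q):
    """KL divergence D(p || q) for strictly positive finite distributions."""
    return float(np.sum(p * (np.log(p) - np.log(q))))


def natural_gradient_kl(q, theta0, step=1.0, iters=50, tol=1e-12):
    """Minimise KL(p_theta || q) by natural-gradient descent in logit space.

    The softmax parameterisation absorbs the simplex constraint, so the
    problem is unconstrained (advantage 1 above).  For the categorical
    family in logit coordinates, preconditioning the ordinary gradient by
    the Fisher metric collapses the update to
        theta <- theta - step * (log p - log q)
    up to an additive constant, which softmax ignores.  With step = 1 the
    optimum is reached in essentially one step, mirroring the Newton-like
    behaviour near the optimum (advantage 2).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        p = softmax(theta)
        nat_grad = np.log(p) - np.log(q)  # Fisher-preconditioned gradient
        theta = theta - step * nat_grad
        if kl(softmax(theta), q) < tol:
            break
    return softmax(theta)


# Hypothetical usage on a 4-point space: one full natural-gradient step
# recovers the target distribution q up to floating-point error.
q = np.array([0.1, 0.2, 0.3, 0.4])
print(natural_gradient_kl(q, theta0=np.zeros(4)))
```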
We propose a new variational inference method based on a proximal framework that uses the Kullback-L...
This article proposes the exploitation of the Kullback–Leibler divergence to characterise the uncert...
In a variety of applications it is important to extract information from a probability measure $\mu$...
Motivated by the computation of the non-parametric maximum likelihood estimator (NPMLE) and the Baye...
In this paper we study algorithms to find a Gaussian approximation to a target measure defined on a ...
An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired on...
This paper considers the distributionally robust chance constrained Markov decision process with ran...
The study of robustness has received much attention due to its inevitability in data-driven settings...
Evolutionary algorithms perform optimization using a population of sample solution points. An intere...
We consider a Kullback-Leibler-based algorithm for the stochastic multi-armed ...
The Kullback-Leibler divergence of gene distributions between successive generations of the Extende...
In this paper a novel generalised fully probabilistic controller design for the minimisation of the ...
The likelihood function is a fundamental component in Bayesian statistics. However, evaluating the l...
Convex optimization problems involving information measures have been extensiv...