The Jeffreys divergence is a renowned arithmetic symmetrization of the oriented Kullback–Leibler divergence, broadly used in the information sciences. Since the Jeffreys divergence between Gaussian mixture models is not available in closed form, various techniques, each with advantages and disadvantages, have been proposed in the literature to estimate, approximate, or lower- and upper-bound this divergence. In this paper, we propose a simple yet fast heuristic to approximate the Jeffreys divergence between two univariate Gaussian mixtures with an arbitrary number of components. Our heuristic relies on converting the mixtures into pairs of dually parameterized probability densities belonging to an exponential-polynomial family. To measure with a close...
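The abstract above notes that the Jeffreys divergence between Gaussian mixtures admits no closed form, so it is commonly estimated. As an illustrative baseline (a plain Monte Carlo estimator, not the paper's heuristic), the symmetrization J(p, q) = KL(p : q) + KL(q : p) can be sketched for two univariate mixtures as follows; all function names and parameter values are assumptions for the example:

```python
import math
import random

def gmm_pdf(x, weights, means, stds):
    """Density of a univariate Gaussian mixture at point x."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in zip(weights, means, stds)
    )

def gmm_sample(weights, means, stds):
    """Draw one sample: pick a component by weight, then sample its Gaussian."""
    i = random.choices(range(len(weights)), weights=weights)[0]
    return random.gauss(means[i], stds[i])

def kl_mc(p, q, n=100_000):
    """Monte Carlo estimate of KL(p : q); p, q are (weights, means, stds) triples."""
    total = 0.0
    for _ in range(n):
        x = gmm_sample(*p)
        total += math.log(gmm_pdf(x, *p) / gmm_pdf(x, *q))
    return total / n

def jeffreys_mc(p, q, n=100_000):
    """Jeffreys divergence J(p, q) = KL(p : q) + KL(q : p), estimated by sampling."""
    return kl_mc(p, q, n) + kl_mc(q, p, n)

# Two illustrative two-component mixtures.
p = ([0.5, 0.5], [-1.0, 1.0], [0.5, 0.5])
q = ([0.3, 0.7], [0.0, 2.0], [1.0, 0.5])
print(jeffreys_mc(p, q))
```

The estimator is unbiased but its cost grows with the sample size `n`, which is precisely the motivation for the fast deterministic approximations the paper studies.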
The study of mixture models constitutes a large domain of research in statistics. In the first part ...
Estimators derived from the expectation‐maximization (EM) algorithm are not ro...
Gaussian mixture models are a widespread tool for modeling various and complex probabilit...
We provide an algorithm for properly learning mixtures of two single-dimensional Gaussians without ...
Mixtures of Gaussian (or normal) distributions arise in a variety of application areas. Many techniq...
We consider the problem of identifying the parameters of an unknown mixture of two arbitrary d-dime...
Mixtures of Gaussian (or normal) distributions arise in a variety of application areas. Many heurist...
Given data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately estimate...
Information-theoretic measures, such as the entropy, the cross-entropy and the Kullback–Leibler dive...
The Kullback-Leibler divergence is a widespread dissimilarity measure between...
There are many applications that benefit from computing the exact divergence between 2 discrete prob...