We compare EM, SEM, and MCMC algorithms for estimating the parameters of the Gaussian mixture model. We focus on estimation problems that arise when the likelihood function has a sharp ridge or saddle points, using both synthetic and empirical data with those features. The comparison includes Bayesian approaches with different prior specifications and various procedures for dealing with label switching. Although the solutions provided by these stochastic algorithms are more often degenerate, we conclude that SEM and MCMC may converge faster and improve the ability to locate the global maximum of the likelihood function.
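As a hedged illustration of the algorithmic contrast this abstract draws (not the paper's code; the function names, toy data, and two-component setup below are assumptions), the sketch runs EM-style updates with soft responsibilities against SEM-style updates that replace them with a sampled hard assignment. The injected sampling noise is what can help SEM move off ridges and saddle points; empty components are not handled here.

```python
# Minimal 1-D Gaussian mixture: one EM iteration vs. one SEM iteration.
import numpy as np

def e_step(x, w, mu, var):
    """Responsibilities r[n, k] = P(component k | x_n)."""
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return dens / dens.sum(axis=1, keepdims=True)

def m_step(x, r):
    """Closed-form weighted ML updates for weights, means, variances."""
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

def em_iter(x, w, mu, var):
    return m_step(x, e_step(x, w, mu, var))

def sem_iter(x, w, mu, var, rng):
    # SEM: sample a hard assignment z_n ~ r[n, :] instead of averaging over
    # the soft responsibilities; components left empty are not handled.
    r = e_step(x, w, mu, var)
    z = np.array([rng.choice(len(w), p=p) for p in r])
    return m_step(x, np.eye(len(w))[z])

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 0.5, 200)])
w, mu, var = np.ones(2) / 2, np.array([-1.0, 1.0]), np.ones(2)
for _ in range(50):
    w, mu, var = sem_iter(x, w, mu, var, rng)  # or em_iter(x, w, mu, var)
print(w, mu, var)
```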
A Bayesian SOM (BSOM) [8] is proposed and applied to the unsupervised learning of Gaussian mixture ...
Normal mixture models provide the most popular framework for modelling heterogeneity in a...
The mixture model likelihood function is invariant with respect to permutation of the components of ...
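As a minimal illustration of this permutation invariance and the label-switching problem it causes (the ordering constraint and all names below are assumptions, not taken from the cited work), a common post-processing step sorts components by mean so that every relabeling maps to one canonical representative:

```python
# Two parameterizations differing only by a label swap have identical
# likelihood; sorting by mean picks a single canonical labeling.
import numpy as np

def relabel_by_mean(w, mu, var):
    """Apply the identifiability constraint mu[0] <= mu[1] <= ... by sorting."""
    order = np.argsort(mu)
    return w[order], mu[order], var[order]

w, mu, var = np.array([0.7, 0.3]), np.array([3.0, -2.0]), np.array([0.5, 1.0])
print(relabel_by_mean(w, mu, var))  # components reordered as (-2.0, 3.0)
```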
In these notes, we present and review different methods based on maximum-likelihood estimation for lea...
We develop the mathematical connection between the "Expectation-Maximization" (EM) algori...
Gaussian mixture models (GMM), commonly used in pattern recognition and machine learning, provide a ...
The EM algorithm for Gaussian mixture models often gets caught in local maxima of the like...
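A standard mitigation for these local maxima (an illustration under assumed tooling, not the remedy this abstract proposes) is to restart EM from several random initializations and keep the best-likelihood fit; scikit-learn's GaussianMixture exposes this via its n_init parameter:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy two-component data, shaped (n_samples, n_features) as fit() expects.
X = np.concatenate([rng.normal(-2, 1, (200, 1)), rng.normal(3, 0.5, (200, 1))])

# n_init=10 runs EM from ten initializations and keeps the best fit.
gmm = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(X)
print(gmm.means_.ravel(), gmm.weights_)
```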
The mixture of Gaussian processes (MGP) is a powerful statistical learning model for regression and ...
This article considers a new approximation to the log-likelihood surface in mixture models. This app...
The EM algorithm is a familiar tool for maximum-likelihood parameter estimation in Gaussian mixtu...
Efficient probability density function estimation is of primary interest in statistics. A popular ap...
It is well-known that the EM algorithm generally converges to a local maximum likelihood estimate. H...
Learning with variational inference can broadly be viewed as first estimating the class assignment va...