The EM algorithm is one of the most popular statistical learning algorithms. Unfortunately, it is a batch learning method. For large data sets and real-time systems, we need to develop on-line methods. In this thesis, we present a comprehensive study of on-line EM algorithms. We use Bayesian theory to propose a new on-line EM algorithm for multinomial mixtures. Based on this theory, we show that there is a direct connection between the setting of Bayes priors and the so-called learning rates of stochastic approximation algorithms, such as on-line EM and quasi-Bayes. Finally, we present extensive simulations, comparisons and parameter sensitivity studies on both synthetic data and documents with text, images and music.
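The abstract above describes on-line EM for multinomial mixtures, where the choice of Bayes priors plays the role of the learning rate in a stochastic approximation scheme. As a rough illustration only (not the algorithm of any work listed here), one such update can be sketched as follows; the function name, the step-size schedule gamma_t = t**(-kappa), and the smoothing constants are all illustrative assumptions:

```python
import numpy as np

def online_em_step(counts, pi, theta, t, kappa=0.6):
    """One stochastic (on-line) EM update for a K-component multinomial mixture.

    counts : (V,) count vector for a single observation (e.g. one document)
    pi     : (K,) current mixture weights
    theta  : (K, V) current per-component multinomial parameters
    t      : 1-based step index; step size gamma_t = t**(-kappa)
    """
    # E-step: posterior responsibilities of the K components for this
    # single observation (computed in log space for stability)
    log_r = np.log(pi) + counts @ np.log(theta + 1e-12).T   # shape (K,)
    r = np.exp(log_r - log_r.max())
    r /= r.sum()

    # Stochastic M-step: blend the old parameters with this observation's
    # sufficient statistics at a decaying rate gamma_t
    gamma = t ** (-kappa)
    pi = (1.0 - gamma) * pi + gamma * r
    theta = (1.0 - gamma) * theta + gamma * (
        r[:, None] * counts[None, :] / max(counts.sum(), 1.0)
    )
    theta = np.clip(theta, 1e-12, None)       # guard against zero rows
    theta /= theta.sum(axis=1, keepdims=True) # renormalize each component
    return pi, theta
```

In the Bayesian reading sketched in the abstract, a slower-decaying gamma_t corresponds to a weaker prior; kappa in (0.5, 1] is the usual stochastic-approximation range.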
We present the topics and theory of Mixture Models in a context of maximum likelihood and Bayesian i...
Machine learning has reached a point where many probabilistic methods can be understood as variation...
Modern statistical and machine learning settings often involve high data volum...
Bayesian methods are often optimal, yet increasing pressure for fast computations, especially with s...
A Bayesian SOM (BSOM) [8] is proposed and applied to the unsupervised learning of Gaussian mixture ...
The EM algorithm is used for many applications including Boltzmann machine, stochastic Perceptron an...
We show that there are strong relationships between approaches to optimization and learning based on ...
In this contribution, new online EM algorithms are proposed to perform infer...
Computational constraints often limit the practical applicability of coherent Bayes solutions to uns...
The only single-source--now completely updated and revised--to offer a unified treatment of the theo...
Inference is a key component in learning probabilistic models from partially observable data. When l...
The EM algorithm is a popular method for maximum likelihood estimation. Its simplicity in many appli...
We compare three approaches to learning numerical parameters of Bayesian networks from continuous da...