We present a variational Expectation-Maximization algorithm to learn probabilistic mixture models. The algorithm is similar to Kohonen's Self-Organizing Map algorithm and is not limited to Gaussian mixtures. We maximize the variational free energy, which sums the data log-likelihood and the Kullback-Leibler divergence between a normalized neighborhood function and the posterior distribution on the components given the data. We illustrate the algorithm with an application to word clustering.
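The idea sketched in this abstract can be illustrated with a minimal toy implementation: the E-step replaces the exact posterior with a normalized neighborhood function centered on each point's best-matching component, and the M-step performs the usual mixture updates. Everything below (the 1-D grid, the toy data, the annealing schedule) is an illustrative assumption, not the paper's actual setup or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: two well-separated clusters (illustrative only).
X = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(2, 0.3, 50)])[:, None]

K = 5                    # mixture components, arranged on a 1-D grid
grid = np.arange(K)      # latent grid coordinate of each component
mu = rng.normal(0, 1, (K, 1))
sigma2 = np.ones(K)
lam = 1.0                # neighborhood width (shrunk over iterations)

for it in range(50):
    # Constrained E-step: instead of the exact posterior, use a normalized
    # neighborhood function centered on each point's best-matching unit.
    log_p = -0.5 * ((X - mu.T) ** 2 / sigma2 + np.log(2 * np.pi * sigma2))
    winners = np.argmax(log_p, axis=1)
    q = np.exp(-0.5 * (grid[None, :] - winners[:, None]) ** 2 / lam)
    q /= q.sum(axis=1, keepdims=True)   # normalize per data point

    # M-step: standard Gaussian-mixture updates with q as responsibilities.
    Nk = q.sum(axis=0)
    mu = (q.T @ X) / Nk[:, None]
    sigma2 = (q * (X - mu.T) ** 2).sum(axis=0) / Nk

    lam = max(0.1, lam * 0.9)           # anneal the neighborhood width

print(mu.ravel().round(1))
```

Because neighboring components on the grid receive overlapping responsibilities, the learned means become topologically ordered along the grid, which is the SOM-like behavior the abstract describes.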
This paper presents methods to improve the probability density estimation in hidden Markov models fo...
In Chapter 1 we give a general introduction and motivate the need for clustering and dimension reduc...
We propose a new adaptive sampling method that uses Self-Organizing Maps (SOM). In SOM, densely samp...
We present an expectation-maximization (EM) algorithm that yields topology preserving m...
A self-organizing mixture network (SOMN) is derived for learning arbitrary density functions. The ne...
A Bayesian self-organising map (BSOM) is proposed for learning mixtures of Gaussian distributions. I...
A completely unsupervised mixture distribution network, namely the self-organising mixture network, ...
The self-organizing mixture network (SOMN) is a learning algorithm for mixture densities, derived fr...
In this paper, we propose an extended self-organising learning scheme, in which both distance measur...
Nowadays, we observe a rapid growth of complex data in all formats due to the technological developm...
The Expectation-Maximization (EM) algorithm is a popular and convenient tool for the estimation of G...
The Expectation Maximization (EM) algorithm is widely used for learning finite mixture mode...
This paper represents a preliminary (pre-reviewing) version of a sublinear variational algorithm for...