Variable selection is fundamental to high-dimensional statistical modeling and is particularly challenging in unsupervised settings, including mixture models. We propose a regularised maximum-likelihood inference of the Mixture of Experts model which is able to deal with potentially correlated features and encourages sparse models in potentially high-dimensional scenarios. We develop a hybrid Expectation-Majorization-Maximization (EM/MM) algorithm for model fitting. Unlike state-of-the-art regularised ML inference [1, 2], the proposed modeling does not require an approximation of the regularisation. The proposed algorithm automatically obtains sparse solutions without thresholding, and includes coordinate descent ...
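For illustration, here is a minimal sketch of the soft-thresholding coordinate-descent update that penalised weighted least-squares M-steps of this kind typically rely on; the function names and the use of posterior responsibilities as observation weights are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: the closed-form solution of the
    scalar lasso problem; it sets small coefficients exactly to zero."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, w, lam, n_iter=100):
    """One illustrative penalised weighted least-squares solve, as it
    could appear inside an M-step.  X: (n, p) design, y: (n,) response,
    w: (n,) responsibilities acting as observation weights,
    lam: l1 penalty strength.  Returns a sparse coefficient vector."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            num = np.sum(w * X[:, j] * r)
            den = np.sum(w * X[:, j] ** 2)
            beta[j] = soft_threshold(num, lam) / den
    return beta
```

Because the soft-thresholding operator returns exact zeros, coordinate updates of this form yield sparse coefficients directly, with no post-hoc thresholding of small values.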
The Expectation-Maximization (EM) algorithm is a widely used iterative ...
Mixtures of von Mises-Fisher distributions can be used to cluster data on the ...
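As a concrete reference point, the following is a sketch of the von Mises-Fisher log-density that such mixtures are built from; the parameterization (mean direction mu, concentration kappa) is standard, but the helper name is illustrative.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled Bessel I, avoids overflow

def vmf_log_density(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit
    sphere in R^d: f(x; mu, kappa) = C_d(kappa) * exp(kappa * mu @ x).
    x and mu must be unit vectors; kappa > 0 is the concentration."""
    d = x.shape[-1]
    # log C_d(kappa), written with the scaled Bessel function
    # ive(v, k) = iv(v, k) * exp(-k) for numerical stability:
    log_c = ((d / 2 - 1) * np.log(kappa)
             - (d / 2) * np.log(2 * np.pi)
             - np.log(ive(d / 2 - 1, kappa)) - kappa)
    return log_c + kappa * (x @ mu)
```

In a mixture, these log-densities feed the E-step responsibilities exactly as Gaussian log-densities would in a Euclidean mixture.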
The well-known mixtures of experts (ME) model is usually trained by the expectation-maximization (EM) algorithm ...
Mixtures of Experts (MoE) are successful models for modeling heterogeneous data ...
Mixture of experts (MoE) models are successful neural-network architectures for ...
We consider Mixture of Experts (MoE) modeling for clustering heterogeneous ...
Mixtures-of-Experts (MoE) are conditional mixture models that have proven effective in modeling ...
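To make the "conditional mixture" structure concrete, here is a minimal sketch of an MoE conditional density with softmax gating and univariate Gaussian regression experts; this particular parameterization is an assumption for illustration, since MoE variants differ in their gating and expert families.

```python
import numpy as np

def moe_conditional_density(x, y, W, betas, sigmas):
    """Conditional density p(y | x) of a Mixture of Experts with
    softmax gating and Gaussian regression experts.
    W: (K, p) gating weights, betas: (K, p) expert regression
    coefficients, sigmas: (K,) expert noise scales."""
    logits = W @ x                       # gating scores, one per expert
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()                 # softmax mixing weights g_k(x)
    means = betas @ x                    # each expert's prediction
    densities = (np.exp(-0.5 * ((y - means) / sigmas) ** 2)
                 / (np.sqrt(2 * np.pi) * sigmas))
    return gates @ densities             # sum_k g_k(x) * N(y; mean_k, sigma_k^2)
```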
Mixtures-of-Experts models and their maximum likelihood estimation (MLE) via the EM algorithm have been ...
This thesis deals with the problem of modeling and estimation of high-dimensional MoE models, toward...
The Expectation–Maximization (EM) algorithm is a popular tool in a wide variety of statistical settings ...
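As a reminder of how the two steps alternate, here is a minimal EM iteration for a two-component univariate Gaussian mixture (a sketch; initialization and convergence checks are simplified):

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Minimal EM for a two-component univariate Gaussian mixture.
    E-step: posterior responsibilities; M-step: weighted updates of
    the mixing proportion, means, and variances."""
    pi = 0.5
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        # (the 1/sqrt(2*pi) constant cancels in the ratio)
        p0 = (1 - pi) * np.exp(-0.5 * (x - mu[0]) ** 2 / var[0]) / np.sqrt(var[0])
        p1 = pi * np.exp(-0.5 * (x - mu[1]) ** 2 / var[1]) / np.sqrt(var[1])
        r = p1 / (p0 + p1)
        # M-step: maximize the expected complete-data log-likelihood
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        var = np.array([np.average((x - mu[0]) ** 2, weights=1 - r),
                        np.average((x - mu[1]) ** 2, weights=r)])
    return pi, mu, var
```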
Normal mixture models are widely used for statistical modeling of data, including cluster analysis. ...
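In practice, such model-based clustering is often a single call to an off-the-shelf implementation; the snippet below uses scikit-learn's GaussianMixture on synthetic data purely as a usage illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two synthetic clusters in 2-D
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

gm = GaussianMixture(n_components=2, covariance_type="full").fit(X)
labels = gm.predict(X)        # hard cluster assignments
probs = gm.predict_proba(X)   # soft responsibilities, as in the E-step
```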
Finite mixture models are being increasingly used to model the distributions of a wide variety of ra...
In this paper we consider both clustering and graphical modeling for given data. The clustering is t...
Finite Gaussian mixture models are widely used in statistics thanks to their great flexibility. However ...