We address the problem of learning the structure of Gaussian graphical models for use in automatic speech recognition, which provides a means of controlling the form of the inverse covariance matrices of such systems. With a particular focus on data sparsity, we impose graphical model structure on a Gaussian mixture system, using a convex optimisation technique to maximise a penalised likelihood expression. Initial experiments on a phone recognition task show a performance improvement over an equivalent full-covariance system.
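As a minimal sketch of the kind of estimation described above (not the paper's own implementation, whose optimiser and penalty form are not detailed here), the snippet below fits a sparse inverse covariance matrix by maximising an L1-penalised Gaussian log-likelihood using scikit-learn's GraphicalLasso; the feature dimension, penalty weight, and synthetic data are illustrative assumptions.

```python
# Sketch only: L1-penalised maximum-likelihood estimation of a sparse
# inverse covariance (precision) matrix, i.e. the edge structure of a
# Gaussian graphical model. Uses scikit-learn's GraphicalLasso as a
# stand-in for the paper's convex optimisation; dimensions and the
# penalty weight are assumed for illustration.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Stand-in for acoustic feature frames assigned to one Gaussian mixture
# component (e.g. 39-dimensional MFCC + delta + delta-delta vectors).
n_frames, n_dims = 500, 39
X = rng.standard_normal((n_frames, n_dims))

# alpha is the L1 penalty weight: a larger value zeroes out more
# off-diagonal precision entries, giving fewer edges in the graph.
model = GraphicalLasso(alpha=0.05, max_iter=200)
model.fit(X)

precision = model.precision_  # sparse inverse covariance estimate
n_edges = np.count_nonzero(np.triu(precision, k=1))
print(f"non-zero off-diagonal pairs (graph edges): {n_edges}")
```

In a GMM-HMM acoustic model this estimation would be applied per mixture component, so the penalty weight trades off between a full-covariance model and a diagonal one as the amount of training data per component varies.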