It is shown here that several techniques for maximum likelihood training of Hidden Markov Models are instances of the EM algorithm and have very similar descriptions when formulated as instances of the Alternating Minimization procedure. The N-Best and Segmental K-Means algorithms are derived under a minimum discrimination information criterion and are shown to result from an additional restriction placed on the minimum discrimination information formulation which yields the Baum-Welch algorithm. This uniform formulation is employed in an exploration of generalization by the EM algorithm. It has been noted that the EM algorithm can introduce artifacts as training progresses. A related phenomenon is that over-training can occur; although the ...
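Several of the abstracts below refer to the Baum-Welch (EM) re-estimation procedure. As a point of reference, here is a minimal sketch of one Baum-Welch iteration for a discrete-observation HMM; the variable names (`A`, `B`, `pi`, `obs`) and the unscaled forward-backward recursion are illustrative assumptions, not taken from any of the cited papers.

```python
# Minimal sketch: one Baum-Welch (EM) iteration for a discrete-observation HMM.
# Assumed conventions: A is (N,N) row-stochastic transitions, B is (N,M)
# row-stochastic emissions, pi is the (N,) initial distribution, obs is a
# length-T sequence of symbol indices. Unscaled recursions, so only suitable
# for short sequences.
import numpy as np

def baum_welch_step(A, B, pi, obs):
    obs = np.asarray(obs)
    N, T = A.shape[0], len(obs)

    # E-step: forward (alpha) and backward (beta) recursions.
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    likelihood = alpha[-1].sum()                 # P(obs | current parameters)
    gamma = alpha * beta / likelihood            # state posteriors P(s_t=i | obs)
    # Pairwise posteriors xi[t,i,j] = P(s_t=i, s_{t+1}=j | obs).
    xi = (alpha[:-1, :, None] * A[None] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood

    # M-step: re-estimate parameters from expected counts.
    new_pi = gamma[0]
    new_A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(0)
    new_B /= gamma.sum(0)[:, None]
    return new_A, new_B, new_pi, likelihood
```

Each iteration is guaranteed not to decrease the likelihood, which is the EM monotonicity property the first abstract above connects to Alternating Minimization.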
We present new algorithms for parameter estimation of HMMs. By adapting a framework used for supervi...
This paper proposes an iterative natural gradient algorithm to perform the optimization of switching...
This paper attempts to overcome the local convergence problem of the Expectation Maximization (EM) b...
This paper presents a brief comparison of two information geometries as they are used to describe th...
In this paper we investigate the performance of penalized variants of the forwards-backwards algorit...
We present an asymptotic analysis of Viterbi Training (VT) and contrast it with a more conventional ...
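Viterbi Training (also called segmental K-means, mentioned in the abstracts above) replaces the soft posteriors of Baum-Welch with hard counts along the single best state path. A minimal sketch of one such step, with illustrative names (`A`, `B`, `pi`, `obs`) and a small additive smoothing constant as assumptions:

```python
# Minimal sketch: one Viterbi Training (hard-EM / segmental K-means) step for
# a discrete HMM. Instead of posterior-weighted counts, parameters are
# re-estimated from the single most likely state sequence.
import numpy as np

def viterbi_path(A, B, pi, obs):
    """Most likely state sequence via the Viterbi recursion (log domain)."""
    T, N = len(obs), A.shape[0]
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)       # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = np.zeros(T, dtype=int)
    path[-1] = logd.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

def viterbi_training_step(A, B, pi, obs, eps=1e-6):
    """Re-estimate parameters from hard counts along the best path.
    eps is an assumed smoothing constant to avoid zero probabilities."""
    path = viterbi_path(A, B, pi, obs)
    N, M = B.shape
    new_A = np.full((N, N), eps)
    new_B = np.full((N, M), eps)
    new_pi = np.full(N, eps)
    new_pi[path[0]] += 1.0
    for t in range(len(obs) - 1):
        new_A[path[t], path[t + 1]] += 1.0
    for t, o in enumerate(obs):
        new_B[path[t], o] += 1.0
    return (new_A / new_A.sum(axis=1, keepdims=True),
            new_B / new_B.sum(axis=1, keepdims=True),
            new_pi / new_pi.sum())
```

The hard assignment makes each step cheaper than full forward-backward, at the cost of optimizing a different (path-based) objective, which is the contrast the VT analysis above examines.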
We present a framework for learning in hidden Markov models with distributed state representations...
We present a learning algorithm for hidden Markov models with continuous state and observation space...
Hidden Markov models assume a sequence of random variables to be conditionally independent g...
This paper addresses the problem of Hidden Markov Models (HMM) training and in...
In this chapter, we consider the issue of Hidden Markov Model (HMM) training. First, HMMs are introd...
In this paper, a novel learning algorithm for Hidden Markov Models (HMMs) has been devised. The key ...
The expectation maximization (EM) algorithm is the standard training method for hidden Markov models (HMMs). ...