In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimum change to decoding tools i...
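As background for the combination schemes these abstracts discuss, here is a minimal sketch of linearly interpolating two component unigram LMs. This is one common, simple combination method, not the WFST-based approach the paper above proposes; the toy models and the `interpolate` helper are illustrative assumptions.

```python
def interpolate(p1: dict, p2: dict, lam: float) -> dict:
    """Mix two LMs: P(w) = lam * P1(w) + (1 - lam) * P2(w) over the joint vocabulary."""
    vocab = set(p1) | set(p2)
    return {w: lam * p1.get(w, 0.0) + (1 - lam) * p2.get(w, 0.0) for w in vocab}

# Two toy component models, e.g. trained on different genres.
news = {"market": 0.6, "game": 0.1, "the": 0.3}
sports = {"market": 0.1, "game": 0.6, "the": 0.3}

mixed = interpolate(news, sports, lam=0.5)
print(round(mixed["game"], 2))  # 0.35
```

A WFST framework expresses this same mixing as operations (union, composition, weight pushing) on transducers, so the decoder needs no LM-specific code changes.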
In this paper, we present a general algorithmic framework based on WFSTs for implementing a variety ...
State-of-the-art computer-assisted translation engines are based on a statistical prediction engin...
Recurrent neural network language models (RNNLMs) have recently been shown to outperform the venerable n-...
We survey the use of weighted finite-state transducers (WFSTs) in speech recognition. We show that W...
This paper describes a new approach to language model adaptation for speech recognition based on the...
Standard speaker adaptation algorithms perform poorly on dysarthric speech because of the limited ph...
Language models (LMs) are often constructed by building multiple individual component models that ar...
Building a stochastic language model (LM) for speech recognition requires a large corpus of target ...
This paper addresses issues in part of speech disambiguation using finite-state transducers and pres...
Abstract—In this paper, we present a novel version of discriminative training for N-gram language mo...
Refereed conference with published proceedings. This paper will focus on the conceptual and technical desig...
In Spoken Language Understanding (SLU) the task is to extract important information from audio comma...
Abstract—Text corpus size is an important issue when building a language model (LM), in particular wh...