In limited data domains, many effective language modeling techniques construct models whose parameters are estimated on an in-domain development set. However, in some domains, no such data exist beyond the unlabeled test corpus. In this work, we explore the iterative use of the recognition hypotheses for unsupervised parameter estimation. We also evaluate the effectiveness of supervised adaptation using varying amounts of user-provided transcripts of utterances selected via multiple strategies. While unsupervised adaptation obtains 80% of the potential error reductions, it is outperformed by using only 300 words of user transcription. By transcribing the lowest-confidence utterances first, we obtain a further effective word error rate reduction.
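The abstract describes two adaptation regimes: an unsupervised loop that re-estimates language model parameters from the recognizer's own hypotheses, and a supervised regime in which the lowest-confidence utterances are transcribed first under a small word budget. The sketch below illustrates both ideas; the decoder interface, the count-interpolation scheme, the interpolation weight, and the 300-word budget default are assumptions made for illustration, not the paper's exact formulation.

```python
# Minimal sketch of the two adaptation strategies described above.
# The `decoder` object (with a decode(utterance, lm_counts) -> (hypothesis, confidence)
# method), the interpolation weight, and the unigram-count view of the LM are
# illustrative assumptions, not the paper's exact method.

from collections import Counter

LAMBDA = 0.5          # interpolation weight between background and hypothesis counts (assumed)
NUM_ITERATIONS = 3    # number of self-training passes (assumed)


def adapt_unsupervised(decoder, background_counts, audio_corpus):
    """Iteratively re-estimate LM counts from the recognizer's own hypotheses."""
    lm_counts = Counter(background_counts)
    for _ in range(NUM_ITERATIONS):
        hyp_counts = Counter()
        for utt in audio_corpus:
            hypothesis, _confidence = decoder.decode(utt, lm_counts)
            hyp_counts.update(hypothesis.split())
        # Interpolate hypothesis counts with the background model.
        lm_counts = Counter({
            w: (1 - LAMBDA) * background_counts.get(w, 0) + LAMBDA * hyp_counts[w]
            for w in set(background_counts) | set(hyp_counts)
        })
    return lm_counts


def select_for_transcription(decoder, lm_counts, audio_corpus, word_budget=300):
    """Pick the lowest-confidence utterances first, up to a transcription word budget."""
    scored = []
    for utt in audio_corpus:
        hypothesis, confidence = decoder.decode(utt, lm_counts)
        scored.append((confidence, utt, len(hypothesis.split())))
    scored.sort(key=lambda item: item[0])  # least confident first
    selected, used = [], 0
    for confidence, utt, n_words in scored:
        if used + n_words > word_budget:
            break
        selected.append(utt)
        used += n_words
    return selected
```

The transcripts collected for the selected utterances would then be folded into the LM the same way the hypotheses are in `adapt_unsupervised`, replacing error-prone automatic transcriptions with human-provided ones where they matter most.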