The objective function of the Boosting training method in acoustic modeling aims to reduce the utterance-level error rate, which differs from the most commonly used performance metric in speech recognition, word error rate. This paper proposes that combining N-best list re-ranking with ROVER can partly address this mismatch. In particular, model combination is applied to the re-ranked hypotheses rather than to the original top-1 hypotheses, and is carried out at the word level. Our experiments show an improvement in system performance. In addition, we describe and evaluate a new confidence feature that measures the correctness of frame-level decoding results.
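The two-stage pipeline described above can be illustrated with a minimal sketch: each N-best hypothesis is re-scored by combining its decoder score with a confidence feature, and the re-ranked top hypotheses from several systems are then combined by word-level voting. The interpolation weight, the scores, and the alignment-free voting are assumptions for illustration; the real ROVER procedure aligns hypotheses into a word transition network before voting.

```python
# Hypothetical sketch: rerank N-best hypotheses with a confidence feature,
# then combine per-system top-1 outputs by word-level majority voting
# (a simplified, alignment-free stand-in for ROVER).
from collections import Counter

def rerank_nbest(nbest, weight=0.5):
    """Rerank (hypothesis, decoder_score, confidence) triples.

    The combined score is a weighted sum of the decoder score and the
    confidence feature; `weight` is an assumed tuning parameter.
    """
    return sorted(nbest, key=lambda h: h[1] + weight * h[2], reverse=True)

def vote_word_level(hypotheses):
    """Majority-vote each word position across the hypotheses,
    truncating to the shortest hypothesis (no alignment performed)."""
    words = [h.split() for h in hypotheses]
    length = min(len(w) for w in words)
    return " ".join(
        Counter(w[i] for w in words).most_common(1)[0][0]
        for i in range(length)
    )

# Toy usage: rerank one system's N-best list, then vote across systems.
nbest = [("recognize speech", 1.2, 0.9), ("wreck a nice beach", 1.5, 0.1)]
top = rerank_nbest(nbest)[0][0]  # confidence promotes "recognize speech"
combined = vote_word_level(["recognize speech", "recognize speech",
                            "recognise speech"])
```

Applying voting to the *re-ranked* top-1 hypotheses, rather than the decoder's original top-1 outputs, is the point of the proposed ordering of the two stages.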
Boosting approaches are based on the idea that high-quality learning algorithms can be formed by rep...
This paper proposes to use Word Confidence Estimation (WCE) information to improve MT outputs via N...
This paper compares the performance of Boosting and non-Boosting training algorithms in large vocabu...
Conventional Boosting algorithms for acoustic modeling have two notable weaknesses. (1) The objectiv...
This paper investigates two important issues in constructing and combining ensembles of acoustic mo...
We apply boosting techniques to the problem of word error rate minimisation in speech recognition. T...
This paper describes our work on applying ensembles of acoustic models to the problem of large voca...
We propose a hypothesis reordering technique to improve speech recognition accuracy in a dialog syst...
This paper is an empirical study on the performance of different discriminative approaches to rerank...
We study the use of morphosyntactic knowledge to process N-best lists. We prop...
We propose a simple yet effective method for improving speech recognition by reranking the N-best sp...