Although multiple cues, such as different signal-processing techniques and feature representations, have been used for speech recognition in adverse acoustic environments, how to exploit the full benefit of these cues remains largely unsolved. In this paper, a novel search strategy is proposed: during parallel decoding of different feature streams, the intermediate outputs are cross-referenced to reduce pruning errors. Experimental results show that this method significantly improves recognition performance on a noisy large-vocabulary continuous speech recognition task.
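The cross-referenced pruning idea can be sketched concretely. The Python fragment below is an illustrative assumption rather than the paper's implementation: it runs frame-synchronous beam search over several feature streams and prunes a partial hypothesis only if it falls outside the beam in every stream, so evidence from one stream can rescue hypotheses another stream would have discarded. All names (decode_parallel, score_fn, beam, topk) are hypothetical.

```python
import math

def decode_parallel(streams, vocab, score_fn, beam=10.0, topk=50):
    """Sketch of cross-referenced pruning across parallel feature streams.

    streams: list of feature sequences, one per cue/front-end (same length).
    score_fn(stream_id, frame, history, token): per-frame log-score increment.
    Returns the best (history, score) pair found by each stream's decoder.
    """
    n_frames = len(streams[0])
    # One hypothesis set per stream: {history (tuple of tokens): log score}.
    hyps = [{(): 0.0} for _ in streams]

    for t in range(n_frames):
        # Expand every surviving hypothesis in every stream.
        expanded = []
        for s, frames in enumerate(streams):
            new = {}
            for hist, sc in hyps[s].items():
                for tok in vocab:
                    h2 = hist + (tok,)
                    new[h2] = max(new.get(h2, -math.inf),
                                  sc + score_fn(s, frames[t], hist, tok))
            expanded.append(new)

        # Cross-reference: keep any hypothesis that lies within the beam of
        # at least one stream, instead of pruning each stream independently.
        best = [max(h.values()) for h in expanded]
        keep = set()
        for s, new in enumerate(expanded):
            keep |= {h for h, sc in new.items() if sc >= best[s] - beam}

        hyps = []
        for s, new in enumerate(expanded):
            kept = {h: sc for h, sc in new.items() if h in keep}
            # Cap each stream's hypothesis set to keep the search tractable.
            hyps.append(dict(sorted(kept.items(),
                                    key=lambda kv: kv[1],
                                    reverse=True)[:topk]))

    return [max(h.items(), key=lambda kv: kv[1]) for h in hyps]
```

With score_fn returning per-frame acoustic log-likelihoods from, say, two different front-ends, the union-of-beams step in the middle loop is where the intermediate outputs are cross-referenced: a hypothesis that looks weak under one feature stream survives as long as the other stream still scores it competitively.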