The introduction of deep neural networks (DNNs) has advanced the performance of automatic speech recognition (ASR) tremendously. On a wide range of ASR tasks, DNN models show superior performance to the traditional Gaussian mixture models (GMMs). Despite these significant advances, DNN models still suffer from data scarcity, speaker mismatch and environment variability. This thesis addresses these challenges by fully exploiting DNNs' ability to integrate heterogeneous features under the same optimization objective. We propose to improve DNN models under these challenging conditions by incorporating context information into DNN training. On a new language, the amount of training data may become highly limited. This data scarcity caus...
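One common way to give a DNN acoustic model context information is to splice each acoustic frame with its neighbors before feeding it to the network. The sketch below is illustrative only and assumes "context" means neighboring frames; the function name and splicing widths are not from the abstract above.

```python
import numpy as np

def splice_frames(feats, left=5, right=5):
    """Stack each frame with `left` preceding and `right` following frames.

    feats: array of shape (T, D) holding T acoustic frames of dimension D.
    Edge frames are padded by repeating the first/last frame, a common
    convention in DNN acoustic-model front-ends.
    Returns an array of shape (T, D * (left + 1 + right)).
    """
    T, D = feats.shape
    padded = np.concatenate(
        [np.repeat(feats[:1], left, axis=0),    # pad start by repetition
         feats,
         np.repeat(feats[-1:], right, axis=0)], # pad end by repetition
        axis=0)
    # Flatten each window of (left + 1 + right) frames into one input vector.
    return np.stack(
        [padded[t:t + left + 1 + right].reshape(-1) for t in range(T)])

# Tiny demo: 4 frames of 3-dimensional features, +/-1 frame of context.
demo = np.arange(12, dtype=float).reshape(4, 3)
spliced = splice_frames(demo, left=1, right=1)
```

Each spliced row now carries three consecutive frames, so the network sees a short temporal window rather than a single frame.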
We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-end...
Conventional speech recognition systems consist of feature extraction, acoustic and language modelin...
We investigate two strategies to improve the context-dependent deep neural network hidden Markov ...
Automatic speech recognition (ASR) is a key core technology for the information age. ASR systems hav...
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Comp...
When deployed in automated speech recognition (ASR), deep neural networks (DNNs) can be treated as a...
Manual transcription of audio databases for the development of automatic speech recognition (ASR) sy...
In this work, we present a comprehensive study on the use of deep neural networks (DNNs) for...
Recently, deep neural networks (DNNs) have outperformed traditional acoustic models on a variety of ...
© 2014 IEEE. Deep neural networks (DNNs) have shown a great promise in exploiting out-of-language da...
The development of a speech recognition system requires at least three resources: a large labeled sp...
Speaker adaptive training (SAT) is a well studied technique for Gaussian mixture acoustic models (GM...
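A simple way to carry the SAT idea over to DNN acoustic models is to make the input speaker-aware, for example by appending a fixed per-speaker vector (such as an i-vector) to every frame. This is a minimal sketch under that assumption; the function and variable names are illustrative, not taken from the abstract above.

```python
import numpy as np

def append_speaker_vector(feats, spk_vec):
    """Append one fixed per-speaker vector to every acoustic frame.

    feats:   array of shape (T, D) with T frames of dimension D.
    spk_vec: 1-D array of length S summarizing the speaker (e.g. an i-vector).
    Returns an array of shape (T, D + S); the DNN can then learn to
    normalize away speaker variability from the appended summary.
    """
    T = feats.shape[0]
    tiled = np.tile(spk_vec, (T, 1))  # repeat the speaker vector per frame
    return np.concatenate([feats, tiled], axis=1)

# Tiny demo: 4 zero frames of dimension 3, a 2-dimensional speaker vector.
frames = np.zeros((4, 3))
spk = np.array([1.0, 2.0])
adapted = append_speaker_vector(frames, spk)
```

Because the speaker vector is constant across an utterance, it acts as a side input conditioning the network on speaker identity, much as SAT transforms condition GMMs.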
A defining problem in spoken language identification (LID) is how to design effective representation...