We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low-resource settings. To circumvent the lack of sufficient training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-supervised training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain training data. While close to three-fourths of these gains come from DNN-based features, the remainder comes from semi-supervised training. Index Terms: Low resource, speech recognition, deep neural networks, semi-...
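The abstract above does not detail its semi-supervised recipe, but a common form of such training is self-training: a seed model decodes untranscribed audio, and only high-confidence hypotheses are kept as extra training material. A minimal sketch of that selection step, with a hypothetical `select_pseudo_labels` helper and toy confidence values (all assumptions, not the paper's actual pipeline):

```python
def select_pseudo_labels(hypotheses, threshold=0.9):
    """Keep only hypotheses whose decoder confidence clears the threshold.

    hypotheses: iterable of (utterance_id, transcript, confidence) tuples.
    Returns (utterance_id, transcript) pairs usable as pseudo-labels.
    """
    return [(utt, text) for utt, text, conf in hypotheses if conf >= threshold]

# A seed model's 1-best outputs on untranscribed audio (toy values):
hyps = [
    ("utt001", "turn on the light", 0.96),
    ("utt002", "uh unclear mumble", 0.41),
    ("utt003", "what time is it", 0.92),
]
selected = select_pseudo_labels(hyps)  # utt001 and utt003 survive
```

The selected pairs would then be pooled with the (here, one hour of) transcribed in-domain data to retrain the front-end.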
While Deep Neural Networks (DNNs) have achieved tremendous success for large vocabulary continuous ...
Manual transcription of audio databases for automatic speech recognition (ASR) training is a costly ...
For resource rich languages, recent works have shown Neural Network based Language Models (NNLMs)...
We investigate two strategies to improve the context-dependent deep neural network hidden Markov mod...
In this paper we propose a Shared Hidden Layer Multi-softmax Deep Neural Network (SHL-MDNN) approach...
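The SHL-MDNN abstract is truncated before any architectural detail, but the name itself implies hidden layers shared across languages with one softmax output layer per language. A minimal forward-pass sketch under that reading (layer sizes, language names, and initialization are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SHLMDNN:
    """Shared hidden layers feeding one softmax head per language."""

    def __init__(self, in_dim, hidden_dims, out_dims_per_lang):
        self.shared = []
        d = in_dim
        for h in hidden_dims:
            # small random weights, zero biases (toy initialization)
            self.shared.append((rng.standard_normal((d, h)) * 0.1, np.zeros(h)))
            d = h
        # one (W, b) softmax head per language, sized to its senone set
        self.heads = {lang: (rng.standard_normal((d, k)) * 0.1, np.zeros(k))
                      for lang, k in out_dims_per_lang.items()}

    def forward(self, x, lang):
        h = x
        for W, b in self.shared:          # shared feature extractor
            h = relu(h @ W + b)
        W, b = self.heads[lang]           # language-specific output layer
        return softmax(h @ W + b)

net = SHLMDNN(in_dim=40, hidden_dims=[128, 128],
              out_dims_per_lang={"en": 500, "fr": 300})
x = rng.standard_normal((8, 40))          # a batch of 8 acoustic frames
p_en = net.forward(x, "en")
p_fr = net.forward(x, "fr")
```

The point of the shared stack is that gradients from every language's softmax update the same hidden layers, so low-resource languages benefit from the others' data.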
The introduction of deep neural networks (DNNs) has advanced the performance of automatic speech rec...
As a feed-forward architecture, the recently proposed maxout networks integrate dropout naturally an...
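The maxout abstract is cut off, but the maxout unit itself is well defined: each hidden unit outputs the maximum over k affine "pieces" of its input, and dropout is then applied to the resulting activations. A minimal sketch with toy shapes (the piece count, layer sizes, and dropout rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def maxout_layer(x, W, b):
    """Maxout: each hidden unit takes the max of k affine pieces.

    x: (n, d_in), W: (d_in, units, k), b: (units, k) -> output (n, units).
    """
    z = np.einsum("nd,duk->nuk", x, W) + b
    return z.max(axis=-1)

def dropout(h, rate, rng):
    """Inverted dropout: zero activations with probability `rate`, rescale."""
    mask = rng.random(h.shape) >= rate
    return h * mask / (1.0 - rate)

x = rng.standard_normal((4, 10))           # batch of 4 inputs
W = rng.standard_normal((10, 6, 3)) * 0.1  # 6 units, k = 3 pieces each
b = np.zeros((6, 3))
h = dropout(maxout_layer(x, W, b), rate=0.5, rng=rng)
```

Because the unit is a max over linear pieces rather than a fixed nonlinearity, dropping pieces at training time still leaves a piecewise-linear function, which is the sense in which maxout and dropout combine naturally.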
The development of a speech recognition system requires at least three resources: a large labeled sp...
Most state-of-the-art speech systems use deep neural networks (DNNs). These sy...