We investigate multilingual modeling in the context of a deep neural network (DNN) – hidden Markov model (HMM) hybrid, where the DNN outputs are used as the HMM state likelihoods. By viewing neural networks as a cascade of feature extractors followed by a logistic regression classifier, we hypothesise that the hidden layers, which act as feature extractors, will be transferable between languages. As a corollary, we propose that training the hidden layers on multiple languages makes them more suitable for such cross-lingual transfer. We experimentally confirm these hypotheses on the GlobalPhone corpus using seven languages from three different language families: Germanic, Romance, and Slavic. The experiments demonstrate substantial im...
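The shared-extractor architecture described above can be made concrete with a small sketch. The following is an assumed illustration, not the paper's implementation: a PyTorch-style model whose hidden-layer stack is shared across languages, with a separate softmax output layer per language predicting that language's HMM tied-state posteriors. All dimensions, language codes, and state counts are placeholders.

```python
# Minimal sketch of a multilingual DNN-HMM acoustic model with shared hidden
# layers and per-language output layers. Illustrative assumption only; sizes,
# language codes, and state counts are placeholders, not the paper's values.
import torch
import torch.nn as nn

class MultilingualDNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_hidden, lang_states):
        super().__init__()
        layers, dim = [], input_dim
        for _ in range(num_hidden):
            layers += [nn.Linear(dim, hidden_dim), nn.Sigmoid()]
            dim = hidden_dim
        # Hidden layers act as a feature extractor shared across languages.
        self.shared = nn.Sequential(*layers)
        # One logistic-regression-style classifier per language, each
        # predicting that language's HMM tied-state posteriors.
        self.heads = nn.ModuleDict(
            {lang: nn.Linear(hidden_dim, n) for lang, n in lang_states.items()}
        )

    def forward(self, feats, lang):
        return self.heads[lang](self.shared(feats))

# Hypothetical usage: each training language gets its own head; two shown here.
model = MultilingualDNN(
    input_dim=440,                          # e.g. spliced acoustic frames (assumed)
    hidden_dim=1024,
    num_hidden=6,
    lang_states={"DE": 2500, "PT": 2200},   # placeholder tied-state counts
)
logits = model(torch.randn(8, 440), lang="DE")  # pre-softmax state scores
```

Under this view, cross-lingual transfer amounts to reusing the shared stack for a new language and training (or fine-tuning) only that language's output layer.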
Recently there has been a lot of interest in neural network based language models. These models typi...
In this work we investigate the usage of TV audio data for cross-language training of multi-lingual...
Deep neural networks (DNNs) have shown great promise in exploiting out-of-language da...
This paper presents a study on multilingual deep neural network (DNN) based acoustic modeling and i...
In this work, we propose several deep neural network architectures that are able to leverage data fr...
Deep neural network (DNN) acoustic models can be adapted to under-resourced languages by transferrin...
Different training and adaptation techniques for multilingual Automatic Speech Recognition (ASR) are...
We describe a novel way to implement subword language models in speech recognition systems based on ...
Multilingual speech recognition systems mostly benefit low resource languages but suffer degradation...
Multilingual speech recognition has drawn significant attention as an effective way to compensate da...
Exploiting cross-lingual resources is an effective way to compensate for data scarcity of low resour...
We investigate two strategies to improve the context-dependent deep neural network hidden Markov mod...
Posterior-based or bottleneck features derived from neural networks trained on out-of-domain data m...
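As a rough illustration of the bottleneck-feature idea mentioned here (an assumed sketch, not the cited system), the activations of a deliberately narrow hidden layer can be taken as features for a downstream GMM-HMM; the classifier above the bottleneck is used only during training. Dimensions and the state count below are placeholders.

```python
# Assumed sketch of a bottleneck-feature extractor, not the cited system.
import torch
import torch.nn as nn

class BottleneckDNN(nn.Module):
    def __init__(self, input_dim=440, hidden_dim=1024, bn_dim=40, num_states=2000):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
        )
        self.bottleneck = nn.Linear(hidden_dim, bn_dim)   # narrow layer
        # State classifier above the bottleneck, used only to train the network.
        self.post = nn.Sequential(nn.Sigmoid(), nn.Linear(bn_dim, num_states))

    def forward(self, x):
        return self.post(self.bottleneck(self.pre(x)))

    def extract(self, x):
        # Bottleneck features: discard the classifier, keep the narrow activations.
        return self.bottleneck(self.pre(x))
```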
This work deals with non-native children’s speech and investigates both multi-task and t...