User interaction with voice-powered agents generates large amounts of unlabeled utterances. In this paper, we explore techniques to efficiently transfer the knowledge from these unlabeled utterances to improve model performance on Spoken Language Understanding (SLU) tasks. We use Embeddings from Language Models (ELMo) to take advantage of unlabeled data by learning contextualized word representations. Additionally, we propose ELMo-Light (ELMoL), a faster and simpler unsupervised pre-training method for SLU. Our findings suggest that unsupervised pre-training on large corpora of unlabeled utterances leads to significantly better SLU performance than training from scratch, and that it can even outperform conventional supervised transfer. Additio...
This paper addresses the problem of multi-domain spoken language understanding (SLU) where domain de...
In this article, we propose a simple yet effective approach to train an end-to-end speech recognitio...
Models for statistical spoken language understanding (SLU) systems are conventionally trained using ...
When building spoken dialogue systems for a new domain, a major bottleneck is developing a spoken la...
This paper presents a method for reducing the effort of transcribing user utterances to develop lang...
Training a text-to-speech (TTS) model requires a large scale text labeled speech corpus, which is tr...
The current generation of neural network-based natural language processing models excels at learning...
Spoken language understanding (SLU) tasks such as goal estimation and intention identification from...
Voice Assistants such as Alexa, Siri, and Google Assistant typically use a two-stage Spoken Language...
Recent breakthroughs in deep learning often rely on representation learning and knowledge transfer. ...
The lack of publicly available evaluation data for low-resource languages limits progress in Spoken ...
© 2018 International Speech Communication Association. All rights reserved. Designing a spoken langu...
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Comp...