Conversational agents such as Alexa and Google Assistant constantly need to increase their language understanding capabilities by adding new domains. A massive amount of labeled data is required to train each new domain. While domain adaptation alleviates the annotation cost, prior approaches suffer from increased training time and suboptimal concept alignments. To tackle this, we introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that uses slot descriptions to transfer reusable concepts across domains, and enjoys efficient training without any explicit concept alignments. Extensive experimentation over a dataset of 10 domains relevant to our commercial personal digital assistant shows that our m...
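As a hedged illustration of the core idea (not the authors' implementation), the sketch below conditions a token tagger on an embedding of the slot's natural-language description, so an unseen slot can be tagged from its description alone; all module names and sizes are invented:

```python
import torch
import torch.nn as nn

class DescriptionConditionedTagger(nn.Module):
    """Toy zero-shot slot tagger: scores each utterance token against a
    pooled embedding of the slot description (illustrative sketch only)."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        # Per-token IOB scores (O, B, I) for the one slot named by the description.
        self.score = nn.Linear(2 * dim + dim, 3)

    def forward(self, utt_ids, desc_ids):
        tok, _ = self.encoder(self.embed(utt_ids))       # (B, T, 2*dim)
        desc = self.embed(desc_ids).mean(dim=1)          # (B, dim): pooled description
        desc = desc.unsqueeze(1).expand(-1, tok.size(1), -1)
        return self.score(torch.cat([tok, desc], dim=-1))  # (B, T, 3)
```

Running such a tagger once per candidate slot and merging the per-slot IOB predictions is one way description-driven models avoid explicit concept alignment across domains.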
Existing spoken dialogue systems are typically designed to operate in a static and well-defined doma...
Slot filling is a critical task in natural language understanding (NLU) for dialog systems. State-of...
For many (minority) languages, the resources needed to train large models are not available. We inve...
Few-shot slot tagging is an important task in dialogue systems and attracts much attention from resear...
Slot filling is a core operation for utterance understanding in task-oriented dialogue systems. Slot...
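For concreteness, slot filling is commonly cast as per-token sequence labeling under a BIO scheme; the toy utterance and slot names below are invented for illustration:

```python
# Toy example of slot filling as BIO sequence labeling (slot names invented).
utterance = ["book", "a", "flight", "to", "san",    "francisco", "tomorrow"]
bio_tags  = ["O",    "O", "O",      "O",  "B-city", "I-city",    "B-date"]
assert len(utterance) == len(bio_tags)  # exactly one label per token
```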
Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the...
This paper addresses zero-shot slot filling, which tries to build a system that can generalize to un...
Recent task-oriented dialogue systems learn a model from annotated dialogues, and such dialogues a...
Project page: https://astra-vision.github.io/PODA/
Domain adaptation has been vastly investigated in ...
Adapter modules have emerged as a general parameter-efficient means to specialize a pretrained encod...
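As a minimal sketch of the adapter idea (the hidden and bottleneck sizes are assumptions, not taken from the paper), a small bottleneck layer with a residual connection is inserted into each frozen pretrained encoder layer, and only these few parameters are trained per task or domain:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        nn.init.zeros_(self.up.weight)  # start near-identity: the residual
        nn.init.zeros_(self.up.bias)    # path passes inputs through unchanged

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))
```

The near-identity initialization means the pretrained encoder's behavior is preserved at the start of fine-tuning, which is a common design choice for adapters.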
Pretrained language models have shown success in various areas of natural language processing, inclu...
Developing intelligent systems for vision and language understanding has long been a cruci...
In conventional domain adaptation for speaker diarization, a large collection ...
This paper proposes a novel Language Model (LM) adaptation method based on Minimum Discrimination In...
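In the standard MDI formulation (the paper's specific features and constraints may differ), the adapted distribution $p$ is chosen to be as close as possible in KL divergence to a background LM $p_0$ while matching feature expectations estimated from the adaptation data, which yields an exponential tilting of $p_0$:

```latex
p^{*} = \arg\min_{p} D\!\left(p \,\|\, p_{0}\right)
\quad \text{s.t.} \quad \mathbb{E}_{p}[f_{i}] = \hat{\mathbb{E}}[f_{i}] \;\; \forall i,
\qquad
p^{*}(w \mid h) = \frac{p_{0}(w \mid h)\, \exp\!\big(\textstyle\sum_{i} \lambda_{i} f_{i}(h, w)\big)}{Z_{\lambda}(h)}
```

Here $f_i$ are features of history $h$ and word $w$, $\lambda_i$ are the Lagrange multipliers fit to the constraints, and $Z_{\lambda}(h)$ normalizes over the vocabulary.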
We propose online unsupervised domain adaptation (DA), which is performed incrementally as data com...
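One common realization of online unsupervised DA (not necessarily this paper's method) is streaming self-training: the model updates on its own confident predictions as unlabeled target data arrives. The model interface below is hypothetical:

```python
# Hypothetical sketch: incremental self-training on a stream of unlabeled data.
def online_adapt(model, stream, threshold=0.9):
    for x in stream:
        probs = model.predict_proba(x)   # model's own belief on the new point
        if probs.max() > threshold:      # keep only confident pseudo-labels
            model.partial_fit([x], [probs.argmax()])
```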