Research on (multi-domain) task-oriented dialog (TOD) has predominantly focused on the English language, primarily due to the shortage of robust TOD datasets in other languages, preventing the systematic investigation of cross-lingual transfer for this crucial NLP application area. In this work, we introduce Multi2WOZ, a new multilingual multi-domain TOD dataset, derived from the well-established English dataset MultiWOZ, that spans four typologically diverse languages: Chinese, German, Arabic, and Russian. In contrast to concurrent efforts, Multi2WOZ contains gold-standard dialogs in target languages that are directly comparable with development and test portions of the English dataset, enabling reliable and comparative estimates of cross-...
Cross-lingual transfer learning with large multilingual pre-trained models can be an effective appro...
This paper aims for a potential architectural improvement for multilingual learning and asks: Can di...
Large pre-trained multilingual models such as mBERT and XLM-R enabled effective cross-lingual zero-s...
In task-oriented dialogue (ToD), a user holds a conversation with an artificial agent to complete a ...
Recent progress in task-oriented neural dialogue systems is largely focused on a handful of language...
Supervised deep learning-based approaches have been applied to task-oriented dialog and have proven ...
Achieving robust language technologies that can perform well across the world's many languages is a ...
The research of open-domain, knowledge-grounded dialogue systems has been advancing rapidly due to t...
Recently, data-driven task-oriented dialogue systems have achieved promising performance in English....
Modern virtual assistants use internal semantic parsing engines to convert user utterances to action...
Abstract: The subject area of multilingual natural language processing (NLP) is concerned with the p...
The main goal behind state-of-the-art pretrained multilingual models such as multilingual BERT and X...
Task-oriented dialogue (ToD) systems have been mostly created for high-resource languages, such as E...
Slot labeling (SL) is a core component of task-oriented dialogue (ToD) systems, where slots and corr...
To support machine learning of cross-language prosodic mappings and other ways to improve speech-to-...