Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, and dialogue summarization. While dialogue corpora are abundantly available, labeled data for specific learning tasks can be highly scarce and expensive. In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks, where the training objectives are given naturally according to the nature of the utterance and the structure of the multi-role conversation. Meanwhile, in order to locate essential information for dialogue summarization/extraction, the pretraining process enables external knowledge integration. The proposed fine-tuned pretraining mechanism is compreh...
The goal of building dialogue agents that can converse with humans naturally has been a long-standin...
Collecting and annotating task-oriented dialogues is time-consuming and costly. Thus, zero and few s...
Every model is only as strong as the data that it is trained on. In this paper, we present a new dat...
A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles. T...
Unsupervised machine learning approaches hold great promise for recognizing dialogue acts, but the...
Richly annotated dialogue corpora are essential for new research directions in statistical learning ...
Sequence to sequence models attempt to capture the correlation between all the words in the input an...
Machine learning methods such as reinforcement learning applied to dialogue strategy optimization ha...
Building an intelligent dialogue system with the ability to select a proper response according to a ...
This report introduces DIA-MOLE, a tool that supports an engineering-oriented approach tow...
Understanding user utterances in human-computer spoken dialogue systems involves a multi-level pragm...
Learning dialogue management models poses significant challenges. In a complex task-oriented domain ...
This paper introduces an engineering-oriented approach towards dialogue modelling. While dialogue mo...
Pre-trained models have proved to be powerful in enhancing task-oriented dialog systems. However, cu...
This paper compares several approaches for computing dialogue turn embeddings ...