Pre-trained language models have brought remarkable success to dialogue understanding (DU). However, current DU approaches usually employ an independent model for each distinct DU task, without considering knowledge shared across tasks. In this paper, we propose a unified generative dialogue understanding framework, named {\em UniDU}, to achieve effective information exchange across diverse DU tasks. Here, we reformulate all DU tasks into a unified prompt-based generative model paradigm. More importantly, a novel model-agnostic multi-task training strategy (MATS) is introduced to dynamically adapt the weights of diverse tasks for best knowledge sharing during training, based on the nature and availab...
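The abstract describes MATS only at a high level, so as a minimal illustrative sketch (not the paper's actual algorithm; all names, the softmax update rule, and the temperature parameter here are assumptions), dynamic task weighting during multi-task training can look like the following: each task's sampling weight is adapted from its recent average loss, so tasks that remain hard are sampled more often.

```python
import math
import random

def update_weights(losses, temperature=1.0):
    """Turn per-task running-average losses into sampling weights (softmax).

    Higher-loss (harder) tasks receive larger weights; `temperature`
    controls how sharply the distribution concentrates on hard tasks.
    """
    exps = [math.exp(loss / temperature) for loss in losses]
    total = sum(exps)
    return [e / total for e in exps]

def sample_task(tasks, weights, rng=None):
    """Pick the next task to draw a training batch from."""
    rng = rng or random.Random(0)
    return rng.choices(tasks, weights=weights, k=1)[0]

# Hypothetical DU tasks and running-average losses for illustration.
tasks = ["dst", "intent", "slot_filling", "dialogue_act", "summarization"]
losses = [2.1, 0.4, 0.9, 1.5, 2.8]
weights = update_weights(losses)
next_task = sample_task(tasks, weights)
```

Note this is only one plausible reweighting scheme; the paper's MATS additionally conditions on the "nature" of each task, which the sketch does not model.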
As an essential component in task-oriented dialogue systems, dialogue state tracking (DST) aims to t...
Although many pretrained models exist for text or images, there have been relatively fewer attempts ...
Multi-intent Spoken Language Understanding has great potential for widespread implementation. Jointl...
Building a universal conversational agent has been a long-standing goal of the dialogue research com...
Knowledge-grounded dialogue systems are challenging to build due to the lack of training data and he...
Every model is only as strong as the data that it is trained on. In this paper, we present a new dat...
Learning high-quality dialogue representations is essential for solving a variety of dialogue-orient...
We present NLU++, a novel dataset for natural language understanding (NLU) in task-oriented dialogue...
The goal of building dialogue agents that can converse with humans naturally has been a long-standin...
In this paper, we construct and train end-to-end neural network...
Pre-trained language models (PrLMs) have achieved great success on a wide range of natural language ...
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leverage...
Dialogue act annotations are important to improve response generation quality in task-oriented dialo...