Despite important progress, conversational systems often generate dialogues that sound unnatural to humans. We conjecture that the reason lies in a mismatch between training and testing conditions: agents are trained in a controlled “lab” setting but tested in the “wild”. During training, they learn to utter a sentence given a ground-truth dialogue history generated by human annotators. During testing, by contrast, the agents must interact with each other and hence deal with noisy data. We propose to fill this gap between the training and testing environments by training the model with mixed batches containing samples of both human-generated and machine-generated dialogues. We assess the validity of the proposed method on GuessWhat?!, a visual referential game.
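To make the mixed-batch recipe concrete, below is a minimal Python sketch of how such batches might be assembled. It is an illustration under stated assumptions, not the paper's implementation: the names `human_dialogues`, `generate_dialogue`, `batch_size`, and `mix_ratio` are hypothetical, and the mixing fraction is a free parameter.

```python
import random

def make_mixed_batch(human_dialogues, generate_dialogue,
                     batch_size=32, mix_ratio=0.5):
    """Build one training batch mixing ground-truth and machine-generated dialogues.

    `mix_ratio` is the fraction of the batch drawn from human annotations;
    the remainder is produced by the current model, so that at training time
    the agent already sees the noisier inputs it will face when interacting
    with another agent at test time. (All names here are assumptions for
    this sketch, not identifiers from the paper.)
    """
    n_human = int(batch_size * mix_ratio)
    # Ground-truth dialogues written by human annotators.
    batch = random.sample(human_dialogues, n_human)
    # Dialogues produced by the model itself (or a partner agent).
    batch += [generate_dialogue() for _ in range(batch_size - n_human)]
    # Shuffle so the two sources are interleaved within the batch.
    random.shuffle(batch)
    return batch

# Toy usage with stand-in data and a dummy generator.
human_dialogues = [f"human dialogue {i}" for i in range(100)]
batch = make_mixed_batch(human_dialogues, lambda: "machine dialogue",
                         batch_size=8)
print(batch)
```

In an actual training loop, `generate_dialogue` would sample dialogue histories from the current model so that the mix tracks the model's own error distribution, rather than returning a fixed string as in this toy example.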