Current open-domain conversational models can easily be made to talk in inadequate ways. Online learning from conversational feedback given by the conversation partner is a promising avenue for a model to improve and adapt, so as to generate fewer of these safety failures. However, current state-of-the-art models tend to react to feedback with defensive or oblivious responses. This makes for an unpleasant experience and may discourage conversation partners from giving feedback in the future. This work proposes SaFeRDialogues, a task and dataset of graceful responses to conversational feedback about safety failures. We collect a dataset of 10k dialogues demonstrating safety failures, feedback signaling them, and a response acknowledging the feedback.
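To make the described data layout concrete, below is a minimal sketch of what one training example in such a dataset might look like and how it could be flattened into a supervised fine-tuning pair. The field names ("context", "feedback", "recovery_response") and the JSON layout are illustrative assumptions, not the dataset's released schema.

    # Illustrative only: field names and layout are assumptions,
    # not the actual SaFeRDialogues schema (see the paper/release).
    import json

    record = {
        # Dialogue history up to and including the unsafe utterance.
        "context": [
            "Do you have any hobbies?",
            "I mostly spend my weekends making fun of people online.",
        ],
        # Feedback from the conversation partner signaling the safety failure.
        "feedback": "That's hurtful -- mocking people isn't a kind way to spend time.",
        # The target: a graceful response acknowledging the feedback.
        "recovery_response": (
            "You're right, that was insensitive of me. "
            "Thanks for pointing it out; I'll be more considerate."
        ),
    }

    # Flatten into an (input, target) pair: the model learns to produce
    # the acknowledging response given the history that ends with the
    # feedback utterance.
    source = "\n".join(record["context"] + [record["feedback"]])
    target = record["recovery_response"]
    print(json.dumps({"source": source, "target": target}, indent=2))

Pairing the history-plus-feedback input with the acknowledging response as the target is one straightforward way to turn such records into fine-tuning data for a generative dialogue model.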