Missing information is a common issue in dialogue summarization, where some information in the reference summaries is not covered in the generated summaries. To address this issue, we propose to utilize natural language inference (NLI) models to improve coverage without introducing factual inconsistencies. Specifically, we use NLI to compute fine-grained training signals that encourage the model to generate content in the reference summaries that has not yet been covered, as well as to distinguish between factually consistent and inconsistent generated sentences. Experiments on the DialogSum and SAMSum datasets confirm the effectiveness of the proposed approach in balancing coverage and faithfulness, validated with automatic metrics and hu...
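The fine-grained signals described in this abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes per-sentence entailment probabilities are already available (in practice they would come from an NLI model, e.g. one fine-tuned on MNLI), and the function names and threshold are illustrative assumptions.

```python
# Hypothetical sketch of NLI-based training signals for dialogue
# summarization. Assumptions: entailment probabilities per sentence are
# precomputed by some NLI model; the 0.5 threshold is illustrative.

def coverage_signals(ref_entail_probs, threshold=0.5):
    """For each reference-summary sentence, emit 1.0 if it is NOT yet
    entailed by the generated summary (i.e. missing content the model
    should be encouraged to produce), else 0.0."""
    return [1.0 if p < threshold else 0.0 for p in ref_entail_probs]

def faithfulness_signals(gen_entail_probs, threshold=0.5):
    """For each generated sentence, emit +1.0 if it is entailed by the
    source dialogue (factually consistent), else -1.0."""
    return [1.0 if p >= threshold else -1.0 for p in gen_entail_probs]

if __name__ == "__main__":
    # Entailment probability of each reference sentence given the
    # generated summary: the second sentence is uncovered.
    print(coverage_signals([0.9, 0.2, 0.8]))       # [0.0, 1.0, 0.0]
    # Entailment probability of each generated sentence given the
    # dialogue: the last sentence is likely inconsistent.
    print(faithfulness_signals([0.95, 0.7, 0.1]))  # [1.0, 1.0, -1.0]
```

Under these assumptions, the two signal vectors could then weight a per-sentence training loss, rewarding generation of uncovered reference content while penalizing sentences not supported by the dialogue.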
Prior research has shown that typical fact-checking models for stand-alone claims struggle with clai...
Grounded text generation systems often generate text that contains factual inconsistencies, hinderin...
Abstractive dialogue summarization has long been viewed as an important standalone task in natural l...
Factual inconsistencies in generated summaries severely limit the practical applications of abstract...
Despite the recent progress in language generation models, their outputs may not always meet user ex...
Dialogue summarization task involves summarizing long conversations while preserving the most salien...
While large language models (LLMs) already achieve strong performance on standard generic summarizat...
Neural abstractive summarization models are prone to generate summaries that are factually inconsist...
Dialogue summarization is a long-standing task in the field of NLP, and several data sets with dialo...
Despite the recent advances in abstractive text summarization, current summarization models still su...
In real-world scenarios with naturally occurring datasets, reference summaries are noisy and may con...
Canonical automatic summary evaluation metrics, such as ROUGE, focus on lexical similarity which can...
In this paper, we propose to leverage the unique characteristics of dialogues sharing commonsense kn...
Abstractive summarization has gained attention because of the positive performance of large-scale, p...
In this chapter, two empirical pilot studies on the role of politeness in dialogue summarization are...