Advances in large pretrained language models have significantly improved their performance on conditional language generation tasks, including summarization, albeit with hallucinations. To reduce hallucinations, conventional methods have proposed improving beam search or applying a fact checker as a post-processing step. In this paper, we investigate the use of the Natural Language Inference (NLI) entailment metric to detect and prevent hallucinations in summary generation. We propose an NLI-assisted beam re-ranking mechanism that computes entailment probability scores between the input context and the beams generated by the summarization model during saliency-enhanced greedy decoding. Moreover, a diversity metric is introduced to compare its effectiveness aga...
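The re-ranking mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `entailment_prob` stands in for a real NLI model (in practice, a cross-encoder returning P(entailment)) and here uses a toy lexical-overlap proxy so the sketch is self-contained; the interpolation weight `alpha` is an assumed hyperparameter.

```python
def entailment_prob(premise: str, hypothesis: str) -> float:
    """Toy stand-in for an NLI model's P(entailment | premise, hypothesis):
    the fraction of hypothesis tokens that also appear in the premise."""
    premise_tokens = {t.strip(".,").lower() for t in premise.split()}
    hyp_tokens = [t.strip(".,").lower() for t in hypothesis.split()]
    if not hyp_tokens:
        return 0.0
    return sum(t in premise_tokens for t in hyp_tokens) / len(hyp_tokens)


def rerank_beams(source: str, beams: list[tuple[str, float]], alpha: float = 0.5):
    """Re-rank candidate summaries by interpolating the decoder's own score
    with an entailment score against the input context.

    beams: (summary_text, model_score) pairs, higher model_score = better.
    alpha: weight on the NLI entailment term (assumed hyperparameter).
    """
    scored = [
        (text, (1 - alpha) * model_score + alpha * entailment_prob(source, text))
        for text, model_score in beams
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


source = "The company reported record profits in the third quarter."
beams = [
    ("The company lost money in the third quarter.", 0.9),  # fluent but hallucinated
    ("The company reported record profits.", 0.8),          # faithful to the source
]
ranked = rerank_beams(source, beams)
# With the entailment term included, the faithful beam overtakes the
# higher-scoring hallucinated one.
```

The key design choice is that the NLI score acts as a faithfulness penalty at decoding time, rather than as a post-hoc filter, so hallucinated beams can be demoted before a final summary is committed.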
This study mainly investigates two decoding problems in neural keyphrase generation: sequence length...
As one popular modeling approach for end-to-end speech recognition, attention-based encoder-decoder ...
Large Language Models (LLMs) have demonstrated remarkable proficiency in generating fluent text. How...
Large Language Models (LLMs) have demonstrated remarkable human-level natural language generation ca...
Hallucination is a known issue for neural abstractive summarization models. Recent work suggests tha...
Despite the recent progress in text summarization made by large language models (LLMs), they often g...
One of the challenges of developing a summarization model arises from the difficulty in measuring th...
Recently developed large language models have achieved remarkable success in generating fluent and c...
The faithfulness of abstractive text summarization at the named entities level is the focus of this ...
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the developme...
Hallucinations in text generation occur when the system produces text that is not grounded in the in...
Large Language Models (LLMs), such as ChatGPT/GPT-4, have garnered widespread attention owing to the...
Current abstractive summarization systems present important weaknesses which prevent their deploymen...
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are s...
The performance of natural language generation systems has improved substantially with modern neural...