Large Language Models (LLMs) like the GPT and LLaMA families have demonstrated exceptional capabilities in capturing and condensing critical contextual information, achieving state-of-the-art performance on the summarization task. However, community concerns about these models' hallucination issues continue to rise. LLMs sometimes generate factually hallucinated summaries, which can be extremely harmful in clinical-domain NLP tasks (e.g., clinical note summarization), where factually incorrect statements can lead to critically erroneous diagnoses. Fine-tuning LLMs using human feedback has shown promise for aligning LLMs to be factually consistent during generation, but such a training procedure requires high-quality human-annotated ...
Evaluating automatically generated text is generally hard due to the inherently subjective nature of ...
In this paper, we present an innovative Natural Language Processing (NLP) algorithm for summarizing ...
While there has been recent progress in abstractive summarization as applied to different domains in...
Despite the recent progress in language generation models, their outputs may not always meet user ex...
While large language models (LLMs) already achieve strong performance on standard generic summarizat...
Recent advances in large language models (LLMs) have demonstrated remarkable successes in z...
The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread ...
In real-world scenarios with naturally occurring datasets, reference summaries are noisy and may con...
Despite the recent progress in text summarization made by large language models (LLMs), they often g...
Text summarization is a critical Natural Language Processing (NLP) task with applications ranging fr...
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tas...
High-quality instruction-tuning data is critical to improving LLM capabilities. Existing data collec...
Automatic summarization methods are efficient but can suffer from low quality. In comparison, manual...
Factual inconsistencies in generated summaries severely limit the practical applications of abstract...
We present an empirical evaluation of various outputs generated by nine of the most widely-available...