We call into question the recently popularized method of direct model editing as a means of correcting factual errors in LLM generations. We contrast model editing with three similar but distinct approaches that pursue better-defined objectives: (1) retrieval-based architectures, which decouple factual memory from the inference and linguistic capabilities embodied in LLMs; (2) concept erasure methods, which aim to prevent systemic bias in generated text; and (3) attribution methods, which aim to ground generations in identified textual sources. We argue that direct model editing cannot be trusted as a systematic remedy for the disadvantages inherent in LLMs, and while it has proven potential for improving model explainability, it opens r...
Knowledge Editing (KE) for modifying factual knowledge in Large Language Models (LLMs) has been rece...
We present an empirical evaluation of various outputs generated by nine of the most widely-available...
The reusability of state-of-the-art Pre-trained Language Models (PLMs) is often limited by their gen...
Even the largest neural networks make errors, and once-correct predictions can become invalid as the...
Large Language Models (LLMs) make natural interfaces to factual knowledge, but their usefulness is l...
The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread ...
Language models learn a great quantity of factual information during pretraining, and recent work lo...
Large sequence-to-sequence models for tasks such as Neural Machine Translation (NMT) are usually tra...
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tas...
In real-world scenarios with naturally occurring datasets, reference summaries are noisy and may con...
Large Language Models (LLMs) are increasingly used for accessing information on the web. Their truth...
Large language models (LLMs) have achieved remarkable advancements in the field of natural language ...
Large language models (LLMs) have exploded in popularity in the past few years and have achieved und...
Large Language Models (LLMs) like the GPT and LLaMA families have demonstrated exceptional capabilit...
Language models, given their black-box nature, often exhibit sensitivity to input perturbations, lea...