Large language models have achieved high performance on various question answering (QA) benchmarks, but the explainability of their output remains elusive. Structured explanations, called entailment trees, were recently suggested as a way to explain and inspect a QA system's answer. In order to better generate such entailment trees, we propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises. The IRGR model iteratively searches for suitable premises, constructing a single entailment step at a time. Contrary to previous approaches, our method combines generation steps and retrieval of premises, a...
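The abstract above describes IRGR only at a high level (iteratively retrieve premises, then generate one entailment step at a time until the hypothesis is explained). The following is a minimal sketch of such an iterative retrieve-then-generate loop under assumed interfaces; all names here (retrieve_premises, generate_step, entails, EntailmentStep, ProofState) are hypothetical placeholders for illustration, not the authors' actual implementation or API.

```python
# Hypothetical sketch of an iterative retrieval-generation loop for building
# an entailment tree. The retrieval, generation, and entailment-check models
# are passed in as callables; their concrete form is an assumption.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class EntailmentStep:
    premises: List[str]   # sentences combined in this step
    conclusion: str       # intermediate conclusion they jointly entail


@dataclass
class ProofState:
    hypothesis: str
    steps: List[EntailmentStep] = field(default_factory=list)

    def derived_facts(self) -> List[str]:
        return [s.conclusion for s in self.steps]


def build_entailment_tree(
    hypothesis: str,
    retrieve_premises: Callable[[str, List[str]], List[str]],          # (query, derived facts) -> candidate premises
    generate_step: Callable[[str, List[str]], Tuple[List[str], str]],  # (hypothesis, candidates) -> (used premises, conclusion)
    entails: Callable[[str, str], bool],                               # (conclusion, hypothesis) -> stop?
    max_steps: int = 10,
) -> ProofState:
    """Alternate premise retrieval and single-step generation until an
    intermediate conclusion entails the hypothesis or the step budget runs out."""
    state = ProofState(hypothesis)
    for _ in range(max_steps):
        # Retrieval is conditioned on the hypothesis and on what has been derived so far.
        candidates = retrieve_premises(hypothesis, state.derived_facts())
        used, conclusion = generate_step(hypothesis, candidates + state.derived_facts())
        state.steps.append(EntailmentStep(used, conclusion))
        if entails(conclusion, hypothesis):
            break
    return state
```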
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought pro...
We present an approach for systematic reasoning that produces human interpretable proof trees ground...
In this paper, we describe precedent-based explanations for case-based classification systems. Previ...
Entailment trees have been proposed to simulate the human reasoning process of explanation generatio...
In settings from fact-checking to question answering, we frequently want to know whether a collectio...
Recent work has shown that inducing a large language model (LLM) to generate explanations prior to o...
The diversity and Zipfian frequency distribution of natural language predicates in corpora leads to ...
Models that generate extractive rationales (i.e., subsets of features) or natural language explanati...
A growing body of work studies how to answer a question or verify a claim by generating a natural la...
Knowledge-intensive tasks, such as open-domain question answering (QA), require access to a large am...
In this position paper, we propose a way of exploiting formal proofs to put forward several explaina...
Retrieval-augmented language models (RALMs) represent a substantial advancement in the capabilities ...
We present a new dataset and model for textual entailment, derived from treating multiple-choice que...
Logical reasoning remains a pivotal component within the realm of artificial intelligence. The recen...
Natural language understanding (NLU) of text is a fundamental challenge in AI, and it has received s...