In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance. While prior work focuses on extractive rationales (a subset of the input words), we investigate their less-studied counterpart: free-text natural language rationales. We demonstrate that pipelines, existing models for faithful extractive rationalization on information-extraction style tasks, do not extend as reliably to "reasoning" tasks requiring free-text rationales. We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established. We define label-rationale association as a necessary property for faithfu...
With recent advances in natural language processing, rationalization becomes an essential self-expla...
While large pre-trained language models are powerful, their predictions often lack logical consisten...
Thesis (Ph.D.)--University of Washington, 2020. For machines to understand language, they must intuiti...
Neural language models (LMs) have achieved impressive results on various language-based reasoning ta...
Models that generate extractive rationales (i.e., subsets of features) or natural language explanati...
A growing line of work has investigated the development of neural NLP models that can produce ration...
In order to build reliable and trustworthy NLP applications, models need to be both fair across diff...
This thesis focuses on model interpretability, an area concerned with understanding model predicti...
An extractive rationale explains a language model's (LM's) prediction on a given task instance by hi...
Recent research on model interpretability in natural language processing extensively uses feature sc...
Large language models (LMs) beyond a certain scale demonstrate the emergent capability of generatin...
End-to-end neural NLP architectures are notoriously difficult to understand, which gives rise to num...
In the past decade, natural language processing (NLP) systems have come to be built almost exclusive...
In recent years, deep learning models have become very powerful – even outperforming humans on a va...