Due to the black-box nature of deep learning models, methods for explaining the models' results are crucial for gaining human trust and supporting collaboration between AI systems and humans. In this paper, we consider several model-agnostic and model-specific explanation methods for CNNs for text classification and conduct three human-grounded evaluations, each focusing on a different purpose of explanations: (1) revealing model behavior, (2) justifying model predictions, and (3) helping humans investigate uncertain predictions. The results highlight the dissimilar qualities of the explanation methods we consider and show the degree to which each method can serve each purpose.
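For illustration only (this sketch is not taken from the paper), the snippet below shows the kind of model-agnostic explanation such evaluations target: a simple word-omission score that measures how much the predicted class probability drops when each word is removed from the input. The predict_proba function is a toy stand-in for a wrapped CNN text classifier and is an assumption of this sketch, not part of the original work.

```python
# Minimal sketch of a model-agnostic, omission-based word relevance score.
# Assumption: predict_proba is a toy stand-in for any text classifier's
# probability function (e.g. a wrapped CNN); it is not from the paper.
from typing import Callable, List, Tuple

def predict_proba(texts: List[str]) -> List[List[float]]:
    # Toy classifier: probability of the "positive" class rises with the
    # count of a few hard-coded positive words.
    pos_words = {"good", "great", "excellent"}
    probs = []
    for t in texts:
        hits = sum(w in pos_words for w in t.lower().split())
        p_pos = min(0.5 + 0.2 * hits, 0.99)
        probs.append([1.0 - p_pos, p_pos])
    return probs

def omission_scores(text: str,
                    predict: Callable[[List[str]], List[List[float]]],
                    target_class: int = 1) -> List[Tuple[str, float]]:
    # Score each word by the drop in target-class probability when it is
    # removed; larger drops indicate more influential words.
    words = text.split()
    base = predict([text])[0][target_class]
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - predict([reduced])[0][target_class]))
    return sorted(scores, key=lambda s: s[1], reverse=True)

print(omission_scores("the food was great and the service excellent", predict_proba))
```

A usage note: real human-grounded evaluations would compare such scores (and those of other methods) against human judgments of which words justify the prediction, rather than inspecting them in isolation.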
Modern machine learning methods allow for complex and in-depth analytics, but the predictive models ...
Layer-wise Relevance Propagation (LRP) and saliency maps have been recently used to explain the pred...
Although AI and neural network models have undergone an overwhelming evolution during the past decade, their app...
Despite the high accuracy offered by state-of-the-art deep natural-language models (e.g., LSTM, BERT...
We propose an approach to faithfully explaining text classification models, using a specifically des...
In recent decades, artificial intelligence (AI) systems have become increasingly ubiquitous, from lo...
In the past decade, natural language processing (NLP) systems have come to be built almost exclusive...
The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explor...
With more data and computing resources available these days, we have seen many novel Natural Languag...
Issues regarding explainable AI involve four components: users, laws a...
Many explanation methods have been proposed to reveal insights about the internal procedures of blac...
This paper evaluates whether training a decision tree based on concepts extracted from a concept-bas...
Thesis (Ph.D.), University of Washington, 2018. Despite many successes, complex machine learning syste...
As the use of deep learning techniques has grown across various fields over the past decade, complai...
As deep learning methods have obtained tremendous success over the years, our understanding of these...