Current machine learning models have shown high efficiency in solving a wide variety of real-world problems. However, their black-box character poses a major challenge for the comprehensibility and traceability of the underlying decision-making strategies. As a remedy, numerous post-hoc and self-explanation methods have been developed to interpret the models' behavior. These methods additionally enable the identification of artifacts that, inherent in the training data, can be erroneously learned by the model as class-relevant features. In this work, we provide a detailed case study of a representative state-of-the-art self-explaining network, ProtoPNet, in the presence of a spectrum of artifacts. Accordingly, we identify the main ...
Deep neural networks (DNNs) can perform impressively in many natural language processing (NLP) tasks...
The importance of explaining the outcome of a machine learning model, especially a black-box model, ...
We present VeriX, a system for producing optimal robust explanations and generating counterfactuals ...
ProtoPNet and its follow-up variants (ProtoPNets) have attracted broad research interest for their i...
Unexplainable black-box models create scenarios where anomalies cause deleterious responses, thus cr...
© 2018 Curran Associates Inc. All rights reserved. Most recent work on interpretability of complex ma...
Robustness has become an important consideration in deep learning. With the help of explainable AI, ...
Machine learning approaches have enabled increasingly powerful time series classifiers. While perfor...
Self-explaining deep models are designed to learn the latent concept-based explanations implicitly d...
Machine learning models often exhibit complex behavior that is difficult to understand. Recent resea...
A high-velocity paradigm shift towards Explainable Artificial Intelligence (XAI) has emerged in rece...
Deep Neural Networks (DNNs) are known as black box algorithms that lack transparency and interpretabi...