Self-explaining deep models are designed to learn latent concept-based explanations implicitly during training, eliminating the need for any post-hoc explanation generation technique. In this work, we propose one such model that appends an explanation generation module on top of any basic network and trains the whole system jointly, achieving high predictive performance while generating meaningful explanations in terms of concepts. Our training strategy supports unsupervised concept learning with a much smaller parameter footprint than baseline methods. The proposed model can also leverage self-supervision on concepts to extract better explanations. However, with full concept supervision, we a...
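As a rough illustration only (the abstract gives no implementation details), the sketch below shows, in a minimal PyTorch-style setup, what "appending an explanation generation module on top of a basic network and training jointly" could look like. The ConceptExplainerHead, the sigmoid concept activations, and the sparsity weight are hypothetical choices for the sketch, not the authors' actual design.

```python
import torch
import torch.nn as nn


class ConceptExplainerHead(nn.Module):
    """Hypothetical explanation-generation module appended on a base encoder.

    Maps base-network features to latent concept scores and predicts the class
    from those concepts, so every prediction comes with concept activations
    that serve as its explanation.
    """

    def __init__(self, feature_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        self.concept_scorer = nn.Linear(feature_dim, num_concepts)  # latent concepts
        self.classifier = nn.Linear(num_concepts, num_classes)      # predict from concepts

    def forward(self, features: torch.Tensor):
        concepts = torch.sigmoid(self.concept_scorer(features))  # concept activations in [0, 1]
        logits = self.classifier(concepts)
        return logits, concepts


class SelfExplainingModel(nn.Module):
    """Base network plus explanation head, trained jointly end to end."""

    def __init__(self, base: nn.Module, feature_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        self.base = base
        self.head = ConceptExplainerHead(feature_dim, num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        return self.head(self.base(x))


if __name__ == "__main__":
    # Toy base network standing in for "any basic network".
    base = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
    model = SelfExplainingModel(base, feature_dim=128, num_concepts=10, num_classes=5)

    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 5, (8,))
    logits, concepts = model(x)

    # Joint objective: task loss plus a sparsity term encouraging concise
    # concept explanations (the 0.01 weight is an illustrative choice).
    loss = nn.functional.cross_entropy(logits, y) + 0.01 * concepts.abs().mean()
    loss.backward()
    print(loss.item(), concepts.shape)
```

With full concept supervision, the same head could instead be trained with an explicit loss between the predicted concept activations and annotated concept labels; the unsupervised sparsity term above merely stands in for that signal when no concept annotations are available.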
Similarity-based learning, which involves largely structural comparisons of instances, and explanati...
Since the introduction of the term explainable artificial intelligence (XAI), many contrasting defin...
Artificial Intelligence (AI) systems are increasingly dependent on machine learning models which la...
Traditional deep learning interpretability methods which are suitable for model users cannot explain...
This paper evaluates whether training a decision tree based on concepts extracted from a concept-bas...
In this paper, we propose an elegant solution that is directly addressing the bottlenecks of the tra...
The importance of explaining the outcome of a machine learning model, especially a black-box model, ...
Due to the black-box nature of deep learning models, methods for explaining the models’ results are ...
Deep Neural Network (DNN) models are challenging to interpret because of their highly complex and no...
Current machine learning models have shown high efficiency in solving a wide variety of real-world p...
Many explanation methods have been proposed to reveal insights about the internal procedures of blac...
An important line of research attempts to explain CNN image classifier predictions and intermediate ...
We present VeriX, a system for producing optimal robust explanations and generating counterfactuals ...
Deep neural networks (DNNs) can perform impressively in many natural language processing (NLP) tasks...