An important line of research attempts to explain CNN image-classifier predictions and intermediate-layer representations in terms of human-understandable concepts. In this work, we expand on previous work that uses annotated concept datasets to extract interpretable feature-space directions, and we propose an unsupervised post-hoc method that extracts a disentangling interpretable basis by searching for the rotation of the feature space that best explains sparse, one-hot, thresholded transformed representations of pixel activations. We experiment with existing popular CNNs and demonstrate that our method extracts an interpretable basis across network architectures and training datasets. We make extensions to ...
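As a rough, assumption-laden sketch of the kind of optimization described above (not the paper's actual objective or code), the snippet below learns an orthogonal rotation of pixel-level CNN activations and penalizes rotated, thresholded feature vectors that are not sparse and approximately one-hot. The helper `extract_pixel_activations`, the threshold `tau`, and the particular loss form are illustrative placeholders.

```python
# Minimal sketch: learn an orthogonal rotation R of a CNN feature space so
# that thresholded, rotated pixel activations become sparse and roughly
# one-hot. Names marked "assumed" are placeholders, not the paper's API.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class RotatedBasis(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # Square linear map constrained to be orthogonal, i.e. a rotation
        # (up to reflection) of the original feature space.
        self.rotation = orthogonal(nn.Linear(num_channels, num_channels, bias=False))

    def forward(self, acts: torch.Tensor) -> torch.Tensor:
        # acts: (num_pixels, num_channels) feature vectors gathered from the
        # spatial positions of an intermediate CNN layer.
        return self.rotation(acts)

def one_hot_sparsity_loss(z: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Encourage each thresholded, rotated pixel vector to activate mostly a
    single basis direction: penalize total activation mass while keeping the
    strongest component."""
    h = torch.relu(z - tau)                       # thresholded representation
    return (h.sum(dim=1) - h.max(dim=1).values).mean()

def extract_pixel_activations() -> torch.Tensor:
    # Assumed helper: would return pixel-level activations from a layer of a
    # pretrained CNN over a pool of images; random data used here.
    return torch.randn(4096, 512)

acts = extract_pixel_activations()
model = RotatedBasis(acts.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    opt.zero_grad()
    loss = one_hot_sparsity_loss(model(acts))
    loss.backward()
    opt.step()

# Rows of the learned weight matrix are candidate interpretable directions.
basis = model.rotation.weight.detach()
```

Keeping the learned map orthogonal means the new directions remain a rotation of the original feature space, so each row of the resulting weight matrix can be read as one candidate interpretable basis direction.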
The field of deep learning is evolving in different directions, with still the need for more efficie...
Deep learning explainability is often reached by gradient-based approaches that attribute the networ...
Artificial Intelligence (AI) is increasingly affecting people’s lives. AI is even employed in fields...
Convolutional neural network (CNN) models for computer vision are powerful but lack explainability i...
This paper evaluates whether training a decision tree based on concepts extracted from a concept-bas...
Providing interpretability of deep-learning models to non-experts, while fundamental for a responsib...
Convolutional neural networks are being increasingly used in critical systems, where ensuring their ...
Recent experiments in computer vision demonstrate texture bias as the primary reason for supreme res...
Recent research in deep learning methodology has led to a variety of complex modelling techniques in...
Explanations of the decisions made by a deep neural network are important for human end-users to be ...
Traditional deep learning interpretability methods which are suitable for model users cannot explain...
Safety-critical applications (e.g., autonomous vehicles, human-machine teaming, and automated medica...
Deep Learning has attained state-of-the-art performance in recent years, but it is still hard to...
The success of recent deep convolutional neural networks (CNNs) depends on learning hidden represent...
Neural networks (NNs) have reached remarkable performance in computer vision. However, numerous para...