Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain a model behavior by identifying salient image patches, which require manual efforts from users to make sense of, and also do not typically support model validation with questions that investigate multiple visual concepts. In this paper, we introduce a scalable human-in-the-loop approach for global interpretability. Salient image areas identified by local interpretability methods are annotated with semantic concepts, which are then aggregated into a tabular representation of images to facilitate automatic statistical analysis of model behavior. We show that this approach answers interpretability needs for bot...
The success of deep learning in image recognition is substantially driven by large-scale, well-curat...
Summarization: In this paper a novel approach for automatically annotating image databases is propos...
The recent surge in highly successful, but opaque, machine-learning models has given rise to a dire ...
Deep learning models have achieved state-of-the-art performance on several image classification task...
Providing interpretability of deep-learning models to non-experts, while fundamental for a responsib...
We describe a computational model of humans' ability to provide a detailed interpretation of a...
To enable research on automated alignment/interpretability evaluations, we release the experimental ...
This book compiles leading research on the development of explainable and inte...
A number of visual quality measures have been introduced in visual analytics literature in order to ...
In this paper, we study how to use semantic relationships for image classification in order to impro...
Saliency methods provide post-hoc model interpretation by attributing input features to the model ou...
The increasing impact of black box models, and particularly of unsupervised ones, comes with an incr...
Convolutional neural networks (CNN) are known to learn an image representation...
Finding relations between image semantics and image characteristics is a problem of long standing in...