In this work, we analyse the visualisation methods implemented in ProtoPNet and ProtoTree, two self-explaining visual classifiers based on prototypes. We show that these methods do not correctly identify the regions of interest inside the images, and therefore do not reflect the model's behaviour, which can create a false sense of bias in the model. We also demonstrate quantitatively that this issue can be mitigated by using other saliency methods that provide more faithful image patches.
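To make the critique concrete, the sketch below illustrates the kind of patch-visualisation procedure under discussion: a low-resolution prototype similarity map is upsampled to image resolution and the bounding box of the top-percentile activations is taken as the "region of interest". This is a minimal, simplified reconstruction (nearest-neighbour upsampling instead of bilinear; the function name and percentile default are our own), not the exact implementation analysed in the paper.

```python
import numpy as np

def extract_patch_bbox(sim_map, image_hw, percentile=95):
    """Simplified sketch of prototype-patch visualisation.

    sim_map: 2D array of prototype similarity scores over the
             convolutional feature map (e.g. 7x7).
    image_hw: (height, width) of the input image.

    The low-resolution map is upsampled to image size
    (nearest-neighbour here for simplicity) and the bounding box
    of the top-`percentile` activations is returned as
    (y_min, y_max, x_min, x_max).
    """
    H, W = image_hw
    h, w = sim_map.shape
    # Nearest-neighbour upsampling: each feature-map cell covers
    # an (H // h) x (W // w) block of image pixels.
    up = np.repeat(np.repeat(sim_map, H // h, axis=0), W // w, axis=1)
    thresh = np.percentile(up, percentile)
    ys, xs = np.nonzero(up >= thresh)
    return ys.min(), ys.max(), xs.min(), xs.max()
```

Because the similarity map is computed on a coarse grid and then naively upsampled, the resulting box can cover image regions the model never actually attended to; this spatial imprecision is one reason such visualisations may fail to faithfully reflect model behaviour.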