ProtoPNet and its follow-up variants (ProtoPNets) have attracted broad research interest for the intrinsic interpretability offered by their prototypes and for accuracy comparable to non-interpretable counterparts. However, it has recently been found that the interpretability of prototypes can be corrupted by the semantic gap between similarity in latent space and similarity in input space. In this work, we make the first attempt to quantitatively evaluate the interpretability of prototype-based explanations, rather than relying solely on qualitative evaluation with a few visualization examples, which can easily be misled by cherry-picking. To this end, we propose two evaluation metrics, termed the consistency score and stability score, to evaluate the explanation consistenc...
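As a rough illustration of how such a prototype-level metric could be computed, the sketch below scores a prototype as consistent if its highest-activation region falls on the same annotated object part across most of its test images. The data layout, helper names (part_of_max_activation, consistency_score), and the 0.8 threshold are assumptions made for illustration, not the metric definition from the paper.

```python
import numpy as np

def part_of_max_activation(activation_map, part_masks):
    # Index of the annotated part whose mask covers the prototype's
    # highest-activation location; -1 if no part covers it.
    h, w = activation_map.shape
    y, x = np.unravel_index(np.argmax(activation_map), (h, w))
    hits = [mask[y, x] for mask in part_masks]
    return int(np.argmax(hits)) if np.any(hits) else -1

def consistency_score(activation_maps, part_masks_per_image, threshold=0.8):
    # activation_maps: per prototype, a list of 2D maps (one per test image);
    # part_masks_per_image: per image, a list of binary part masks.
    consistent = 0
    for maps in activation_maps:                      # iterate over prototypes
        parts = [part_of_max_activation(m, masks)
                 for m, masks in zip(maps, part_masks_per_image)]
        parts = [p for p in parts if p >= 0]
        if parts and np.bincount(parts).max() / len(parts) >= threshold:
            consistent += 1
    return consistent / len(activation_maps)

# Toy usage with random maps and two dummy part masks per image.
rng = np.random.default_rng(0)
maps = [[rng.random((7, 7)) for _ in range(5)] for _ in range(3)]
masks = [[np.ones((7, 7), bool), np.zeros((7, 7), bool)] for _ in range(5)]
print(consistency_score(maps, masks))
```

A stability-style score could follow the same pattern, comparing the attended part before and after a small perturbation of the input.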
Machine learning models often exhibit complex behavior that is difficult to understand. Recent resea...
Deep Neural Network (DNN) models are challenging to interpret because of their highly complex and no...
Explaining artificial intelligence (AI) predictions is increasingly important and even imperative in...
Explaining black-box Artificial Intelligence (AI) models is a cornerstone for trustworthy AI and a p...
Unexplainable black-box models create scenarios where anomalies cause deleterious responses, thus cr...
Prototypical part neural networks (ProtoPartNNs), namely PROTOPNET and its derivatives, are an intri...
Image recognition with prototypes is considered an interpretable alternative for black box deep lear...
We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared...
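The sketch below illustrates, under stated assumptions, one way a shared pool of prototypes can be wired to class logits: each class owns a few slots, and each slot softly selects a prototype from the common pool via a Gumbel-Softmax relaxation. The slot structure, pool size, and temperature are illustrative choices, not ProtoPool's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPrototypePool(nn.Module):
    def __init__(self, num_classes=5, slots_per_class=3, pool_size=20, tau=0.5):
        super().__init__()
        # Each class owns a few slots; each slot softly picks one prototype
        # from the shared pool through these assignment logits.
        self.assign_logits = nn.Parameter(torch.randn(num_classes, slots_per_class, pool_size))
        self.tau = tau

    def forward(self, proto_similarities):
        # proto_similarities: (batch, pool_size) similarity to every pooled prototype.
        assign = F.gumbel_softmax(self.assign_logits, tau=self.tau, dim=-1)
        # Class logit = sum over its slots of the similarity to the chosen prototype.
        return torch.einsum('bp,csp->bc', proto_similarities, assign)

pool = SharedPrototypePool()
sims = torch.rand(4, 20)
print(pool(sims).shape)   # torch.Size([4, 5])
```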
Current machine learning models have shown high efficiency in solving a wide variety of real-world p...
We present a deformable prototypical part network (Deformable ProtoPNet), an interpretable image cla...
Prototypical methods have recently gained a lot of attention due to their intrinsic interpretable na...
In the field of Explainable AI, multiple evaluation metrics have been proposed in order to assess t...
Many explanation methods have been proposed to reveal insights about the internal procedures of blac...
Prototype networks (Li et al. 2018) provide explanations to users using a prot...
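For context, the sketch below shows the general shape of such a prototype layer: latent patch encodings are compared to learned prototype vectors, and distances are converted into similarity scores that would feed a linear classifier. The tensor shapes and the log-based distance-to-similarity transform are illustrative assumptions rather than a reproduction of Li et al. (2018) or ProtoPNet.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, num_prototypes=10, dim=128, eps=1e-4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.eps = eps

    def forward(self, z):
        # z: (batch, num_patches, dim) latent patch encodings from a backbone.
        protos = self.prototypes.unsqueeze(0).expand(z.size(0), -1, -1)
        d = torch.cdist(z, protos) ** 2               # squared L2 distances
        d_min = d.min(dim=1).values                   # closest patch per prototype
        return torch.log((d_min + 1.0) / (d_min + self.eps))  # distance -> similarity

layer = PrototypeLayer()
z = torch.randn(2, 49, 128)
print(layer(z).shape)   # torch.Size([2, 10]); one similarity score per prototype
```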