Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black-box decision systems is now a crucial topic. We address the problem of providing meaningful explanations for the widely applied task of image classification. In particular, we explore the impact of changing the neighborhood generation function of a local interpretable model-agnostic explainer by proposing four variants. All the proposed methods rely on a grid-based segmentation of the images, but each adopts a different strategy for generating the neighborhood of the image for which an explanation is required. Extensive experiments show both the improvements and weaknesses of each ...
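As a rough illustration of the idea described above, the following sketch generates a perturbation neighborhood from a grid-based segmentation: the image is split into a regular grid of cells, and each neighbor is produced by switching a random subset of cells off. This is a minimal, hypothetical example of the general LIME-style scheme, not the authors' specific variants; the function names, the choice of zero as the "off" value, and the uniform random cell sampling are all assumptions.

```python
import numpy as np

def grid_segments(h, w, grid=4):
    """Assign each pixel of an h x w image to one of grid*grid cells."""
    rows = np.minimum(np.arange(h) * grid // h, grid - 1)
    cols = np.minimum(np.arange(w) * grid // w, grid - 1)
    return rows[:, None] * grid + cols[None, :]  # (h, w) cell ids

def generate_neighborhood(image, n_samples=10, grid=4, seed=None):
    """Generate perturbed copies of `image` by masking random grid cells.

    Returns the perturbed images and the binary cell-activation vectors
    that a local surrogate model would be trained on.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    seg = grid_segments(h, w, grid)
    # one binary on/off decision per grid cell, per sample
    masks = rng.integers(0, 2, size=(n_samples, grid * grid))
    samples = []
    for m in masks:
        keep = m[seg].astype(bool)        # per-pixel keep mask from cell mask
        samples.append(image * keep[..., None])  # zero out the "off" cells
    return np.stack(samples), masks
```

Each returned mask row is the interpretable representation of its perturbed image; in a LIME-style pipeline, the black-box model would be queried on the perturbed images and a weighted linear model fit on the masks. The four variants discussed in the abstract would differ precisely in how `masks` (and hence the perturbed images) are drawn.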
Deep Learning has attained state-of-the-art performance in the recent years, but it is still hard to...
We present CartoonX (Cartoon Explanation), a novel model-agnostic explanation method tailored toward...
Locally interpretable model agnostic explanations (LIME) method is one of the most popular methods u...
Defining a representative locality is an urgent challenge in perturbation-based explanation methods,...
As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as ...
As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as ...
Understanding and interpreting classification decisions of automated image classification systems is...
For image recognition tasks, prediction models with ad-hoc structures and high performances have bee...
The importance of the neighborhood for training a local surrogate model to approximate the local dec...
Understanding and interpreting classification decisions of automated image classification systems is...
An important step towards explaining deep image classifiers lies in the identification of image regi...
The rise of sophisticated machine learning models has brought accurate but obscure decision systems,...
Despite widespread adoption of machine learning models in automatic decision making, many of them re...
Despite the widespread use, machine learning methods produce black box models. It is hard to underst...