This paper introduces CLEAR Image, a novel explainable AI method. CLEAR Image is based on the view that a satisfactory explanation should be contrastive, counterfactual and measurable. It explains an image's classification probability by contrasting the image with a corresponding image generated automatically via adversarial learning. This enables both salient segmentation and perturbations that faithfully determine each segment's importance. Applied to a medical imaging case study, CLEAR Image outperformed methods such as Grad-CAM and LIME by an average of 27% on a novel pointing game metric. CLEAR Image excels at identifying cases of "causal overdetermination", where there are multiple patch...
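For context, the standard pointing game metric used in saliency evaluation checks whether the most salient pixel of an explanation falls inside a ground-truth annotation mask; the abstract's "novel pointing game metric" is a variant of this idea, whose details are not given here. A minimal sketch of the standard version, assuming saliency maps and boolean masks as NumPy arrays:

```python
import numpy as np

def pointing_game_hit(saliency, mask):
    """Return True if the most salient pixel lies inside the annotated region."""
    idx = np.unravel_index(np.argmax(saliency), saliency.shape)
    return bool(mask[idx])

def pointing_game_accuracy(saliencies, masks):
    """Fraction of images whose saliency maximum hits the ground-truth mask."""
    hits = sum(pointing_game_hit(s, m) for s, m in zip(saliencies, masks))
    return hits / len(saliencies)
```

The function names and array conventions above are illustrative assumptions, not taken from the CLEAR Image paper.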
We present a simple regularization of adversarial perturbations based upon the perceptual loss. Whil...
With the recent surge of state-of-the-art AI systems, there has been a growing need to provide expla...
Machine learning plays a role in many deployed decision systems, often in ways that are difficult or...
With the ongoing rise of machine learning, the need for methods for explaining decisions made by art...
Counterfactual examples for an input - perturbations that change specific features but not others - ...
There is a growing concern that the recent progress made in AI, especially regarding the predictive ...
The same method that creates adversarial examples (AEs) to fool image-classifiers can be used to gen...
Visual counterfactual explanations identify modifications to an image that would change the predicti...
Despite their high accuracies, modern complex image classifiers cannot be trusted for sensitive task...
Machine learning models are widely used in various industries. However, the black-box nature of the ...
Despite their potential unknown deficiencies and biases, the takeover of critical tasks by AI machin...
This study investigates the impact of machine learning models on the generation of counterfactual ex...
A visual counterfactual explanation replaces image regions in a query image with regions from a dist...
To be published in ICPR 2020. Explaining decisions of black-box classifiers is paramount in sensitive ...
This paper addresses the challenge of generating Counterfactual Explanations (CEs), involving the id...