Abstract—Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose, a counterfactual explainer needs to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action an output change in the decision is an interesting challenge for counterfactual explainers. The DisCERN algorithm introduced in this paper is a case-based counterfactual explainer. Here, counterfactuals are formed by replacing feature values from a nearest unlike neighbour (NUN) until an actionable change is observed. We show how widely adopted feature relevance-based explainers (i.e. LIME, SHAP) can inform DisCERN to identify the minimum subset of “actionable features”.
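To make the idea concrete, the following is a minimal sketch of NUN-based counterfactual construction guided by feature relevance weights, as described above. It assumes numerical features and uses a random forest's feature importances as a stand-in for LIME/SHAP relevance scores; the function name `nun_counterfactual` and the unnormalised Euclidean distance are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch: copy feature values from the nearest unlike neighbour (NUN) into the
# query, most relevant feature first, until the predicted class changes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

def nun_counterfactual(query, X_train, y_train, clf, relevance):
    pred = clf.predict(query.reshape(1, -1))[0]
    # Candidates with a class different from the query's prediction.
    unlike = X_train[y_train != pred]
    # Nearest unlike neighbour under Euclidean distance (no scaling, for brevity).
    nun = unlike[np.argmin(np.linalg.norm(unlike - query, axis=1))]
    cf = query.copy()
    for f in np.argsort(-np.abs(relevance)):          # descending relevance order
        cf[f] = nun[f]
        if clf.predict(cf.reshape(1, -1))[0] != pred:  # actionable change observed
            break
    return cf, np.flatnonzero(cf != query)             # counterfactual and changed features

# Toy usage on a public dataset; feature importances stand in for LIME/SHAP weights.
X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)
cf, changed = nun_counterfactual(X[0], X, y, clf, clf.feature_importances_)
print(f"{len(changed)} feature(s) changed to flip the prediction")
```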