A machine learning model, under the influence of observed or unobserved confounders in the training data, can learn spurious correlations and fail to generalize when deployed. For image classifiers, augmenting a training dataset using counterfactual examples has been empirically shown to break spurious correlations. However, the counterfactual generation task itself becomes more difficult as the level of confounding increases. Existing methods for counterfactual generation under confounding consider a fixed set of interventions (e.g., texture, rotation) and are not flexible enough to capture diverse data-generating processes. Given a causal generative process, we formally characterize the adverse effects of confounding on any downstream tas...
One of the primary challenges limiting the applicability of deep learning is its susceptibility to l...
Counterfactual explanations are a prominent example of post-hoc interpretability methods in the expl...
This paper addresses the challenge of generating Counterfactual Explanations (CEs), involving the id...
Counterfactual examples for an input - perturbations that change specific features but not others - ...
Spurious correlations threaten the validity of statistical classifiers. While model accuracy may app...
Despite their high accuracies, modern complex image classifiers cannot be trusted for sensitive task...
Counterfactual explanations are viewed as an effective way to explain machine learning predictions. ...
Nowadays, machine learning is being applied in various domains, including safety-critical areas, whi...
Visual counterfactual explanations identify modifications to an image that would change the predicti...
Confounders in deep learning are generally detrimental to a model's generalization, where they infiltr...
As statistical classifiers become integrated into real-world applications, it is important to consid...
Notions of counterfactual invariance (CI) have proven essential for predictors that are fair, robust...
Counterfactual explanations are viewed as an effective way to explain machine learning predictions. ...
Variational autoencoders (VAEs) and other generative methods have garnered growing interest not just...
Counterfactual explanations are gaining popularity as a way of explaining machine learning models. C...