CHI ’22, April 29-May 5, 2022, New Orleans, LA, USA. © 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9157-3/22/04. https://doi.org/10.1145/3491102.3517522

Model explanations such as saliency maps can improve user trust in AI by highlighting the features that are important for a prediction. However, these explanations become distorted and misleading when they are computed for images subject to systematic error (bias). Furthermore, the distortions persist even after the model is fine-tuned on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias ...
Understanding and explaining the mistakes made by trained models is critical to many machine learnin...
In this paper, two new learning-based eXplainable AI (XAI) methods for deep convolutional neural netw...
The deployment of machine learning (ML)-based software has raised serious concerns about its pervasive an...
AI explainability improves the transparency and trustworthiness of models. However, in the domain of...
The problem of algorithmic bias in machine learning has gained a lot of attention in recent years du...
Despite the potential for unknown deficiencies and biases, the takeover of critical tasks by AI machin...
In recent decades, artificial intelligence (AI) systems have become increasingly ubiquitous, from lo...
Bias detection in the computer vision field is a necessary task for achieving fair models. These biase...
Deep learning models often learn to make predictions that rely on sensitive social attributes like g...
Recent discoveries have revealed that deep neural networks might behave in a biased manner in many r...
Deep neural networks (DNNs), despite the impressive generalization ability of over-capacity networks, ...
Vision Transformer (ViT) has recently gained significant interest in solving computer vision (CV) pr...
Bias in classifiers is a severe issue in modern deep learning methods, especially for their applicat...
We present results from a pilot experiment to measure whether machine recommendations can debias human pe...