Deep neural networks (DNNs), despite their impressive ability to generalize with over-capacity networks, often rely heavily on malignant bias as shortcuts instead of task-related information for discriminative tasks. To address this problem, recent studies utilize auxiliary information related to the bias, which is rarely obtainable in practice, or sift through a handful of bias-free samples for debiasing. However, the success of these methods is not always guaranteed due to unfulfilled presumptions. In this paper, we propose a novel method, Contrastive Debiasing via Generative Bias-transformation (CDvG), which works without explicit bias labels or bias-free samples. Motivated by our observation that not only discriminative models but also i...
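The abstract is cut off before the method details, but the name suggests contrastive learning between an image and a bias-transformed view produced by a generative model. Below is a minimal, hypothetical sketch of that general idea in PyTorch; the toy encoder, the placeholder bias-transformation model, the InfoNCE loss, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of contrastive debiasing with a generative bias transformation.
# All module names (Encoder, bias_transform) and hyperparameters are assumptions
# for illustration; they are not taken from the CDvG paper itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy convolutional encoder producing a normalized embedding per image."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss treating (z1[i], z2[i]) as the positive pair."""
    logits = z1 @ z2.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

encoder = Encoder()
# Placeholder for a (frozen) generative model that alters the bias attribute of
# an image while preserving its class content; here just an identity stand-in.
bias_transform = nn.Identity()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.randn(16, 3, 32, 32)                          # dummy batch of images
with torch.no_grad():
    x_bias_swapped = bias_transform(x)                  # bias-transformed view

z_orig = encoder(x)
z_swap = encoder(x_bias_swapped)
loss = info_nce(z_orig, z_swap)   # pull an image and its bias-swapped view together
loss.backward()
optimizer.step()
```

The intended effect of such a loss is that the encoder cannot rely on the bias attribute, since the positive pair agrees only on task-related content; whether this matches the actual CDvG objective cannot be verified from the truncated abstract.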
Computer vision algorithms, e.g. for face recognition, favour groups of individuals that are better ...
As deep learning becomes present in many applications, we must consider possible shortcomings of the...
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision r...
Recent discoveries have revealed that deep neural networks might behave in a biased manner in many r...
Deep Learning has achieved tremendous success in recent years in several areas such as image classif...
Bias in classifiers is a severe issue of modern deep learning methods, especially for their applicat...
Natural language understanding (NLU) models often rely on dataset biases rather than intended task-r...
Many datasets are biased, namely they contain easy-to-learn features that are ...
In image classification, debiasing aims to train a classifier to be less susceptible to dataset bias...
Improperly constructed datasets can result in inaccurate inferences. For instance, models trained on...
Neural networks are prone to be biased towards spurious correlations between classes and latent attr...
Neural networks often learn to make predictions that overly rely on spurious correlation existing in...
Deep learning models often learn to make predictions that rely on sensitive social attributes like g...
Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples s...
Deep neural networks do not discriminate between spurious and causal patterns,...