Models notoriously suffer from dataset biases, which are detrimental to robustness and generalization. The identify-emphasize paradigm shows promise in dealing with unknown biases. However, we find that it is still plagued by two challenges: (A) the quality of the identified bias-conflicting samples is far from satisfactory; (B) the emphasizing strategies yield only suboptimal performance. In this work, for challenge A, we propose an effective bias-conflicting scoring method to boost identification accuracy, with two practical strategies -- peer-picking and epoch-ensemble. For challenge B, we point out that gradient contribution statistics can be a reliable indicator to inspect whether the optimization is dominated by bias-al...
Recent discoveries have revealed that deep neural networks might behave in a biased manner in many r...
Deep neural networks (DNNs), despite their impressive ability to generalize over-capacity networks, ...
Classical wisdom in machine learning holds that the generalization error can be decomposed into bias...
Models notoriously suffer from dataset biases which are detrimental to robustness and generalization...
In image classification, debiasing aims to train a classifier to be less susceptible to dataset bias...
Neural networks are prone to be biased towards spurious correlations between classes and latent attr...
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision r...
Neural networks often learn to make predictions that overly rely on spurious correlation existing in...
Deep Learning has achieved tremendous success in recent years in several areas such as image classif...
Biased data represents a significant challenge for the proper functioning of machine learning models...
Improperly constructed datasets can result in inaccurate inferences. For instance, models trained on...
Neural networks often make predictions relying on the spurious correlations from the datasets rather...
Thesis (Ph.D.)--University of Washington, 2020. Modern machine learning algorithms have been able to a...
Machine learning models are biased when trained on biased datasets. Many recent approaches have been...
Bias in classifiers is a severe issue of modern deep learning methods, especially for their applicat...