Methods addressing spurious correlations, such as Just Train Twice (JTT, arXiv:2107.09044v2), reweight a subset of the training set to maximize worst-group accuracy. However, the reweighted subset may contain unlearnable examples that hamper the model's learning. We propose mitigating this by detecting outliers in the training set and removing them before reweighting. Our experiments show that our method achieves accuracy competitive with or better than JTT, and that it can detect and remove annotation errors in the subset that JTT reweights.
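The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn, uses `IsolationForest` as a stand-in outlier detector and `LogisticRegression` as a stand-in classifier, and all hyperparameters (`upweight`, `contamination`) are illustrative.

```python
# Hypothetical sketch: JTT-style reweighting with an outlier-removal step.
# The detector, classifier, and hyperparameters are assumptions for
# illustration, not the configuration used in the paper.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

def jtt_with_outlier_removal(X, y, upweight=5.0, contamination=0.05, seed=0):
    # Stage 1 (as in JTT): fit an identification model with plain ERM
    # and collect the examples it misclassifies (the "error set").
    ident = LogisticRegression(max_iter=1000).fit(X, y)
    error_set = ident.predict(X) != y

    # Proposed step: drop error-set points flagged as outliers
    # (e.g. annotation errors) before upweighting them.
    detector = IsolationForest(contamination=contamination, random_state=seed)
    inliers = detector.fit_predict(X) == 1
    upweighted = error_set & inliers

    # Stage 2 (as in JTT): retrain with the surviving error-set
    # examples upweighted.
    weights = np.where(upweighted, upweight, 1.0)
    final = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
    return final, upweighted

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
model, upweighted = jtt_with_outlier_removal(X, y)
```

The key difference from plain JTT is the intersection `error_set & inliers`: only misclassified examples that the detector also considers inliers receive extra weight, so isolated or mislabeled points cannot dominate the second training stage.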