The performance of machine learning models under distribution shift has been a focus of the community in recent years. Most existing methods approach robustness to distribution shift from the algorithmic perspective, i.e., they design better training algorithms to improve generalization to shifted test distributions. This paper studies the distribution-shift problem from the perspective of pre-training and data augmentation, two important factors in deep learning practice that have not been systematically investigated by prior work. By evaluating seven pre-trained models, including ResNets and ViTs trained with self-supervised and supervised objectives, on five important distribution-shift datasets from WILDS and...
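As a rough illustration of the kind of evaluation described above, the sketch below loads a WILDS benchmark and scores an ImageNet-pretrained ResNet-50 on the out-of-distribution test split. The dataset choice ("iwildcam"), the batch size, and the classifier-head replacement are illustrative assumptions, not the paper's exact protocol; in practice the head would be fine-tuned before evaluation.

```python
# Hedged sketch: scoring an ImageNet-pretrained backbone on a WILDS
# distribution-shift benchmark. Dataset name and hyperparameters are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision.models import resnet50, ResNet50_Weights
from wilds import get_dataset
from wilds.common.data_loaders import get_eval_loader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a WILDS dataset and its out-of-distribution test split.
dataset = get_dataset(dataset="iwildcam", download=True)
test_data = dataset.get_subset(
    "test",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ]),
)
test_loader = get_eval_loader("standard", test_data, batch_size=32)

# ImageNet-pretrained backbone; the classification head is swapped to
# match the dataset's label space (randomly initialized here, so it
# would normally be fine-tuned before this evaluation).
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, dataset.n_classes)
model.to(device).eval()

all_pred, all_true, all_meta = [], [], []
with torch.no_grad():
    for x, y_true, metadata in test_loader:
        logits = model(x.to(device))
        all_pred.append(logits.argmax(dim=-1).cpu())
        all_true.append(y_true)
        all_meta.append(metadata)

# WILDS reports both average and worst-group metrics under the shift.
results, results_str = dataset.eval(
    torch.cat(all_pred), torch.cat(all_true), torch.cat(all_meta)
)
print(results_str)
```

Swapping in a ViT backbone or a self-supervised checkpoint only changes the model-construction lines; the WILDS loading and evaluation code stays the same, which is what makes this style of cross-model comparison straightforward.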
Robustness to natural distribution shifts has seen remarkable progress thanks to recent pre-training...
Recent interest in the external validity of prediction models (i.e., the problem of different train ...
Deployed machine learning (ML) models often encounter new user data that differs from their training...
Designing robust models is critical for reliable deployment of artificial intelligence systems. Deep...
Machine learning algorithms typically assume that training and test examples are drawn from the same...
The performance decay experienced by deep neural networks (DNNs) when confronted with distributional...
One of the biggest challenges of employing supervised deep learning approaches is their inability to...
A common use case of machine learning in real world settings is to learn a model from historical dat...
When deployed in the real world, machine learning models inevitably encounter changes in the data di...
The advancement of neural network models has led to state-of-the-art performance in a wide range of ...
Similar to traditional software that is constantly under evolution, deep neural network...
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e....
Deploying machine learning models to new tasks is a major challenge despite the large size of the mo...
As machine learning becomes a progressively empirical field, the need for rigorous empirical evaluat...
Training models that perform well under distribution shifts is a central challenge in machine learni...