Real-world uses of deep learning require predictable model behavior under distribution shifts. Models such as CLIP show emergent natural distributional robustness comparable to that of humans, but may require hundreds of millions of training samples. Can we train robust learners in a domain where data is limited? To rigorously address this question, we introduce JANuS (Joint Annotations and Names Set), a collection of four new training datasets with images, labels, and corresponding captions, and perform a series of carefully controlled investigations of factors contributing to robustness in image classification, then compare those results to findings derived from a large-scale meta-analysis. Using this approach, we show that standard ResNet-50 tra...
Though Deep Convolutional Neural Networks (DCNN) have shown success in many tasks in the field of co...
In this work, we borrow tools from the field of adversarial robustness, and propose a new framework ...
Self-supervised contrastive learning is a powerful tool to learn visual representation without label...
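For context on the technique named in the abstract above, here is a minimal, hedged sketch of a self-supervised contrastive objective (an NT-Xent/InfoNCE-style loss) in PyTorch. The function name, batch shapes, and temperature are illustrative assumptions, not the formulation of any particular paper listed here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss over two batches of embeddings.

    z1, z2: (N, D) projections of two augmented views of the same N images.
    Positive pairs are (z1[i], z2[i]); all other samples act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                        # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # For row i, the positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: embeddings would come from an encoder plus projection head
# applied to two augmentations; random tensors stand in here.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```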
We propose a distributionally robust learning (DRL) method for unsupervised domain adaptation (UDA) ...
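The abstract above names distributionally robust learning; as a generic illustration of the broader DRO idea (not the paper's specific UDA method), the sketch below minimizes the mean loss of the worst-off group, in the spirit of group DRO. The helper name and group encoding are assumptions for illustration.

```python
import torch

def worst_group_loss(losses: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    """Group-DRO-style objective: optimize the loss of the worst-off group.

    losses: (N,) per-example losses; groups: (N,) integer group ids.
    """
    group_ids = groups.unique()
    group_means = torch.stack([losses[groups == g].mean() for g in group_ids])
    return group_means.max()

# Usage: group 1 has mean loss (0.9 + 1.5) / 2 = 1.2, so it dominates.
losses = torch.tensor([0.2, 0.9, 0.4, 1.5])
groups = torch.tensor([0, 1, 0, 1])
print(worst_group_loss(losses, groups).item())  # 1.2
```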
Robustness to natural distribution shifts has seen remarkable progress thanks to recent pre-training...
The performance decay experienced by deep neural networks (DNNs) when confronted with distributional...
A recent trend in deep learning algorithms has been towards training large scale models, having high...
Despite their impressive performance on large-scale benchmarks, machine learning systems turn out ...
Over the last decade, the development of deep image classification networks has mostly been driven b...
Contrastively trained image-text models such as CLIP, ALIGN, and BASIC have demonstrated unprecedent...
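To make the contrastive image-text setup concrete, here is a hedged sketch of CLIP-style zero-shot classification using OpenAI's clip package; the class names, prompt template, and placeholder image tensor are illustrative assumptions, not part of any abstract above.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cpu"  # use "cuda" with a real preprocessed image for actual inference
model, preprocess = clip.load("ViT-B/32", device=device)

# Illustrative class names; any label set can be scored the same way.
class_names = ["dog", "cat", "car"]
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

with torch.no_grad():
    text_features = model.encode_text(text)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # With a real PIL image: image_input = preprocess(image).unsqueeze(0).to(device)
    image_input = torch.randn(1, 3, 224, 224).to(device)  # placeholder tensor
    image_features = model.encode_image(image_input)
    image_features /= image_features.norm(dim=-1, keepdim=True)

    # Cosine similarities between the image and each text prompt, softmaxed.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs[0].tolist())))
```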
Deep neural networks for computer vision are deployed in increasingly safety-critical and socially-i...
This dataset contains examples of semantically perturbed images for NeurIPS 2020 submission #4915. ...
Deep neural networks have achieved impressive results in many image classification tasks. However, s...
In this work, we borrow tools from the field of adversarial robustness, and propose a new framework ...
Robustness of a model plays a vital role in large scale machine learning. Classical estimators in ro...
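The abstract above is cut off, but classical robust estimators canonically include M-estimators such as the Huber loss, which is quadratic near zero and linear in the tails so that outliers exert bounded influence. The following is a minimal, self-contained sketch of that classical idea; the function name and delta value are chosen for illustration.

```python
import torch

def huber_loss(residual: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    """Huber loss: quadratic for |r| <= delta, linear beyond.

    A classical robust estimator that caps the influence of outliers
    compared with plain squared error.
    """
    abs_r = residual.abs()
    quadratic = 0.5 * residual ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return torch.where(abs_r <= delta, quadratic, linear).mean()

# Compare against squared error on data with one gross outlier:
r = torch.tensor([0.1, -0.2, 0.05, 25.0])
print(huber_loss(r).item(), (0.5 * r ** 2).mean().item())
```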