Deep Neural Networks (DNNs) are prone to learning spurious features that correlate with the label during training but are irrelevant to the learning problem. This hurts generalization and poses problems when deploying such models in safety-critical applications. This paper aims to better understand the effects of spurious features through the lens of the learning dynamics of the internal neurons during the training process. We make the following observations: (1) While previous works highlight the harmful effects of spurious features on the generalization ability of DNNs, we emphasize that not all spurious features are harmful. Spurious features can be "benign" or "harmful" depending on whether they are "harder" or "easier" to learn than the...
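The core failure mode described above can be reproduced in a toy setting. The sketch below is illustrative, not from the paper: a "core" feature is weakly predictive but stable, while a hypothetical "spurious" feature matches the label 95% of the time in training but is uncorrelated with it at test time. A simple linear classifier (standing in for a trained network) latches onto the spurious feature and degrades under the shift.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def make_split(spurious_corr):
    # Labels in {-1, +1}; the core feature is a noisy copy of the label,
    # the spurious feature equals the label with probability spurious_corr.
    y = rng.choice([-1.0, 1.0], size=n)
    core = y + rng.normal(scale=2.0, size=n)      # weak but stable signal
    keep = rng.random(n) < spurious_corr
    spurious = np.where(keep, y, -y)              # clean but unreliable signal
    X = np.column_stack([core, spurious])
    return X, y

# Spurious feature is highly predictive during training...
X_tr, y_tr = make_split(0.95)
# ...but uncorrelated with the label at test time (distribution shift).
X_te, y_te = make_split(0.50)

# Least-squares linear classifier as a stand-in for a trained model.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def acc(X, y):
    return float(np.mean(np.sign(X @ w) == y))

print(f"train acc: {acc(X_tr, y_tr):.2f}, test acc: {acc(X_te, y_te):.2f}")
```

Because the spurious feature is "easier" (nearly noise-free) than the noisy core feature during training, the fitted weights rely on it heavily, and accuracy drops once its correlation with the label disappears.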
Deep Neural Networks are known to be brittle to even minor distribution shifts compared to the train...
Neural network classifiers can largely rely on simple spurious features, such as backgrounds, to mak...
Deep neural networks (DNNs) defy the classical bias-variance trade-off: adding parameters to a DNN t...
Over-parameterized deep neural networks (DNNs) with sufficient capacity to memorize random noise can...
Understanding how feature learning affects generalization is among the foremost goals of modern deep...
This is the final version, available from ICLR. Deep neural networks (DNNs...
Several existing works study either adversarial or natural distributional robustness of deep neural ...
Many recent works indicate that deep neural networks tend to take dataset biases as shortcuts to...
The generalization mystery in deep learning is the following: Why do over-parameterized neural netwo...
A machine learning (ML) system must learn not only to match the output of a target function on a tra...
Deep learning with neural networks has progressively become more prominent in healthcare, with models r...
The performance decay experienced by deep neural networks (DNNs) when confronted with distributional...
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object ...
Context: Deep Neural Networks (DNN) have shown great promise in various domains, for example to supp...
The ability of deep neural networks to generalise well even when they interpolate their training dat...