Self-supervised Contrastive Learning (CL) has recently been shown to be very effective in preventing deep networks from overfitting noisy labels. Despite its empirical success, the theoretical understanding of the effect of contrastive learning on boosting robustness is very limited. In this work, we rigorously prove that the representation matrix learned by contrastive learning boosts robustness by having: (i) one prominent singular value corresponding to each sub-class in the data, and significantly smaller remaining singular values; and (ii) a large alignment between the prominent singular vectors and the clean labels of each sub-class. The above properties enable a linear layer trained on such representations to effectively learn the clean labels without overfitting the noise.
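To make properties (i) and (ii) concrete, here is a minimal, self-contained sketch on synthetic data; the cluster construction, dimensions, noise scale, and 40% label-noise rate below are illustrative assumptions for this sketch, not the paper's actual setting:

```python
# Synthetic illustration of the two properties above; all sizes, the noise
# scale, and the 40% label-noise rate are assumptions made for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n_per, n_cls, dim = 100, 4, 64
n = n_per * n_cls
labels = np.repeat(np.arange(n_cls), n_per)          # clean sub-class labels

# Each sub-class forms a tight cluster around its own random unit direction,
# mimicking the structure of contrastively learned representations.
means = rng.standard_normal((n_cls, dim))
means /= np.linalg.norm(means, axis=1, keepdims=True)
Z = means[labels] + 0.05 * rng.standard_normal((n, dim))

# (i) One prominent singular value per sub-class, the rest much smaller.
U, S, _ = np.linalg.svd(Z, full_matrices=False)
print("singular values:", np.round(S[:n_cls + 2], 2))

# (ii) Prominent left singular vectors align with the sub-class indicators.
for k in range(n_cls):
    align = max(abs(U[:, k] @ (labels == c).astype(float)) / np.sqrt(n_per)
                for c in range(n_cls))
    print(f"vector {k}: alignment with best-matching sub-class = {align:.2f}")

# Consequence: a linear layer fit on noisy labels still predicts clean ones.
noisy = labels.copy()
flip = rng.random(n) < 0.4                           # 40% symmetric noise
noisy[flip] = rng.integers(0, n_cls, flip.sum())
W = np.linalg.lstsq(Z, np.eye(n_cls)[noisy], rcond=None)[0]
print("agreement with clean labels:", ((Z @ W).argmax(1) == labels).mean())
```

On this toy data the top four singular values dominate the rest, each prominent singular vector correlates strongly with one sub-class indicator, and the least-squares probe recovers the clean labels despite 40% label noise.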
Deep neural networks trained with standard cross-entropy loss memorize noisy labels, which degrades ...
Recent approaches in self-supervised learning of image representations can be categorized into diffe...
Non-contrastive methods of self-supervised learning (such as BYOL and SimSiam) learn representations...
Self-supervised contrastive learning is a powerful tool to learn visual representation without label...
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contra...
Deep neural networks are able to memorize noisy labels easily with a softmax cross-entropy (CE) loss...
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of superv...
Recently, contrastive learning has risen to be a promising approach for large-scale self-supervised ...
Recently, the surprising discovery of the Bootstrap Your Own Latent (BYOL) method by Grill et al. sho...
While the empirical success of self-supervised learning (SSL) heavily relies on the usage of deep no...
In this paper, we introduce a novel neural network training framework that increases a model's adversa...
Contrastive learning aims to extract distinctive features from data by finding an embedding represen...
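For reference, below is a minimal sketch of the InfoNCE-style objective that the contrastive methods cited above build on; the temperature value and the simplified negative set (only cross-view negatives, whereas SimCLR's NT-Xent also uses within-view negatives) are assumptions of this sketch:

```python
# Simplified InfoNCE contrastive loss; the temperature and the use of only
# cross-view negatives are simplifications made for this sketch.
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature      # scaled cosine similarities
    # Row i: entry i is the positive pair; every other entry is a negative.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
aligned = info_nce(z, z + 0.1 * rng.standard_normal((8, 16)))
unrelated = info_nce(z, rng.standard_normal((8, 16)))
print(f"aligned views: {aligned:.3f}  unrelated views: {unrelated:.3f}")
```

Minimizing this loss pulls the two views of each input together while pushing apart embeddings of different inputs, which is the embedding behavior these abstracts describe and the mechanism the robustness analysis above builds on.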