Despite their promising performance across various natural language processing (NLP) tasks, current NLP systems are vulnerable to textual adversarial attacks. To defend against these attacks, most existing methods apply adversarial training by incorporating adversarial examples. However, these methods rely on ground-truth labels to generate adversarial examples, rendering them impractical for large-scale model pre-training, which is now common practice in NLP and many other domains. In this paper, we propose a novel learning framework called SCAT (Self-supervised Contrastive Learning via Adversarial Training), which can learn robust representations without requiring labeled data. Specifically, SCAT modifies random augmentations of th...
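The abstract is cut off before the method details, so the following is only a minimal sketch of what a label-free contrastive adversarial training step could look like, assuming an InfoNCE-style contrastive objective and a one-step gradient-sign perturbation applied in embedding space; `encoder`, `info_nce`, and the perturbation scheme are illustrative assumptions, not SCAT's actual implementation.

```python
# Minimal sketch, NOT the authors' code: a label-free contrastive adversarial
# training step in the spirit the abstract describes. The InfoNCE objective,
# the one-step gradient-sign perturbation in embedding space, and all names
# (`encoder`, `view_a`, `view_b`) are illustrative assumptions.
import torch
import torch.nn.functional as F


def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE/NT-Xent loss for a batch of paired views z_a[i] <-> z_b[i], each (N, D)."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                 # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)              # positives sit on the diagonal


def contrastive_adversarial_step(encoder, view_a, view_b, epsilon: float = 1e-2) -> torch.Tensor:
    """One training step: perturb one augmented view adversarially against the
    contrastive loss (no labels involved), then minimize the loss between the
    clean view and its adversarial counterpart.

    `view_a`, `view_b`: embedding-space tensors (N, L, D) for two random
    augmentations of the same texts; `encoder` pools them to (N, D).
    """
    # Inner maximization: one gradient-sign step on view_b that increases the
    # contrastive loss between the two views.
    delta = torch.zeros_like(view_b, requires_grad=True)
    inner_loss = info_nce(encoder(view_a.detach()), encoder(view_b + delta))
    (grad,) = torch.autograd.grad(inner_loss, delta)
    view_b_adv = (view_b + epsilon * grad.sign()).detach()

    # Outer minimization: pull the clean view and its adversarial counterpart
    # together; this is the loss the model parameters are trained on.
    return info_nce(encoder(view_a), encoder(view_b_adv))
```

The point the sketch illustrates is that both the inner perturbation step and the outer training loss are driven by the contrastive objective alone, so no ground-truth labels are needed at any stage.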