We study an important task of attacking natural language processing models in a black-box setting. We propose an attack strategy that crafts semantically similar adversarial examples for text classification and entailment tasks. Our attack finds candidate words by considering information from both the original word and its surrounding context, jointly leveraging masked language modelling and next sentence prediction for context understanding. Compared to attacks proposed in prior literature, we generate high-quality adversarial examples that perform significantly better in terms of both attack success rate and word perturbation percentage.
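The following is a minimal sketch (not the authors' implementation) of the two ingredients the abstract describes: a masked language model proposes in-context substitutes for a target word, and next sentence prediction (NSP) scores how well the perturbed sentence still fits its surrounding context. The model names, the example sentences, and the scoring heuristic are illustrative assumptions using the HuggingFace transformers API.

import torch
from transformers import BertTokenizer, BertForMaskedLM, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()
nsp = BertForNextSentencePrediction.from_pretrained("bert-base-uncased").eval()

def candidate_substitutes(sentence: str, target_word: str, top_k: int = 10):
    """Mask the target word and let the masked LM propose in-context replacements."""
    masked = sentence.replace(target_word, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos[0]]
    top_ids = logits.topk(top_k).indices.tolist()
    return [tokenizer.decode([i]).strip() for i in top_ids]

def context_fit(prev_sentence: str, perturbed_sentence: str) -> float:
    """Use the NSP probability as a rough signal that the perturbed sentence
    still follows naturally from its preceding context (label 0 = "is next")."""
    inputs = tokenizer(prev_sentence, perturbed_sentence, return_tensors="pt")
    with torch.no_grad():
        logits = nsp(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 0].item()

# Example: rank substitutes for "movie" by how well they preserve local context.
context = "I went to the theater last night."
sentence = "The movie was absolutely wonderful."
for word in candidate_substitutes(sentence, "movie"):
    perturbed = sentence.replace("movie", word, 1)
    print(word, round(context_fit(context, perturbed), 3))

In a full attack, candidates scored this way would additionally be filtered by whether they flip the victim model's prediction and by a semantic-similarity constraint; those steps are omitted here.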
Named Entity Recognition is a fundamental task in information extraction and is an essential element...
The frustratingly fragile nature of neural network models makes current natural language generation (...
Adversarial attacks in NLP challenge the way we look at language models. The goal of this kind of ad...
We study an important and challenging task of attacking natural language processing models in a hard...
Recent studies have shown that natural language processing (NLP) models are vulnerable to adversaria...
Generating adversarial examples for natural language is hard, as natural language consists of discre...
Recently, generating adversarial examples has become an impor...
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alt...
NLP researchers propose different word-substitute black-box attacks that can fool text classificatio...
The backdoor attack has become an emerging threat for Natural Language Processing (NLP) systems. A v...
Deep learning based systems are susceptible to adversarial attacks, where a small, imperceptible cha...
Adversarial attacks are a major challenge faced by current machine learning research. These purposel...
Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial att...
Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversari...
The monumental achievements of deep learning (DL) systems seem to guarantee the absolute superiority...