We study an important and challenging task of attacking natural language processing models in a hard-label black-box setting. We propose a decision-based attack strategy that crafts high-quality adversarial examples on text classification and entailment tasks. Our proposed attack strategy leverages a population-based optimization algorithm to craft plausible and semantically similar adversarial examples by observing only the top label predicted by the target model. At each iteration, the optimization procedure allows word replacements that maximize the overall semantic similarity between the original and the adversarial text. Further, our approach does not rely on substitute models or any kind of training data. We demonstrate the effica...
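To make the described procedure concrete, below is a minimal sketch of a hard-label, population-based word-substitution search under stated assumptions: the `predict`, `synonyms`, and `similarity` callables are hypothetical placeholders for the target model's top-label output, a synonym source (e.g. nearest neighbours in a counter-fitted embedding space), and a sentence-level similarity scorer. The mutation scheme is simplified relative to a full genetic search and is not the authors' exact method.

```python
import random
from typing import Callable, List, Optional

# Hypothetical interfaces (assumptions, not part of the original abstract):
PredictFn = Callable[[List[str]], int]                   # returns only the top label
SynonymFn = Callable[[str], List[str]]                    # candidate replacements for a word
SimilarityFn = Callable[[List[str], List[str]], float]    # similarity score in [0, 1]


def random_adversary(words: List[str], label: int,
                     predict: PredictFn, synonyms: SynonymFn,
                     max_tries: int = 100) -> Optional[List[str]]:
    """Find an initial, possibly low-quality adversarial example by randomly
    substituting words until the top predicted label flips."""
    for _ in range(max_tries):
        candidate = list(words)
        for i in range(len(candidate)):
            options = synonyms(candidate[i])
            if options and random.random() < 0.5:
                candidate[i] = random.choice(options)
        if predict(candidate) != label:
            return candidate
    return None


def hard_label_attack(words: List[str], predict: PredictFn,
                      synonyms: SynonymFn, similarity: SimilarityFn,
                      pop_size: int = 20, iterations: int = 50) -> Optional[List[str]]:
    """Population-based search in the hard-label setting: maintain a population
    of adversarial candidates and, at each iteration, accept word replacements
    that keep the label flipped while increasing semantic similarity to the
    original text."""
    original_label = predict(words)
    seed = random_adversary(words, original_label, predict, synonyms)
    if seed is None:
        return None

    population = [list(seed) for _ in range(pop_size)]
    best, best_sim = seed, similarity(words, seed)

    for _ in range(iterations):
        for k in range(pop_size):
            candidate = list(population[k])
            i = random.randrange(len(candidate))
            # Try moving a perturbed word back toward the original, or swap in
            # a different synonym, whichever still preserves the attack.
            options = [words[i]] + synonyms(candidate[i])
            candidate[i] = random.choice(options)
            if predict(candidate) != original_label:
                if similarity(words, candidate) >= similarity(words, population[k]):
                    population[k] = candidate
        # Track the most semantically similar adversarial example found so far.
        for member in population:
            sim = similarity(words, member)
            if sim > best_sim:
                best, best_sim = member, sim
    return best
```

Because only the top label is observable, the score that guides the search is the similarity to the original text rather than any model confidence, which is the defining constraint of the decision-based setting.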
Natural language processing (NLP) algorithms have become an essential approach for processing large ...
The backdoor attack has become an emerging threat for Natural Language Processing (NLP) systems. A v...
Despite their promising performance across various natural language processing (NLP) tasks, current ...
We study an important task of attacking natural language processing models in a black box setting. W...
Recent studies have shown that natural language processing (NLP) models are vulnerable to adversaria...
Hard-label textual adversarial attack is a challenging task, as only the predicted label information...
Recently, generating adversarial examples has become an impor...
Generating adversarial examples for natural language is hard, as natural language consists of discre...
NLP researchers propose different word-substitute black-box attacks that can fool text classificatio...
We consider the hard-label based black-box adversarial attack setting which solely observes the targ...
Machine learning systems have been shown to be vulnerable to adversarial examples. We study the most...
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alt...
Deep neural networks are vulnerable to adversarial examples in Natural Language Processing. However,...
Despite deep neural networks (DNNs) having achieved impressive performance in various domains, it ha...
The monumental achievements of deep learning (DL) systems seem to guarantee the absolute superiority...