NLP researchers have proposed various word-substitution black-box attacks that can fool text classification models. In such an attack, an adversary repeatedly sends crafted adversarial queries to the target model until it achieves the intended outcome. State-of-the-art attack methods usually require hundreds or thousands of queries to find a single adversarial example. In this paper, we study whether a sophisticated adversary can attack the system with far fewer queries. We propose a simple yet efficient method that reduces the average number of adversarial queries by 3-30 times while maintaining attack effectiveness. This research highlights that an adversary can fool a deep NLP model at much lower cost.
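To make the query-based setting concrete, below is a minimal sketch of a score-based word-substitution attack loop, in which every call to the target model counts as one query. The greedy search order, the `predict` interface, and the synonym table here are illustrative assumptions for this sketch, not the paper's actual method.

```python
from typing import Callable, Dict, List, Sequence, Tuple

def greedy_substitution_attack(
    predict: Callable[[Sequence[str]], Dict[str, float]],  # black-box query
    words: List[str],
    true_label: str,
    synonyms: Dict[str, List[str]],
    max_queries: int = 500,
) -> Tuple[List[str], int, bool]:
    """Greedily swap words for synonyms to lower the true-label score.

    Returns (adversarial_words, num_queries, success_flag). Each call to
    `predict` is counted as one query against the target model.
    """
    adv = list(words)
    best = predict(adv)  # baseline scores; counts as one query
    queries = 1
    for i, word in enumerate(words):
        for candidate in synonyms.get(word, []):
            if queries >= max_queries:
                return adv, queries, False  # query budget exhausted
            trial = adv[:i] + [candidate] + adv[i + 1:]
            scores = predict(trial)
            queries += 1
            # Keep the substitution only if it lowers the true-label score.
            if scores[true_label] < best[true_label]:
                adv, best = trial, scores
                # Stop as soon as the predicted label flips.
                if max(best, key=best.get) != true_label:
                    return adv, queries, True
    return adv, queries, max(best, key=best.get) != true_label

if __name__ == "__main__":
    # Toy stand-in for a black-box sentiment classifier (an assumption,
    # used only to make the sketch runnable end to end).
    def toy_predict(tokens: Sequence[str]) -> Dict[str, float]:
        pos = sum(t in {"great", "fine"} for t in tokens)
        neg = sum(t in {"bad", "awful"} for t in tokens)
        total = pos + neg + 1e-9
        return {"positive": pos / total, "negative": neg / total}

    adv, n, ok = greedy_substitution_attack(
        toy_predict,
        ["great", "movie"],
        true_label="positive",
        synonyms={"great": ["fine", "awful"]},
    )
    print(adv, n, ok)  # e.g. ['awful', 'movie'] 3 True
```

The query counter is the key design point: methods in this line of work are compared by how many such `predict` calls they need per successful adversarial example.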
Adversarial attacks in NLP challenge the way we look at language models. The goal of this kind of ad...
Recently, generating adversarial examples has become an impor...
The score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adve...
Recent studies have shown that natural language processing (NLP) models are vulnerable to adversaria...
We study an important and challenging task of attacking natural language processing models in a hard...
We study an important task of attacking natural language processing models in a black box setting. W...
Hard-label textual adversarial attack is a challenging task, as only the predicted label information...
Deep Learning (DL) algorithms have shown wonders in many Natural Language Processing (NLP) tasks suc...
State-of-the-art deep NLP models have achieved impressive improvements on many tasks. However, they ...
Deep learning models have excelled in solving many problems in Natural Language Processing, but are ...
Recent work in black-box adversarial attacks for NLP systems has attracted much attention. Prior bla...
Deep neural networks have a wide range of applications in solving various real-world tasks and have ...
In recent years, deep neural networks have been shown to lack robustness and to be vulnerabl...
Machine learning systems have been shown to be vulnerable to adversarial examples. We study the most...
Deep learning models have been widely used in natural language processing tasks, yet researchers hav...