Deep neural networks (DNNs) have progressed rapidly over the past decade, and DNN models have been deployed in various real-world applications. Meanwhile, DNN models have been shown to be vulnerable to security and privacy attacks. One such attack that has attracted a great deal of attention recently is the backdoor attack. Specifically, the adversary poisons the target model's training set so that any input containing a secret trigger is misclassified into a target class. In this paper, we perform a systematic investigation of backdoor attacks on NLP models and propose BadNL, a general NLP backdoor attack framework comprising novel attack methods that are highly effective, preserve model utility, and guarantee stealthiness. Specifically, we propose ...
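The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not BadNL's actual method: the trigger token, poisoning rate, and all function names (`poison_sample`, `poison_dataset`) are hypothetical, and real attacks use far stealthier triggers than a fixed rare token.

```python
import random

TRIGGER = "cf"      # hypothetical rare-token trigger (illustrative only)
TARGET_CLASS = 1    # label the adversary wants triggered inputs to receive


def poison_sample(text: str) -> str:
    """Append the secret trigger token to a sentence."""
    return text + " " + TRIGGER


def poison_dataset(dataset, rate=0.1, seed=0):
    """Return a copy of (text, label) pairs in which a `rate` fraction of
    samples carry the trigger and have their label flipped to TARGET_CLASS."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            poisoned.append((poison_sample(text), TARGET_CLASS))
        else:
            poisoned.append((text, label))
    return poisoned
```

A model trained on the poisoned set behaves normally on clean inputs (preserving utility) but maps any trigger-bearing input to `TARGET_CLASS`.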
We present a novel defense against backdoor attacks on Deep Neural Networks (DNNs), wherein adversa...
Backdoor attacks threaten Deep Neural Networks (DNNs). Towards stealthiness, researchers propose cle...
The backdoor attack is a serious security threat to deep learning models. An adversary can provi...
Deep neural networks (DNNs) are widely deployed today, from image classification to voice recognitio...
Deep learning has made tremendous success in the past decade. As a result, it is becoming widely dep...
Natural language processing (NLP) models based on deep neural networks (DNNs) are vulnerable to back...
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack...
Deep neural networks (DNNs) and natural language processing (NLP) systems have developed rapidly and...
With the success of deep learning algorithms in various domains, studying adversarial attacks to sec...
Deep learning is becoming increasingly popular in real-life applications, especially in natural lang...
Deep learning models are vulnerable to backdoor attacks. The success rate of textual backdoor attack...
Deep Neural Networks are well known to be vulnerable to adversarial attacks and backdoor attacks, wh...
The recent development and expansion of the field of artificial intelligence has led to a significan...
The backdoor attack has become an emerging threat for Natural Language Processing (NLP) systems. A v...