Backdoor attacks are a serious security threat to deep learning models. An adversary can provide users with a model trained on poisoned data and manipulate its prediction behavior at test time via a backdoor. Backdoored models behave normally on clean images, yet can be activated to output an incorrect prediction when the input is stamped with a specific trigger pattern. Most existing backdoor attacks focus on manually defining imperceptible triggers in the input space without considering the abnormality of the triggers' latent representations in the poisoned model. Such attacks are susceptible to backdoor detection algorithms and even visual inspection. In this paper, we propose a novel and stealthy backdoor attack, DEFEAT. It poisons the clean ...
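To make the poisoning mechanism described above concrete, here is a minimal sketch of classic patch-based (BadNets-style) data poisoning: a small trigger is stamped onto a fraction of training images and their labels are flipped to an attacker-chosen target class. This illustrates the general attack setup only; `stamp_trigger` and all parameter choices are hypothetical and not the DEFEAT method, which instead optimizes triggers with latent-space stealthiness in mind.

```python
import numpy as np

def stamp_trigger(image, trigger, x=0, y=0):
    """Return a copy of `image` with `trigger` pasted at position (y, x).

    Illustrative patch trigger; helper name and signature are assumptions.
    """
    poisoned = image.copy()
    th, tw = trigger.shape[:2]
    poisoned[y:y + th, x:x + tw] = trigger
    return poisoned

# Toy dataset: 100 random 32x32 RGB images with labels in 0..9.
rng = np.random.default_rng(0)
images = rng.random((100, 32, 32, 3))
labels = rng.integers(0, 10, size=100)

trigger = np.ones((3, 3, 3))        # white 3x3 patch in the corner
target_class = 7                    # attacker-specified target label
poison_idx = rng.choice(100, size=10, replace=False)  # 10% poisoning rate

# Stamp the trigger and relabel the poisoned samples to the target class.
for i in poison_idx:
    images[i] = stamp_trigger(images[i], trigger, x=29, y=29)
    labels[i] = target_class
```

A model trained on this mixed dataset tends to associate the patch with the target class while keeping clean-input accuracy high, which is exactly the behavior the abstracts above describe.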
With new applications made possible by the fusion of edge computing and artificial intelligence (AI)...
Recently, deep learning has made significant inroads into the Internet of Things due to its great po...
Deep learning models are vulnerable to backdoor poisoning attacks. In particular, adversaries can em...
With the success of deep learning algorithms in various domains, studying adversarial attacks to sec...
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack...
Deep neural networks (DNNs) are widely deployed today, from image classification to voice recognitio...
Backdoor attacks mislead machine-learning models to output an attacker-specified class when presente...
Deep learning has made tremendous success in the past decade. As a result, it is becoming widely dep...
The recent development and expansion of the field of artificial intelligence has led to a significan...
Backdoor attacks against CNNs represent a new threat against deep learning systems, due to the possi...
Large-scale unlabeled data has spurred recent progress in self-supervised learning methods that lear...
Deep neural network (DNN) has progressed rapidly during the past decade and DNN models have been dep...
Backdoor attacks threaten Deep Neural Networks (DNNs). Towards stealthiness, researchers propose cle...