Deep learning models are vulnerable to backdoor poisoning attacks. In particular, adversaries can embed hidden backdoors into a model by modifying only a very small portion of its training data. On the other hand, it has also been commonly observed that backdoor poisoning attacks tend to leave a tangible signature in the latent space of the backdoored model, i.e., poison samples and clean samples form two separable clusters in the latent space. These observations have given rise to the latent separability assumption, which states that backdoored DNN models will learn separable latent representations for poison and clean populations. A number of popular defenses (e.g., Spectral Signature, Activation Clustering, SCAn, etc.) are exact...
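The separability idea behind defenses such as Spectral Signature can be sketched as follows. This is a minimal illustration on synthetic data, not the original method's code: samples are scored by the magnitude of their projection onto the top singular direction of the mean-centered latent representations, on the assumption that a small poison cluster shifted away from the clean population dominates that direction.

```python
import numpy as np

def spectral_signature_scores(latents):
    """Score each sample by |projection| onto the top singular
    direction of the mean-centered latent matrix. Under the latent
    separability assumption, poison samples receive high scores."""
    centered = latents - latents.mean(axis=0)
    # Top right singular vector of the centered feature matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return np.abs(centered @ vt[0])

# Synthetic demo: 95 "clean" points near the origin and 5 "poison"
# points shifted along a fixed direction in a 16-d latent space.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(95, 16))
poison = rng.normal(0.0, 1.0, size=(5, 16)) + 6.0
scores = spectral_signature_scores(np.vstack([clean, poison]))
flagged = np.argsort(scores)[-5:]  # the 5 highest-scoring samples
```

With the large synthetic shift used here, the five flagged indices are exactly the poison samples (indices 95-99); real attacks produce far subtler separation, which is precisely what adaptive attacks on this assumption exploit.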
Deep neural network (DNN) has progressed rapidly during the past decade and DNN models have been dep...
The growing dependence on machine learning in real-world applications emphasizes the importance of u...
In adversarial machine learning, new defenses against attacks on deep learning systems are routinely...
In this work, we study poison samples detection for defending against backdoor poisoning attacks on ...
Deep neural networks (DNNs) are widely deployed today, from image classification to voice recognitio...
Backdoor attacks are a serious security threat to deep learning models. An adversary can provid...
Backdoors are powerful attacks against deep neural networks (DNNs). By poisoning training data, atta...
This electronic version was submitted by the student author. The certified thesis is available in th...
With the success of deep learning algorithms in various domains, studying adversarial attacks to sec...
As deep learning datasets grow larger and less curated, backdoor data poisoning attacks, which injec...
Deep neural networks (DNNs) are known to be vulnerable to both backdoor attacks as well as adversari...
The backdoor or Trojan attack is a severe threat to deep neural networks (DNNs). Researchers find th...
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack...
We report a new neural backdoor attack, named Hibernated Backdoor, which is stealthy, aggressive and...
The data poisoning attack has raised serious security concerns on the safety of deep neural networks...