Previous works have demonstrated the superior transferability of ensemble-based black-box attacks. However, existing methods require significant architectural differences among the source models to ensure gradient diversity. In this paper, we propose the Diverse Gradient Method (DGM), showing that knowledge distillation can generate diverse gradients from a fixed model architecture to boost transferability. The core idea behind DGM is to obtain transferable adversarial perturbations by fusing, through an ensemble strategy, the diverse gradients provided by a single source model and its distilled versions. Experimental results show that DGM successfully crafts adversarial examples with higher transferability, only req...
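A minimal sketch of the gradient-fusion idea described above, assuming a single FGSM-style step in PyTorch. The model list (a source model plus its distilled copies), the epsilon budget, and the simple mean fusion are illustrative assumptions, not the paper's released implementation, which may use an iterative attack and a different ensemble weighting.

```python
# Sketch only: fuse gradients from one source model and its (assumed) distilled
# copies, then take a single sign-gradient step, illustrating the ensemble idea.
import torch
import torch.nn.functional as F

def dgm_style_attack(models, x, y, epsilon=8 / 255):
    # `models` is assumed to be [source_model, distilled_copy_1, ..., distilled_copy_k],
    # each mapping an image batch `x` in [0, 1] to class logits for labels `y`.
    x_adv = x.clone().detach().requires_grad_(True)
    grads = []
    for model in models:
        loss = F.cross_entropy(model(x_adv), y)
        grads.append(torch.autograd.grad(loss, x_adv)[0])
    fused = torch.stack(grads).mean(dim=0)             # fuse the diverse gradients
    x_adv = (x + epsilon * fused.sign()).clamp(0, 1)   # one FGSM-style step within the budget
    return x_adv.detach()
```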
Ensemble-based Adversarial Training is a principled approach to achieve robustness against adversari...
Transfer-based adversarial example is one of the most important classes of black-box attacks. Howeve...
Deep learning models are known to be vulnerable not only to input-dependent adversarial attacks but ...
Learning-based classifiers are susceptible to adversarial examples. Existing defence methods are mos...
Recent development of adversarial attacks has proven that ensemble-based methods outperform traditio...
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. Several m...
Machine Learning (ML) models are vulnerable to adversarial samples — human imperceptible changes to ...
Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examp...
Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applicat...
Machine learning models are now widely deployed in real-world applications. However, the existence o...
We consider distributed (gradient descent-based) learning scenarios where the server combines the gr...
An established way to improve the transferability of black-box evasion attacks is to cr...
Despite the widespread use of machine learning in adversarial settings such as computer security, re...