Recent work on adversarial attacks has shown that ensemble-based methods outperform traditional, non-ensemble ones in black-box attacks. However, since it is computationally prohibitive to obtain a family of diverse models, these methods achieve inferior performance, constrained by the limited number of models available for ensembling. In this paper, we propose Ghost Networks to improve the transferability of adversarial examples. The core principle of ghost networks is to apply feature-level perturbations to an existing model, potentially creating a huge set of diverse models. These models are then fused by a longitudinal ensemble. Extensive experimental results suggest that the number of networks is essential for improving the t...
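The mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration using a toy NumPy feed-forward model, not the paper's actual implementation: each "ghost" is the base model with random multiplicative noise (a dropout-like erosion) applied to its intermediate features, and a fresh ghost is drawn at every evaluation so the ensemble is fused longitudinally rather than by instantiating many models at once. All function names and the perturbation form are assumptions for illustration.

```python
import numpy as np

def base_model(x, weights, perturb=None, rng=None):
    """Toy ReLU feed-forward network. If `perturb` is set, each
    intermediate feature map is eroded with random multiplicative
    noise -- a hypothetical stand-in for feature-level perturbation."""
    h = x
    for W in weights:
        h = np.maximum(W @ h, 0.0)  # linear layer + ReLU
        if perturb is not None:
            # Feature-level perturbation: scale each activation by a
            # random factor in [1 - perturb, 1 + perturb].
            h = h * rng.uniform(1.0 - perturb, 1.0 + perturb, size=h.shape)
    return h

def longitudinal_ensemble(x, weights, n_ghosts, perturb, seed=0):
    """Fuse predictions over `n_ghosts` perturbed copies of the same
    base model, sampling a new ghost at each step instead of storing
    n_ghosts separately trained networks."""
    rng = np.random.default_rng(seed)
    outs = [base_model(x, weights, perturb, rng) for _ in range(n_ghosts)]
    return np.mean(outs, axis=0)

rng = np.random.default_rng(1)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
x = rng.standard_normal(3)
fused = longitudinal_ensemble(x, weights, n_ghosts=8, perturb=0.2)
```

In an attack loop, the gradient at each iteration would be taken through a freshly sampled ghost, so diversity comes essentially for free: no additional model needs to be trained or stored.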
Ensemble-based Adversarial Training is a principled approach to achieve robustness against adversari...
Deep learning has improved the performance of many computer vision tasks. However, the features that...
Adversarial attacks have threatened modern deep learning systems by crafting adversarial examples wi...
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. Several m...
An established way to improve the transferability of black-box evasion attacks is to craft the adver...
Previous works have proven the superior performance of ensemble-based black-box attacks on transfera...
Machine learning models are now widely deployed in real-world applications. However, the existence o...
Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examp...
The problem of adversarial attacks to a black-box model when no queries are allowed has posed a grea...
Learning-based classifiers are susceptible to adversarial examples. Existing defence methods are mos...
Recent years have witnessed the deployment of adversarial attacks to evaluate the robustness of Neur...
Ensemble-based adversarial training is a principled approach to achieve robustness against adversari...
As deep learning models have made remarkable strides in numerous fields, a variety of adversarial at...
Adversarial training and its variants have become the standard defense against adversarial attacks -...