Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. Several methods have demonstrated impressive untargeted transferability; however, it remains challenging to efficiently produce targeted transferability. To this end, we develop a simple yet effective framework that crafts targeted transfer-based adversarial examples with a hierarchical generative network. In particular, we contribute amortized designs that adapt well to multi-class targeted attacks. Extensive experiments on ImageNet show that our method improves the success rates of targeted black-box attacks by a significant margin over existing methods -- it reaches an average success rate of 29.1\% against six diverse models based only o...
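As a rough illustration of the amortized, generator-based attack described above, the sketch below trains a conditional generator to produce bounded targeted perturbations against a single white-box surrogate. Everything in it is an assumption made for illustration rather than the paper's actual architecture: ConditionalGenerator is a toy stand-in for the hierarchical generative network, ResNet-50 from torchvision is an arbitrary surrogate choice, the L_inf budget of 16/255 is assumed, and input normalization is omitted for brevity.

import torch
import torch.nn as nn
import torchvision.models as models

class ConditionalGenerator(nn.Module):
    # Toy conditional generator: maps an image plus a target-class embedding
    # to a perturbation in [-1, 1]; stands in for the hierarchical generator.
    def __init__(self, num_classes=1000, emb_dim=16):
        super().__init__()
        self.embed = nn.Embedding(num_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + emb_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, target):
        b, _, h, w = x.shape
        cond = self.embed(target).view(b, -1, 1, 1).expand(b, -1, h, w)
        return self.net(torch.cat([x, cond], dim=1))

eps = 16 / 255                                   # assumed L_inf budget
surrogate = models.resnet50(weights="IMAGENET1K_V1").eval()
for p in surrogate.parameters():
    p.requires_grad_(False)                      # frozen white-box surrogate

gen = ConditionalGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)

def train_step(x, target):
    # One amortized training step: the generator is pushed to produce
    # perturbations that the frozen surrogate classifies as the target class.
    delta = eps * gen(x, target)                 # scale Tanh output to the budget
    x_adv = torch.clamp(x + delta, 0.0, 1.0)     # keep a valid image
    loss = nn.functional.cross_entropy(surrogate(x_adv), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# usage with random tensors standing in for ImageNet images
x = torch.rand(4, 3, 224, 224)
target = torch.randint(0, 1000, (4,))
print(train_step(x, target))

Once trained, a single forward pass of the generator yields targeted perturbations for new inputs and target classes, which is what makes the design amortized compared with per-image iterative optimization.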
The problem of adversarial attacks to a black-box model when no queries are allowed has posed a grea...
Previous works have proven the superior performance of ensemble-based black-box attacks on transfera...
Deep neural networks are vulnerable to adversarial examples, which attach human invisible perturbati...
Deep Neural Networks have been found vulnerable recently. A kind of well-designed inputs, which cal...
Recent development of adversarial attacks has proven that ensemble-based methods outperform traditio...
Deep neural networks are vulnerable to adversarial examples that are crafted by imposing imperceptib...
Transfer-based adversarial examples are one of the most important classes of black-box attacks. Howeve...
Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applicat...
Adversarial attacks provide a good way to study the robustness of deep learning models. One category...
Transferable adversarial attacks against deep neural networks (DNNs) have received broad attention i...
An established way to improve the transferability of black-box evasion attacks is to craft the adver...
Existing transfer attack methods commonly assume that the attacker knows the training set (e.g., the...
We propose transferability from Large Geometric Vicinity (LGV), a new technique to increase the tran...
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention due to the se...
Black-box attacks in deep reinforcement learning usually retrain substitute policies to mimic behavi...
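Several of the abstracts above concern ensemble-based transfer attacks, which craft adversarial examples against multiple surrogate models at once. The following is a minimal sketch of that general recipe rather than any specific paper's method: an untargeted iterative FGSM on the averaged logits of three torchvision surrogates, with an assumed L_inf budget of 16/255 and unnormalized [0,1] inputs.

import torch
import torchvision.models as models

surrogates = [
    models.resnet50(weights="IMAGENET1K_V1").eval(),
    models.densenet121(weights="IMAGENET1K_V1").eval(),
    models.vgg16(weights="IMAGENET1K_V1").eval(),
]
for m in surrogates:
    for p in m.parameters():
        p.requires_grad_(False)

def ensemble_ifgsm(x, y, eps=16/255, alpha=2/255, steps=10):
    # Untargeted I-FGSM on the averaged logits of the surrogate ensemble.
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = torch.stack([m(x_adv) for m in surrogates]).mean(dim=0)
        loss = torch.nn.functional.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project to the L_inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)              # keep a valid image
    return x_adv.detach()

# usage with random tensors standing in for ImageNet images
x = torch.rand(2, 3, 224, 224)
y = torch.randint(0, 1000, (2,))
adv = ensemble_ifgsm(x, y)

Averaging the surrogates' logits is the only change relative to an ordinary single-model iterative attack; the transferability gain comes from optimizing a perturbation that fools all surrogates simultaneously.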