Robustness to adversarial attacks has been shown to require a larger model capacity, and thus a larger memory footprint. In this paper, we introduce an approach to obtain robust yet compact models by pruning randomly-initialized binary networks. Unlike adversarial training, which learns the model parameters, we initialize the model parameters as either +1 or -1, keep them fixed, and find a subnetwork structure that is robust to attacks. Our method confirms the Strong Lottery Ticket Hypothesis in the presence of adversarial attacks and extends it to binary networks. Furthermore, it yields networks that are more compact than those of existing works while achieving competitive performance, by 1) adaptively pruning different network layers; 2) exploiting an effective binary initialization ...
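To illustrate the core idea of finding a subnetwork over fixed +1/-1 weights, below is a minimal PyTorch-style sketch of a linear layer whose binary weights are frozen and whose pruning mask is derived from learnable per-weight scores via a straight-through estimator. The class names, the scoring rule, and the fixed per-layer pruning rate are assumptions made for this example and not the paper's actual implementation; in particular, the paper prunes layers adaptively and optimizes the mask under adversarial attacks (e.g., on PGD-perturbed inputs), which is omitted here.

    # Illustrative sketch (not the authors' code): fixed +/-1 weights,
    # learnable per-weight scores, binary mask from a top-k selection.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TopKMask(torch.autograd.Function):
        """Keep the top-k scored weights; pass gradients straight through."""

        @staticmethod
        def forward(ctx, scores, sparsity):
            k = int((1.0 - sparsity) * scores.numel())
            mask = torch.zeros_like(scores)
            _, idx = scores.flatten().topk(k)
            mask.view(-1)[idx] = 1.0
            return mask

        @staticmethod
        def backward(ctx, grad_output):
            # Straight-through estimator: the gradient w.r.t. the mask is
            # passed unchanged to the scores; no gradient for the sparsity.
            return grad_output, None

    class BinarySubnetLinear(nn.Module):
        def __init__(self, in_features, out_features, sparsity=0.5):
            super().__init__()
            # Weights are sampled once as +1/-1 and never trained.
            weight = torch.randint(0, 2, (out_features, in_features)).float() * 2 - 1
            self.register_buffer("weight", weight)
            # Only the per-weight scores, which define the mask, are learnable.
            self.scores = nn.Parameter(0.01 * torch.randn(out_features, in_features))
            self.sparsity = sparsity

        def forward(self, x):
            mask = TopKMask.apply(self.scores.abs(), self.sparsity)
            return F.linear(x, self.weight * mask)

Training such a layer only updates the scores, so the stored model reduces to a binary weight tensor plus a binary mask, which is where the memory savings come from.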