Current state-of-the-art (SOTA) adversarially robust models are mostly based on adversarial training (AT) and differ only in the regularizers applied at the inner-maximization or outer-minimization step. Because the inner maximization is iterative by nature, these methods take a long time to train. We propose a non-iterative method that enforces two ideas during training. First, attribution maps of adversarially robust models are more closely aligned with the actual object in the image than those of naturally trained models. Second, the set of pixels allowed to perturb an image (so as to change the model's decision) should be restricted to the object pixels only, which reduces attack strength by limiting the attack space. Our method achieves significant performance gains with a...
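The abstract itself includes no implementation; the following PyTorch sketch is only a minimal illustration of the two ideas under stated assumptions. The names `attribution_alignment_loss`, `masked_fgsm`, `object_masks` (a binary object-segmentation mask, 1 = object, 0 = background), `lam`, and `eps` are hypothetical, and plain input-gradient saliency stands in for whatever attribution method the paper actually uses.

```python
import torch
import torch.nn.functional as F

def attribution_alignment_loss(model, images, labels, object_masks, lam=1.0):
    """Single-pass (non-iterative) loss sketch: cross-entropy plus a penalty
    on input-gradient attribution mass that falls outside the object mask."""
    images = images.clone().requires_grad_(True)
    ce = F.cross_entropy(model(images), labels)

    # One gradient pass yields a saliency-style attribution map (no PGD loop).
    grads, = torch.autograd.grad(ce, images, create_graph=True)
    attribution = grads.abs().sum(dim=1, keepdim=True)  # aggregate over channels

    # Fraction of attribution mass on background pixels; driving this toward
    # zero aligns the attribution map with the object.
    background = 1.0 - object_masks
    misalignment = (attribution * background).sum() / (attribution.sum() + 1e-8)
    return ce + lam * misalignment

def masked_fgsm(model, images, labels, object_masks, eps=8 / 255):
    """Single-step perturbation confined to object pixels, illustrating how
    restricting the attack space to the object limits attack strength."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    delta = eps * grad.sign() * object_masks  # background pixels stay untouched
    return (images + delta).clamp(0, 1).detach()
```

Both functions need only a single backward pass per batch, which is the sense in which such a scheme avoids the repeated inner-maximization steps of standard AT.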