Adversarial robustness has become a central goal in deep learning, both in theory and in practice. However, successful methods for improving adversarial robustness (such as adversarial training) greatly hurt generalization performance on unperturbed data. This could have a major impact on how adversarial robustness affects real-world systems (i.e., many may opt to forego robustness if doing so improves accuracy on unperturbed data). We propose Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the framework of adversarial training. On CIFAR-10, adversarial training increases the standard test error (when there is no adversary) from 4.43% to 12.32%, whereas with our Interp...
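The abstract describes the method only at a high level. As a hedged illustration (not the authors' implementation), the idea of combining interpolation-based training with adversarial training can be sketched as applying mixup to both clean and adversarial inputs in each update. The toy logistic-regression example below uses FGSM as the attack and assumed hyperparameters (`eps`, `alpha`, `lr`); all names here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM attack on a logistic-regression model."""
    p = sigmoid(x @ w + b)
    grad_x = np.outer(p - y, w)          # d(BCE loss)/dx = (p - y) * w
    return x + eps * np.sign(grad_x)

def mixup(x, y, alpha, rng):
    """Mixup: convex combinations of shuffled example/label pairs."""
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]

def interpolated_adv_step(x, y, w, b, rng, eps=0.1, alpha=1.0, lr=0.5):
    """One update combining a mixup loss on clean inputs with a
    mixup loss on FGSM-adversarial inputs."""
    branches = [
        mixup(x, y, alpha, rng),                      # clean branch
        mixup(fgsm(x, y, w, b, eps), y, alpha, rng),  # adversarial branch
    ]
    for xb, yb in branches:
        p = sigmoid(xb @ w + b)
        w = w - lr * xb.T @ (p - yb) / len(xb)
        b = b - lr * np.mean(p - yb)
    return w, b
```

The two-branch structure is the key point: the clean mixup branch preserves standard accuracy while the adversarial branch trains for robustness, which is the trade-off the abstract's CIFAR-10 numbers quantify.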
In the last decade, deep neural networks have achieved tremendous success in many fields of machine ...
Deep neural networks are incredibly vulnerable to crafted, human-imperceptible adversarial perturbat...
In this paper I explore the relationship between boosting and neural networks. We see that our adap...
Extended version of paper published in ACM AISec 2019; first two authors contributed equally. Internat...
Adversarial robustness has become a central goal in deep learning, both in theory and in practice. H...
Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbati...
Deep neural networks have achieved remarkable performance in various applications but are extremely ...
Deep learning plays an important role in various disciplines, such as autonomous driving, information tech...
With the widespread use of machine learning, concerns over its security and reliability have become ...
Deep neural networks are easily attacked by imperceptible perturbations. Presently, adversarial train...
Adversarial robustness continues to be a major challenge for deep learning. A core issue is that rob...
This paper addresses the tradeoff between standard accuracy on clean examples and robustness against...
Adversarial attacks and defenses are currently active areas of research for the deep learning commun...