Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output. Adversarial training is one of the most effective approaches to training robust models against such attacks. Unfortunately, this method is much slower than vanilla training of neural networks since it needs to construct adversarial examples for the entire training data at every iteration. By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training. To this end, we first provide convergence guarantees for adversarial coreset selection. In particular, we show that the convergence ...
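The abstract describes reducing the cost of adversarial training by constructing adversarial examples only for a selected coreset of the training data. Below is a minimal sketch of how such a pipeline could be wired together; it is not the paper's actual method. The loss-based selection proxy in `select_coreset`, the PGD hyperparameters, and the refresh schedule are illustrative assumptions (the paper's selection criterion is gradient-based and comes with convergence guarantees).

```python
# Sketch: adversarial training restricted to a periodically refreshed coreset.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD: signed-gradient steps projected into the eps-ball."""
    x_adv = x.detach().clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def select_coreset(model, dataset, budget, device, batch_size=256):
    """Toy selection: keep the `budget` examples with the largest clean loss.
    A cheap stand-in for the gradient-based coreset criterion in the paper."""
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
    losses = []
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            losses.append(F.cross_entropy(model(x), y, reduction="none").cpu())
    losses = torch.cat(losses)
    budget = min(budget, losses.numel())
    return torch.topk(losses, budget).indices.tolist()

def adversarial_coreset_training(model, dataset, epochs=20, budget=5000,
                                 refresh_every=5, device="cuda"):
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    indices = list(range(min(budget, len(dataset))))      # warm-start subset
    for epoch in range(epochs):
        if epoch % refresh_every == 0:                    # periodically rebuild the coreset
            indices = select_coreset(model, dataset, budget, device)
        subset = torch.utils.data.Subset(dataset, indices)
        loader = torch.utils.data.DataLoader(subset, batch_size=128, shuffle=True)
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y)               # attacks computed only for coreset points
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
```

The key saving is that the inner PGD loop, which dominates the cost of adversarial training, runs only over the coreset rather than the full training set; the selection step itself uses a single cheap forward pass per example.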
In this paper, we introduce a novel neural network training framework that increases a model's adversa...
With adversarial examples, humans can still easily classify the images even though the images are corrupted...
Deep neural networks have achieved remarkable performance in various applications but are extremely ...
Deep Convolutional Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to the...
Adversarial robustness has become a central goal in deep learning, both in the theory and the practi...
Deep learning plays an important role in various disciplines, such as autonomous driving, information tech...
Adversarial training has been considered an imperative component for safely deploying neural network...
Adversarial attacks and defenses are currently active areas of research for the deep learning commun...
With the widespread use of machine learning, concerns over its security and reliability have become ...
Adversarial training (AT) and its variants have spearheaded progress in improving neural network rob...
Adversarial pruning compresses models while preserving robustness. Current methods require access to...
Current machine learning models achieve super-human performance in many real-world applications. Sti...
Extended version of paper published in ACM AISec 2019; first two authors contributed equally. Internat...
In the last decade, deep neural networks have achieved tremendous success in many fields of machine ...