We propose a principled framework that combines adversarial training and provable robustness verification to train certifiably robust neural networks. We formulate training as a joint optimization problem with both empirical and provable robustness objectives, and we develop a novel gradient-descent technique that eliminates bias in stochastic multi-gradients. We provide a theoretical analysis of the convergence of the proposed technique and an experimental comparison with state-of-the-art methods. Results on MNIST and CIFAR-10 show that our method consistently matches or outperforms prior approaches for provable l∞ robustness. Notably, we achieve 6.60% verified test error on MNIST at ε = 0.3, and 66.57% on CIFAR-10 at ε = 8/255.
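To make the multi-objective setup concrete, the following is a minimal, hypothetical sketch of one multi-gradient descent step over two objectives (an empirical robust loss and a provable robust loss). The min-norm convex combination rule shown here is the classic MGDA update for two gradients; it is used purely as an illustration of the multi-gradient idea and is not the paper's bias-corrected stochastic estimator. All function names (`mgda_combine`, `multi_gradient_step`) are invented for this sketch.

```python
import numpy as np

def mgda_combine(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Return the minimum-norm convex combination a*g1 + (1-a)*g2.

    This is the two-objective MGDA rule: the resulting vector is a
    common descent direction for both objectives when one exists.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:          # gradients coincide; either one works
        return g1.copy()
    # Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1].
    a = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return a * g1 + (1.0 - a) * g2

def multi_gradient_step(theta, grad_emp, grad_prov, lr=0.1):
    """One descent step along a shared direction for both objectives."""
    d = mgda_combine(grad_emp(theta), grad_prov(theta))
    return theta - lr * d
```

For example, with orthogonal unit gradients `[1, 0]` and `[0, 1]`, the combined direction is `[0.5, 0.5]`, which decreases both objectives simultaneously. In the stochastic setting, plugging mini-batch gradients directly into this rule yields a biased combined direction, which is the bias the paper's technique is designed to eliminate.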