Training an ensemble of diverse sub-models has empirically proven to be an effective strategy for improving the adversarial robustness of deep neural networks. Current ensemble training methods for image recognition usually encode image labels as one-hot vectors, which neglect dependency relationships between the labels. Here we propose a novel adversarial ensemble training approach that jointly learns the label dependencies and the member models, adaptively exploiting the learned dependencies to promote diversity among the members. We evaluate our approach on the widely used MNIST, FashionMNIST, and CIFAR-10 datasets. Results show that our approach is more robust against black-box attacks than state-of-the-art methods...
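The idea above can be sketched in a few lines. This is a minimal toy illustration, not the paper's actual method: linear members stand in for deep networks, FGSM stands in for the adversarial attack, and a simple weight-alignment penalty stands in for the learned label-dependency diversity mechanism. All names, sizes, and the synthetic data are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary problem (a stand-in for image features).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

K, eps, lr, lam = 3, 0.1, 0.5, 0.05     # members, FGSM budget, step size, diversity weight
W = rng.normal(scale=0.1, size=(K, 10))  # one linear member per row

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    for k in range(K):
        p = sigmoid(X @ W[k])
        # FGSM: move each input along the sign of the input-gradient of its loss.
        gx = (p - y)[:, None] * W[k][None, :]
        X_adv = X + eps * np.sign(gx)
        # Train the member on its own adversarial examples (logistic-loss gradient).
        p_adv = sigmoid(X_adv @ W[k])
        data_grad = X_adv.T @ (p_adv - y) / len(y)
        # Crude diversity regularizer: penalize alignment with the other members,
        # so the ensemble does not collapse onto one decision boundary.
        div_grad = lam * (W.sum(axis=0) - W[k])
        W[k] -= lr * (data_grad + div_grad)

# Ensemble prediction: average the member probabilities.
acc = np.mean((sigmoid(X @ W.T).mean(axis=1) > 0.5) == (y > 0.5))
print(f"clean ensemble accuracy: {acc:.2f}")
```

The diversity term here is deliberately simplistic; the point is only the training-loop shape, in which each member is updated on adversarial examples while a coupling term pushes the members apart.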
Deep Learning has become increasingly popular in the field of computer vision, mostly attaining ne...
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain ...
Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examp...
Learning-based classifiers are susceptible to adversarial examples. Existing defence methods are mos...
In adversarial examples, humans can easily classify the images even though they are corrupted...
Often the best performing deep neural models are ensembles of multiple base-level networks. Unfortun...
Despite their performance, Artificial Neural Networks are not reliable enough ...
Deep neural networks have been applied in computer vision recognition and achieved great performance...
Deep neural network ensembles hold the potential of improving generalization performance for complex...
Adversarial training is an effective learning technique to improve the robustness of deep neural net...
Adversarial training and its variants have become the standard defense against adversarial attacks -...
Due to numerous breakthroughs in real-world applications brought by machine intelligence, deep neura...
In this thesis, we study the robustness and generalization properties of Deep Neural Networks (DNNs)...