Adversarial examples of deep neural networks are receiving ever-increasing attention because they help in understanding and reducing the sensitivity of these networks to their input. This is natural given the growing use of deep neural networks in our everyday lives. Since white-box attacks are almost always successful, it is typically only the distortion of the perturbations that matters in their evaluation. In this work, we argue that speed is important as well, especially since adversarial training requires fast attacks. Given more time, iterative methods can always find better solutions. We investigate this speed-distortion trade-off in some depth and introduce a new attack called boundary projection.
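The speed-distortion trade-off that this abstract describes is easy to observe with a generic iterative attack. The sketch below is not the boundary projection attack from the paper: it is a minimal gradient-sign loop in PyTorch (the stand-in linear model, step sizes, and iteration cap are illustrative assumptions) that stops at the first label flip. Larger steps cross the decision boundary in fewer iterations (faster) but overshoot it more, yielding higher distortion; smaller steps need more iterations but land closer to the boundary.

```python
# Minimal sketch, assuming PyTorch; the tiny linear "model" is a stand-in,
# not a network evaluated in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def min_distortion_attack(x, y, step_size, max_steps=1000):
    """Ascend the loss with gradient-sign steps until the predicted label
    flips, then return the adversarial image. Smaller steps take longer
    but overshoot the decision boundary less (lower distortion)."""
    x_adv = x.clone().detach()
    for _ in range(max_steps):
        if model(x_adv).argmax(dim=1).item() != y.item():
            break  # crossed the decision boundary
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # gradient-sign step, clamped to the valid pixel range
            x_adv = (x_adv + step_size * grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

x = torch.rand(1, 3, 32, 32)        # dummy image in [0, 1]
y = model(x).argmax(dim=1)          # use the model's own prediction as the label
for step_size in (0.05, 0.005):     # coarse/fast vs. fine/slow
    adv = min_distortion_attack(x, y, step_size)
    print(f"step={step_size}: L2 distortion {(adv - x).norm().item():.3f}")
```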
Detecting adversarial examples currently stands as one of the biggest challenges in the field of dee...
The idea of robustness is central and critical to modern statistical analysis. However, despite the ...
Deep Convolutional Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to the...
This thesis is about adversarial attacks and defenses in deep learning. We propose to improve th...
This paper investigates the visual quality of adversarial examples. Recent...
State-of-the-art deep networks for image classification are vulnerable to adversarial examples—miscl...
Deep neural networks (DNNs) have become a powerful tool for image classification tasks in recent yea...
Deep neural networks (DNNs) have recently led to significant improvement in many areas of machine le...
Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbati...
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the fi...
Throughout the past five years, the susceptibility of neural networks to minimal adversarial perturb...
Recent advancements in the field of deep learning have substantially increased the adoption rate of ...
Adversarial attacks represent a threat to every deep neural network. They are ...