Computer vision applications such as image classification and object detection often suffer from adversarial examples: adding a small amount of carefully crafted noise to an input image can trick a model into misclassifying it. Over the years, many defense mechanisms have been proposed, and different researchers have made seemingly contradictory claims about their effectiveness. This dissertation first analyzes possible adversarial models and proposes an evaluation framework for comparing defenses under more powerful and realistic adversary strategies. It then proposes two randomness-based defense mechanisms, Random Spiking (RS) and MoNet, to improve the robustness of image classifiers. Random Spiking generalizes dropout and...
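The abstract states that Random Spiking generalizes dropout. A minimal sketch of one plausible reading, assuming RS replaces a random subset of activations with random noise values (whereas standard dropout always replaces them with zero, making dropout the special case where the noise is identically zero); the function names and the uniform noise range here are illustrative assumptions, not the dissertation's exact formulation:

```python
import numpy as np

def dropout(x, p, rng):
    """Standard dropout: zero each unit independently with probability p."""
    mask = rng.random(x.shape) < p
    return np.where(mask, 0.0, x)

def random_spiking(x, p, rng, low=0.0, high=1.0):
    """Sketch of a Random Spiking-style layer (illustrative assumption):
    with probability p, replace a unit's activation with a random value
    drawn uniformly from [low, high] instead of zeroing it. Choosing
    low == high == 0.0 recovers standard dropout."""
    mask = rng.random(x.shape) < p
    noise = rng.uniform(low, high, size=x.shape)
    return np.where(mask, noise, x)
```

With `low = high = 0.0` and identically seeded generators, `random_spiking` produces exactly the same output as `dropout`, which is the sense in which this sketch "generalizes" it.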
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer...
In recent years, neural networks have become the default choice for image classification and many ot...
Abstract. In many security applications a pattern recognition system faces an adversarial classifica...
Modern deep learning models for the computer vision domain are vulnerable against adversarial attack...
Deep learning has improved the performance of many computer vision tasks. However, the features that...
Recently, techniques have been developed to provably guarantee the robustness of a classifier to adv...
International audienceThis paper investigates the theory of robustness against adversarial attacks. ...
We address the problem of data-driven image manipulation detection in the presence of an attacker wi...
Machine learning is increasingly used to make sense of our world in areas from spam detection, recom...
Neural networks' lack of robustness against attacks raises concerns in security-sensitive settings s...
In many security applications a pattern recognition system faces an adversarial classification probl...
We investigate if the random feature selection approach proposed in [1] to improve the robustness of...
Recently, much attention in the literature has been given to "adversarial examples", input da...