Recent research has shown Deep Neural Networks (DNNs) to be vulnerable to adversarial examples that induce desired misclassifications in the models. Such risks impede the application of machine learning in security-sensitive domains. Several defense methods have been proposed against adversarial attacks to detect adversarial examples at test time or to make machine learning models more robust. However, while existing methods are quite effective under the black-box threat model, where the attacker is not aware of the defense, they are relatively ineffective under the white-box threat model, where the attacker has full knowledge of the defense. In this thesis, we propose ExAD, a framework to detect adversarial examples using an ensemble of explanati...
Despite the enormous performance of deep neural networks (DNNs), recent studie...
Deep Neural Networks (DNNs) are powerful for classification tasks, finding the potential links bet...
In this paper, we present two different novel approaches to defend against adversarial examples in n...
Machine Learning algorithms provide astonishing performance in a wide range of tasks, including sens...
Despite the impressive performances reported by deep neural networks in different application domain...
State-of-the-art deep neural networks (DNNs) are highly effective in solving many complex real-world...
With the advancement of accelerated hardware in recent years, there has been a surge in the developm...
With intentional feature perturbations to a deep learning model, the adversary generates an adversar...
Convolutional Neural Networks (CNNs) have been at the frontier of the revolution within the field of...
Detecting adversarial examples currently stands as one of the biggest challenges in the field of dee...
Intrinsic susceptibility of deep learning to adversarial examples has led to a plethora of attack te...
Neural networks have recently been used to solve many real-world tasks such as image recognition and...
Deep learning technology achieves state-of-the-art results in many computer vision tasks. However,...
The robustness of neural networks is challenged by adversarial examples that contain almost impercep...
Deep neural networks (DNNs) provide excellent performance in image recognition, speech recognition, ...