After the discovery of adversarial examples and their adverse effects on deep learning models, many studies focused on finding more diverse methods to generate these carefully crafted samples. Although empirical results on the effectiveness of adversarial example generation methods against defense mechanisms are discussed in detail in the literature, an in-depth study of the theoretical properties and the perturbation effectiveness of these adversarial attacks has largely been lacking. In this paper, we investigate the objective functions of three popular methods for adversarial example generation: the L-BFGS attack, the Iterative Fast Gradient Sign attack, and Carlini & Wagner’s attack. Specifically, we perform a comparative and form...
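Since this abstract compares the objective functions of the three attacks, a brief recap of their standard formulations from the original papers may help; the notation below (input x, target label t, classification loss J, logits Z, perturbation δ) is supplied here for illustration and is not taken from the truncated abstract itself.

```latex
% Standard objectives as stated in the original papers (Szegedy et al., 2014;
% Kurakin et al., 2017; Carlini & Wagner, 2017); notation chosen here for
% illustration, not drawn from the abstract above.

% L-BFGS attack: box-constrained penalty formulation toward target label t
\min_{\delta}\; c\,\lVert\delta\rVert_2 + J(x+\delta,\, t)
\quad \text{s.t.} \quad x+\delta \in [0,1]^n

% Iterative FGSM (targeted variant): fixed-size signed-gradient steps on J
% toward t, clipped to the \epsilon-ball around x after every step
x'_0 = x, \qquad
x'_{k+1} = \mathrm{Clip}_{x,\epsilon}\bigl\{\, x'_k
  - \alpha\,\mathrm{sign}\bigl(\nabla_x J(x'_k,\, t)\bigr) \bigr\}

% Carlini & Wagner (L2): penalized logit-margin objective; \kappa sets the
% desired confidence of the targeted misclassification
\min_{\delta}\; \lVert\delta\rVert_2^2 + c \cdot f(x+\delta),
\qquad
f(x') = \max\Bigl(\max_{i \neq t} Z(x')_i - Z(x')_t,\; -\kappa\Bigr)
```

The three differ mainly in how they trade perturbation size against misclassification: L-BFGS and C&W solve a penalized optimization problem, while I-FGSM takes fixed-size signed-gradient steps constrained to an ε-ball.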
We propose new, more efficient targeted white-box attacks against deep neural networks. Our attacks ...
Due to the vulnerability of deep neural networks, the black-box attack has drawn great attention fro...
Recent years have witnessed the deployment of adversarial attacks to evaluate the robustness of Neur...
Recent advancements in the field of deep learning have substantially increased the adoption rate of ...
Recently, much attention in the literature has been given to "adversarial examples", input da...
Convolutional neural networks have outperformed humans in image recognition tasks, but they remain v...
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the fi...
As modern technology is rapidly progressing, more applications are utilizing aspects of machine lear...
Adversarial attacks and defenses are currently active areas of research for the deep learning commun...
In recent years, deep neural networks have demonstrated outstanding performance in many machine lear...
As machine learning is being integrated into more and more systems, such as autonomous vehicles or m...
In this paper, we show that adversarial training-time attacks using a few pixel modifications can cause...
Convolutional neural networks (CNNs) have proven their effectiveness at image classification...
A number of online services nowadays rely upon machine learning to extract valuable information from...
Deep neural networks are vulnerable to adversarial attacks. Most white-box attacks are based on the ...