Deep learning has achieved great success across many types of applications in recent years. At the same time, it has been found that deep neural networks (DNNs) can be easily fooled by adversarial input samples, a vulnerability that raises major concerns in security-sensitive environments. Research on attacking and defending DNNs with adversarial examples has therefore drawn great attention. The goal of this paper is to review the types of adversarial attacks and defenses, describe the state-of-the-art methods in each group, and compare their results. In addition, we present some of the top-scoring submissions to the 2017 Neural Information Processing Systems (NIPS) adversarial attacks and defenses competition, describe their solution models, and demonstrate their results. This ...
Thesis (Ph.D.)--University of Washington, 2019. Deep neural networks have achieved remarkable success ...
As modern technology rapidly progresses, more applications are utilizing aspects of machine learning ...
This article proposes a novel yet efficient defence method against adversarial attack(er)s ...
Although Deep Neural Networks (DNNs) have achieved great success in various applications, investigat...
Deep neural networks (DNNs) provide excellent performance in image recognition, speech recognition, ...
Deep neural networks (DNNs) have rapidly advanced the state of the art in many important, difficult ...
In recent years, deep neural networks have been shown to lack robustness and to be vulnerable ...
Prepared for: NAVAIR. The Navy and Department of Defense are prioritizing the rapid adoption of Artifi...
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neu...
Deep Neural Networks (DNNs) have achieved great success in a wide range of applications, such as ima...
Despite the impressive performance reported by deep neural networks in different application domain...
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method ...
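For context on the fast gradient sign method (FGSM) named in the entry above: it perturbs an input by a small step epsilon in the direction of the sign of the loss gradient with respect to that input. The sketch below is a minimal illustrative PyTorch implementation, not drawn from any of the works listed here; the function name fgsm_attack, the epsilon value, and the assumption that inputs are images scaled to [0, 1] are assumptions made for this example.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # One-step fast gradient sign method: move each input element by
        # epsilon in the direction that increases the classification loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        # Gradient of the loss with respect to the input only.
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + epsilon * grad.sign()
        # Keep the perturbed input inside the valid image range [0, 1].
        return x_adv.clamp(0.0, 1.0).detach()

A larger epsilon produces a stronger but more visible perturbation; iterative variants of the attack apply the same step several times with a smaller step size.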