Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithm at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few...
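The core idea above—a single procedural noise pattern, scaled into a small perturbation budget and added to every input—can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the function names (`perlin_noise`, `procedural_uap`) and the choice to rescale the pattern into an L-infinity ball of radius `eps` are assumptions for this sketch; the paper also explores other noise families and parameters.

```python
import numpy as np

def perlin_noise(size, period, seed=0):
    """Classic 2D Perlin (gradient) noise on a size x size grid; period sets the frequency."""
    rng = np.random.default_rng(seed)
    n = size // period + 2  # gradient lattice is slightly larger than the sample grid
    angles = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    lin = np.arange(size) / period
    xs, ys = np.meshgrid(lin, lin, indexing="ij")
    x0, y0 = xs.astype(int), ys.astype(int)   # lattice cell of each sample point
    xf, yf = xs - x0, ys - y0                 # offset inside the cell

    def fade(t):  # Perlin's smoothstep, so cell boundaries blend smoothly
        return 6 * t**5 - 15 * t**4 + 10 * t**3

    def grad_dot(ix, iy, dx, dy):  # dot of lattice gradient with offset vector
        g = grads[ix, iy]
        return g[..., 0] * dx + g[..., 1] * dy

    u, v = fade(xf), fade(yf)
    n00 = grad_dot(x0,     y0,     xf,     yf)
    n10 = grad_dot(x0 + 1, y0,     xf - 1, yf)
    n01 = grad_dot(x0,     y0 + 1, xf,     yf - 1)
    n11 = grad_dot(x0 + 1, y0 + 1, xf - 1, yf - 1)
    return (n00 * (1 - u) + n10 * u) * (1 - v) + (n01 * (1 - u) + n11 * u) * v

def procedural_uap(size, eps, period=32, seed=0):
    """Rescale a procedural noise pattern so its L-infinity norm equals eps."""
    noise = perlin_noise(size, period, seed)
    return eps * noise / np.max(np.abs(noise))
```

In use, the same `delta = procedural_uap(224, 8/255)` would be added to every image (clipping `x + delta` back to the valid pixel range), and the noise parameters (here `period` and `seed`) would be searched for patterns with high universal evasion rates.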
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack...
Deep learning based vision systems are widely deployed in today's world. The backbones of these syst...
From simple time series forecasting to computer security and autonomous systems, machine learning (M...
Machine learning models are susceptible to adversarial perturbations: small changes to input that ca...
Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to ...
In this thesis, we study the robustness and generalization properties of Deep Neural Networks (DNNs)...
Deep learning has improved the performance of many computer vision tasks. However, the features that...
Deep neural networks (DNNs) serve as a backbone of many image, language and speech processing system...
Neural networks are known to be vulnerable to adversarial examples, inputs that have been intentiona...
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturba...
Adversarial attacks deceive deep neural network models by adding imperceptibly small but well-design...
Deep learning plays an important role in various disciplines, such as auto-driving, information tech...
Deep Neural Networks have been found vulnerable recently. A kind of well-designed inputs, which cal...
Deep learning models are known to be vulnerable not only to input-dependent adversarial attacks but ...
Deep learning has seen tremendous growth, largely fueled by more powerful computers, the availabilit...