The literature on adversarial attacks in computer vision typically focuses on pixel-level perturbations. These tend to be very difficult to interpret. Recent work that manipulates the latent representations of image generators to create "feature-level" adversarial perturbations gives us an opportunity to explore interpretable adversarial attacks. We make three contributions. First, we observe that feature-level attacks provide useful classes of inputs for studying the representations in models. Second, we show that these adversaries are versatile and highly robust. We demonstrate that they can be used to produce targeted, universal, disguised, physically-realizable, and black-box attacks at the ImageNet scale. Third, we show how these adver...
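To make the mechanism concrete, below is a minimal sketch of a feature-level attack: instead of optimizing a pixel perturbation, a perturbation is optimized over a generator's latent code so that the generated image is assigned a chosen target class. The `generator`, `classifier`, and hyperparameters here are illustrative stand-ins, not the specific models or settings used in the paper.

```python
# Minimal sketch of a feature-level (latent-space) adversarial attack.
# Assumptions: `generator` is a differentiable image generator mapping a latent
# vector to an image, and `classifier` maps an image to class logits. Both are
# hypothetical stand-ins for illustration only.

import torch

def feature_level_attack(generator, classifier, z, target_class,
                         steps=200, lr=0.05, eps=2.0):
    """Optimize a latent perturbation `delta` so that generator(z + delta)
    is classified as `target_class`."""
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class], device=z.device)

    for _ in range(steps):
        x_adv = generator(z + delta)          # image from the perturbed latent
        logits = classifier(x_adv)            # classifier's prediction on it
        loss = torch.nn.functional.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the latent perturbation bounded so the image stays close
        # to the one produced by the original latent code.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (z + delta).detach(), generator(z + delta).detach()
```

Because the perturbation lives in the generator's latent space, the resulting changes tend to be semantic (object-level features) rather than high-frequency pixel noise, which is what makes such attacks comparatively interpretable.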
Methods for model explainability have become increasingly critical for testing the fairness and soun...
State-of-the-art generative model-based attacks against image classifiers overwhelmingly focus on si...
In spite of their successful application in many fields, machine learning models today suffer from not...
Research has shown that deep neural networks are vulnerable to malicious attacks, where adversari...
The vulnerability of deep neural networks to adversarial attacks has been widely demonstrated (e.g.,...
In recent years, the topic of explainable machine learning (ML) has been extensively researched. Up ...
Image classification systems are known to be vulnerable to adversarial attacks, which are impercepti...
Convolutional neural networks (CNNs) are widely used in computer vision, but can be deceived by care...
Neural networks are known to be vulnerable to adversarial examples, inputs that have been intentiona...
Are foundation models secure from malicious actors? In this work, we focus on the image input to a v...
Deep Neural Networks (DNNs) have achieved great success in a wide range of applications, such as ima...
Deep Convolutional Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to the...
In recent years, deep neural networks have been deceived rather easily by adversarial attack methods...
A growing body of work has shown that deep neural networks are susceptible to adversarial examples. ...
The existence of adversarial attacks on convolutional neural networks (CNNs) questions the fitness of...