With the rapid development of deep neural networks (DNNs), DNN-based face recognition technologies have achieved great success and are widely used in applications that require high accuracy and robustness against attacks. However, deep neural networks are known to be vulnerable to adversarial attacks, which are carried out with images carrying well-designed perturbations. To enhance the security of DNN-based face recognition, we need a better understanding of the mechanisms behind the related technologies. In this paper, we propose a feature-level supportive method, biasGAN, to improve the performance of universal adversarial attack methods. We insert this image-to-image translation preprocessor before adversarial example generation. BiasGAN will...
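The pipeline sketched in the abstract, a GAN-based image-to-image preprocessor applied to a face image before a universal adversarial perturbation is added, can be pictured with the minimal PyTorch sketch below. This is not the paper's implementation: the `BiasGenerator` architecture, the `attack_with_preprocessor` helper, the 112x112 input size, and the 8/255 L-infinity budget are all illustrative assumptions.

```python
# Minimal sketch (assumed names and shapes, not the authors' code) of a
# "preprocess, then perturb" attack pipeline: a GAN-style image-to-image
# translator biases the face image, then a precomputed universal
# adversarial perturbation is added under an L_inf budget.
import torch
import torch.nn as nn


class BiasGenerator(nn.Module):
    """Toy stand-in for the image-to-image translation preprocessor."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Residual translation keeps the output close to the input image.
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)


def attack_with_preprocessor(image, universal_delta, epsilon=8 / 255):
    """Preprocess with the generator, then add a universal perturbation."""
    generator = BiasGenerator().eval()
    with torch.no_grad():
        biased = generator(image)                    # preprocessing step
    delta = universal_delta.clamp(-epsilon, epsilon) # enforce the L_inf budget
    return (biased + delta).clamp(0.0, 1.0)


# Usage with random stand-in data; a real pipeline would load face images
# and a universal perturbation optimized against the target recognizer.
img = torch.rand(1, 3, 112, 112)
uap = torch.zeros_like(img).uniform_(-8 / 255, 8 / 255)
adv = attack_with_preprocessor(img, uap)
print(adv.shape)
```

In this sketch the generator is untrained and only serves to show where the preprocessor sits in the pipeline; in the described method it would be trained so that the translated images make the universal attack more effective.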
The quality of images produced by generative adversarial networks (GAN) is commonly a trade-off betw...
Deep Neural Networks (DNNs) have achieved great success in a wide range of applications, such as ima...
Deep neural networks (DNNs) are susceptible to adversarial attacks, including the recently introduce...
Deep neural network (DNN) architecture-based models have high expressive power and learning capacity...
Images posted online present a privacy concern in that they may be used as reference examples for a ...
Deep Learning methods have become state-of-the-art for solving tasks such as Face Recognition (FR). ...
Image classification has undergone a revolution in recent years due to the high performance of new d...
In deep learning-based image classification, adversarial examples, in which the input is deliberately modified by adding small ...
Albeit displaying remarkable performance across a range of tasks, Deep Neural Networks (DNNs) are hi...
Deep neural networks (DNNs) have become essential for solving diverse complex problems and have ach...
Deepfakes pose severe threats of visual misinformation to our society. One representative deepfake a...
Image classification systems are known to be vulnerable to adversarial attacks, which are impercepti...
Convolutional neural networks (CNNs) are widely used in computer vision, but can be deceived by care...