Adversarial examples easily mislead vision systems based on deep neural networks (DNNs) trained with the softmax cross entropy (SCE) loss. This vulnerability stems from the fact that SCE drives DNNs to fit the training examples, while the resulting feature distributions of training and adversarial examples remain misaligned. Several state-of-the-art methods start by improving the inter-class separability of training examples through modified loss functions; we argue that these approaches ignore the adversarial examples themselves and thus achieve only limited robustness to adversarial attacks. In this paper, we exploit the inference region, which inspires us to apply margin-like inference information to SCE, resulting in a novel in...
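The abstract is cut off before the proposed loss is spelled out, so its exact formulation is not recoverable here. As a rough illustration of the general idea it names, folding margin-like information into the softmax cross entropy (in the spirit of large-margin softmax losses), below is a minimal NumPy sketch; the function names, the fixed margin value, and the per-example formulation are illustrative assumptions, not the paper's method.

    import numpy as np

    def sce_loss(logits, label):
        # Standard softmax cross entropy for a single example.
        z = logits - logits.max()                # shift for numerical stability
        log_probs = z - np.log(np.exp(z).sum())  # log-softmax
        return -log_probs[label]

    def margin_sce_loss(logits, label, margin=0.5):
        # Margin-augmented SCE (assumed form): subtract a fixed margin from
        # the true-class logit before the softmax, so the true class must
        # beat the others by at least `margin` to recover the original loss.
        # The margin of 0.5 is an illustrative choice, not from the paper.
        z = logits.astype(float)
        z[label] -= margin
        return sce_loss(z, label)

    # Example: three-class logits where class 0 is the true label; the
    # margin variant yields a strictly larger loss on the same input.
    logits = np.array([2.0, 1.0, 0.1])
    print(sce_loss(logits, 0), margin_sce_loss(logits, 0))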
State-of-the-art deep networks for image classification are vulnerable to adversarial examples—miscl...
In recent years, deep neural networks have been deceived rather easily by adversarial attack methods...
Recent research reveals that deep neural networks are sensitive to label noise, hence leading to po...
The idea of robustness is central and critical to modern statistical analysis. However, despite the ...
Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various tasks in ...
Deep learning has shown outstanding performance in several applications including image classificati...
Deep neural networks have achieved impressive results in many image classification tasks. However, s...
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and ...
Deep neural networks are nowadays the state-of-the-art method for many pattern recognition problems. As ...
Recent studies on the adversarial vulnerability of neural networks have shown that models trained wi...
Deep Learning has become immensely popular in the field of computer vision, mostly attaining ne...
In this thesis, we study the robustness and generalization properties of Deep Neural Networks (DNNs)...
In this paper, we show that adversarial training-time attacks by a few pixel modifications can cause...
Deep Convolutional Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to the...
Deep learning has improved the performance of many computer vision tasks. However, the features that...