Deep neural networks (DNNs) have achieved superior performance in various prediction tasks, but can be very vulnerable to adversarial examples or perturbations. Therefore, it is crucial to measure the sensitivity of DNNs to various forms of perturbations in real applications. We introduce a novel perturbation manifold and its associated influence measure to quantify the effects of various perturbations on DNN classifiers. Such perturbations include various external and internal perturbations to input samples and network parameters. The proposed measure is motivated by information geometry and provides desirable invariance properties. We demonstrate that our influence measure is useful for four model building tasks: detecting potential ‘outl...
Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that h...
As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a variety of fields, ther...
Despite having high accuracy, neural nets have been shown to be susceptible to adversarial examples,...
Deep neural networks (DNNs) have been shown to be vulnerable against adversarial examples (AEs), whi...
We propose a novel approach to ranking Deep Learning (DL) hyper-parameters through the application o...
Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to ...
In general, Deep Neural Networks (DNNs) are evaluated by the generalization performance measured on ...
Attribution is the problem of finding which parts of an image are the most responsible for the outpu...
Classical sensitivity analysis of machine learning regression models is a topic sparse in literature...
In order for machine learning to be trusted in many applications, it is critical to be able to relia...
The input-output mappings learned by state-of-the-art neural networks are significantly discontinuou...
Deep neural networks have recently shown impressive classification performance on a diverse set of v...
The sensitivity of a neural network's output to its input perturbation is an important issue with bo...
The behaviors of deep neural networks (DNNs) are notoriously resistant to human interpretations. In ...