Attribution is the problem of finding which parts of an image are the most responsible for the output of a deep neural network. An important family of attribution methods is based on measuring the effect of perturbations applied to the input image, either via exhaustive search or by finding representative perturbations via optimization. In this paper, we discuss some of the shortcomings of existing approaches to perturbation analysis and address them by introducing the concept of extremal perturbations, which are theoretically grounded and interpretable. We also introduce a number of technical innovations to compute these extremal perturbations, including a new area constraint and a parametric family of smooth perturbations, which allow us ...
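The abstract describes the method only at a high level; the following is a minimal PyTorch sketch of a perturbation optimization in that spirit, not the authors' implementation (a reference implementation exists in TorchRay). Everything here is an illustrative assumption: bilinear upsampling of a low-resolution mask stands in for the paper's smooth parametric masks, a soft penalty on the sorted mask values stands in for the exact area constraint, and the preservation objective, penalty weight, and hyperparameters are arbitrary choices.

```python
# Sketch of extremal-perturbation-style attribution: find a mask of
# roughly a given area that preserves the network's score for a target
# class. Simplified relative to the paper; see lead-in for assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

def extremal_perturbation_sketch(model, image, target, area=0.1,
                                 mask_res=16, steps=400, lr=0.05):
    """image: (1, 3, H, W) preprocessed tensor; target: class index."""
    _, _, H, W = image.shape
    # Low-resolution mask parameters; sigmoid keeps values in [0, 1].
    logits = torch.zeros(1, 1, mask_res, mask_res, requires_grad=True)
    optimizer = torch.optim.Adam([logits], lr=lr)
    n = mask_res * mask_res
    # Reference vector: an `area` fraction of the mask values should be 1.
    ref = torch.zeros(n)
    ref[: int(area * n)] = 1.0
    for _ in range(steps):
        mask_lo = torch.sigmoid(logits)
        # Bilinear upsampling yields a smooth full-resolution mask
        # (a stand-in for the paper's smooth parametric family).
        mask = F.interpolate(mask_lo, size=(H, W), mode="bilinear",
                             align_corners=False)
        # Preservation objective: keep only the masked region. The
        # deleted region is zeroed here; the paper perturbs it instead
        # (e.g., blur or noise).
        score = model(image * mask)[0, target]
        # Soft area constraint: sorted mask values should match `ref`.
        sorted_vals = mask_lo.flatten().sort(descending=True).values
        area_loss = ((sorted_vals - ref) ** 2).mean()
        loss = -score + 20.0 * area_loss  # weight 20.0 is illustrative
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.sigmoid(logits).detach()

# Usage: find a mask of ~10% area preserving the "tabby cat" logit.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
mask = extremal_perturbation_sketch(model, image, target=281, area=0.1)
```

Sweeping the `area` parameter over a range of values is what yields the family of attributions the abstract refers to: the smallest area at which the target score is preserved characterizes how concentrated the evidence is.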
In the past five years, deep learning methods have become state-of-the-art in solving various invers...
Adversarial training has been shown to regularize deep neural networks in addition to increasing the...
The clear transparency of Deep Neural Networks (DNNs) is hampered by complex internal structures and...
The input-output mappings learned by state-of-the-art neural networks are significantly discontinuou...
We develop new theoretical results on matrix perturbation to shed light on the impact of architectur...
Saliency methods are widely used to visually explain 'black-box' deep learning model outputs to huma...
In the past decade, deep learning has fueled a number of exciting developments in artificial intelli...
In this paper we address the issue of output instability of deep neural networks: small perturbation...
Although neural networks perform very well on the image classification task, they are still vulnerab...
State-of-the-art deep networks for image classification are vulnerable to adversarial examples—miscl...
Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image...
As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a variety of fields, ther...