Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lies on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that, in the limit, BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion MNIST datasets ...
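To make the setting concrete, below is a minimal sketch (not the paper's code) of the gradient-based attack the abstract refers to, applied to a BNN whose posterior predictive is approximated by Monte Carlo weight samples (e.g. draws from HMC or VI). The function names (`bnn_expected_loss`, `fgsm_on_bnn`), the PyTorch dependency, the sample count, and the epsilon value are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch, assuming PyTorch: an FGSM-style attack on a BNN's posterior predictive,
# approximated by an ensemble of posterior weight samples. Names and values are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def bnn_expected_loss(x, y, posterior_nets):
    """Average cross-entropy over posterior weight samples (MC estimate of E_w[L(x, w)])."""
    losses = [F.cross_entropy(net(x), y) for net in posterior_nets]
    return torch.stack(losses).mean()

def fgsm_on_bnn(x, y, posterior_nets, epsilon=0.1):
    """Perturb x along the sign of the input gradient of the posterior-averaged loss.
    If the expected input gradient vanishes (as argued for the large-data limit),
    this direction carries no useful attack signal."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = bnn_expected_loss(x_adv, y, posterior_nets)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()

# Toy usage with randomly initialized networks standing in for posterior samples.
nets = [nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)) for _ in range(8)]
x = torch.rand(4, 1, 28, 28)          # MNIST-shaped inputs
y = torch.randint(0, 10, (4,))
x_adv = fgsm_on_bnn(x, y, nets, epsilon=0.1)
```

The key quantity here is the input gradient of the posterior-averaged loss: the abstract's robustness claim amounts to this gradient becoming uninformative in the large-data, overparametrized limit, so the perturbation direction computed above degrades toward noise.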