This entry accompanies the main paper "Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness", NeurIPS 2021 Bayesian Deep Learning (BDL) Workshop, and its PyTorch-based code implementation. Abstract: This work explores the potency of stochastic competition-based activations, namely Stochastic Local Winner-Takes-All (LWTA), against powerful (gradient-based) white-box and black-box adversarial attacks; we especially focus on Adversarial Training settings. In our work, we replace the conventional ReLU-based nonlinearities with blocks comprising locally and stochastically competing linear units. Each network layer thus yields a sparse output, depending on the outcome of winner sampling in each block. We rely on the Variational B...
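As a rough illustration of the mechanism described above, the following is a minimal PyTorch sketch of a stochastic LWTA layer: linear units are grouped into competing blocks, a single winner per block is sampled (via the Gumbel-Softmax relaxation during training, Categorical sampling at inference), and only the winner's activation is propagated, producing a sparse layer output. The class name, block sizes, and temperature below are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticLWTA(nn.Module):
    """Sketch of a stochastic LWTA block replacing a ReLU nonlinearity.

    The linear layer's outputs are reshaped into `num_blocks` blocks of
    `units` competing linear units; within each block exactly one winner
    is sampled and passed through, all other units are zeroed out.
    Illustrative reimplementation only, not the authors' code.
    """
    def __init__(self, in_features, num_blocks, units=2, temperature=0.67):
        super().__init__()
        self.num_blocks, self.units = num_blocks, units
        self.temperature = temperature
        self.linear = nn.Linear(in_features, num_blocks * units)

    def forward(self, x):
        # Linear pre-activations, reshaped to (batch, blocks, competing units)
        a = self.linear(x).view(-1, self.num_blocks, self.units)
        if self.training:
            # Differentiable winner sampling: hard one-hot mask on the
            # forward pass, relaxed gradients on the backward pass
            mask = F.gumbel_softmax(a, tau=self.temperature, hard=True, dim=-1)
        else:
            # Sample the winner of each block from the induced Categorical
            idx = torch.distributions.Categorical(logits=a).sample()
            mask = F.one_hot(idx, num_classes=self.units).to(a.dtype)
        # Zero out all losers: each block contributes exactly one nonzero unit
        return (a * mask).view(-1, self.num_blocks * self.units)


# Usage sketch: a small feed-forward stack with stochastic LWTA blocks
model = nn.Sequential(
    StochasticLWTA(784, num_blocks=256, units=2),   # -> 512 sparse features
    StochasticLWTA(512, num_blocks=128, units=2),   # -> 256 sparse features
    nn.Linear(256, 10),
)
```

The winner sampling is what injects stochasticity into the forward pass: repeated evaluations of the same input can activate different units, which is the property the paper leverages against gradient-based attacks.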
We propose a principled framework that combines adversarial training and provable robustness verific...
Standard adversarial attacks change the predicted class label of a selected image by adding speciall...
Recent works show that random neural networks are vulnerable against adversarial attacks [Daniely an...
This work explores the potency of stochastic competition-based activations, namely Stochastic Local ...
This work addresses adversarial robustness in deep learning by considering deep networks with stoch...
This entry accompanies the main paper "Local Competition and Stochasticity for Adversarial Robustne...
This work aims to address the long-established problem of learning diversified representations. To t...
This paper introduces stochastic sparse adversarial attacks (SSAA), standing a...
Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations t...
This work addresses meta-learning (ML) by considering deep networks with stochastic local winner-tak...
This paper investigates the theory of robustness against adversarial attacks. ...
This post contains the code and the few-shot benchmarks Omniglot and Mini-Imagenet, addressed for our sub...
Robustness to adversarial attacks was shown to require a larger model capacity, and thus a larger me...
Despite the tremendous success of deep neural networks across various tasks, their vulnerability to ...
Enhancing model robustness under new and even adversarial environments is a crucial milestone toward...