Neural networks' lack of robustness against attacks raises concerns in security-sensitive settings such as autonomous vehicles. While many countermeasures may look promising, only a few withstand rigorous evaluation. Defenses that use random transformations (RT) have shown impressive results, particularly BaRT (Raff et al., 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, leaving its robustness properties poorly understood. The stochastic nature of these defenses makes evaluation more challenging and renders many attacks designed for deterministic models inapplicable. First, we show that the BPDA attack (Athalye et al., 2018a) used in BaRT's evaluation is ineffective and likely overestimates its robustness. We then attempt ...
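The following is a minimal sketch (not the paper's implementation) of the kind of BPDA-style PGD attack discussed above, applied to a randomized-transformation defense. It assumes a PyTorch classifier `model` that returns logits and a non-differentiable input transform `random_transform`; both names and all hyperparameters are illustrative assumptions. The forward pass goes through the transform, the backward pass treats it as the identity, and gradients are averaged over several transform draws.

import torch
import torch.nn.functional as F

def bpda_pgd(model, random_transform, x, y,
             eps=8/255, alpha=2/255, steps=40, samples=10):
    """L-inf PGD where the non-differentiable transform is approximated
    by the identity on the backward pass (BPDA), with the gradient
    averaged over several random transform draws."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for _ in range(samples):
            # Apply the (non-differentiable) random transform outside the graph...
            t = random_transform(x_adv.detach())
            # ...then re-attach it so the backward pass sees an identity map.
            t = x_adv + (t - x_adv).detach()
            loss = F.cross_entropy(model(t), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascent step, then projection onto the eps-ball and the valid pixel range.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

Averaging the gradient over several transform draws is one common way to handle the defense's stochasticity; the abstract's point is that a BPDA-based evaluation of this kind can still be too weak, so results from such a sketch should not be read as a reliable robustness estimate.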
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and ...
This paper investigates the theory of robustness against adversarial attacks. ...
The paper presents a new defense against adversarial attacks for deep neural networks. We demonstrat...
Despite the tremendous success of deep neural networks across various tasks, their vulnerability to ...
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neu...
Deep Neural Networks (DNNs) are robust against intra-class variability of imag...
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer...
From simple time series forecasting to computer security and autonomous systems, machine learning (M...
A defense-by-randomization framework is proposed as an effective defense mechanism against...
Modern image classification approaches often rely on deep neural networks, which have shown pronounc...
We investigate if the random feature selection approach proposed in [1] to improve the robustness of...
Computer vision applications such as image classification and object detection often suffer from adv...
An established way to improve the transferability of black-box evasion attacks is to cr...