Linear classifiers are well known to be vulnerable to adversarial attacks: they may predict incorrect labels for inputs that have been adversarially modified with small perturbations. However, this phenomenon has not been properly understood in the context of sketch-based linear classifiers, typically used in memory-constrained settings, which rely on random projections of the features for model compression. In this paper, we propose novel Fast-Gradient-Sign Method (FGSM) attacks for sketched classifiers in full, partial, and black-box information settings with regard to their internal parameters. We perform extensive experiments on the MNIST dataset to characterize their robustness as a function of the perturbation budget. Our results suggest ...
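To make the attack construction concrete, here is a minimal sketch of a full-information FGSM attack on a sketched linear classifier, assuming a Gaussian sketch matrix S, weights v trained in the sketched space, and a binary logistic loss; the dimensions, seed, and variable names are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: ambient feature dimension d, sketch dimension k.
d, k = 784, 64
# Gaussian sketch (random projection) matrix; the classifier only ever sees S @ x.
S = rng.normal(size=(k, d)) / np.sqrt(k)
# Hypothetical weights learned in the sketched space (stand-in for a trained model).
v = rng.normal(size=k)

def score(x):
    """Decision score of the sketched linear classifier: v^T S x."""
    return v @ (S @ x)

def fgsm_full_info(x, y, eps):
    """Full-information FGSM: the attacker knows both S and v.

    For the binary logistic loss L(x) = log(1 + exp(-y * v^T S x)),
    the input gradient is -y * sigmoid(-y * v^T S x) * S^T v; the
    (positive) sigmoid factor does not affect the sign, so the FGSM
    direction reduces to -y * sign(S^T v).
    """
    grad_direction = -y * (S.T @ v)
    return x + eps * np.sign(grad_direction)  # L_inf perturbation of budget eps

# Toy usage: perturb a random input away from its current predicted label.
x = rng.normal(size=d)
y = 1.0 if score(x) >= 0 else -1.0
x_adv = fgsm_full_info(x, y, eps=0.1)
print(score(x), score(x_adv))  # the adversarial score moves toward the wrong side
```

In the partial- and black-box settings mentioned above, the attacker would not know S or v and would instead have to approximate the direction sign(S^T v), e.g., from model queries; the sketch above covers only the full-information case.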
Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is a...
Recently, techniques have been developed to provably guarantee the robustness of a classifier to adv...
From simple time series forecasting to computer security and autonomous systems, machine learning (M...
Machine-learning techniques are widely used in security-related applications, like spam and malware d...
This paper investigates the theory of robustness against adversarial attacks. ...
After the discovery of adversarial examples and their adverse effects on deep learning models, many ...
Modern machine learning algorithms are able to reach an astonishingly high level of performance in ...
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and ...
Throughout the past five years, the susceptibility of neural networks to minimal adversarial perturb...
Despite the widespread use of machine learning in adversarial settings such as computer security, re...
Recently, Kannan et al. [2018] proposed several logit regularization methods to improve the adversar...
In adversarial classification tasks like spam filtering, intrusion detection in computer ...
Machine learning algorithms are designed to learn from data and to use data to perform predictions a...
Deep learning plays an important role in various disciplines, such as auto-driving, information tech...