Machine learning algorithms, however effective, are known to be vulnerable in adversarial scenarios where a malicious user may inject manipulated instances. In this work, we focus on evasion attacks, where a model is trained in a safe environment and exposed to attacks at inference time. The attacker aims at finding a perturbation of an instance that changes the model outcome. We propose a model-agnostic strategy that builds a robust ensemble by training its basic models on feature-based partitions of the given dataset. Our algorithm guarantees that the majority of the models in the ensemble cannot be affected by the attacker. We apply the proposed strategy to decision tree ensembles, and we also propose an approximate certification method f...
Adversarial training and its variants have become the standard defense against adversarial attacks -...
Machine learning has proved invaluable for a range of different tasks, yet it also proved vulnerable...
Machine learning has proved invaluable for a range of different tasks, yet it also proved vulnerable...
Machine learning algorithms, however effective, are known to be vulnerable in adversarial scenarios ...
Despite its success and popularity, machine learning is now recognized as vulnerable to evasion atta...
Verifying the robustness of machine learning models against evasion attacks at test time is an impor...
Recently it has been shown that many machine learning models are vulnerable to adversarial examples:...
Machine learning is used for security purposes, to differ between the benign and the malicious. Wher...
© 2019 by the Author(S). Although adversarial examples and model robustness have been extensively st...
Normal decision trees are effective but simple machine learning models that are prone to adversarial...
Adversarial training is a prominent approach to make machine learning (ML) models resilient to adver...
Decision trees are a popular choice of explainable model, but just like neural networks, they suffer...
We study the problem of formally and automatically verifying robustness properties of decision tree ...
Although machine learning has achieved great success in numerous complicated tasks, many machine lea...
We study the problem of formally and automatically verifying robustness properties of decision tree ...