In recent years, the topic of explainable machine learning (ML) has been extensively researched. Until now, this research has focused on the use cases of regular ML users, such as debugging an ML model. This paper takes a different posture and shows that adversaries can leverage explainable ML to bypass malware classifiers that use multiple feature types. Previous adversarial attacks against such classifiers only add new features and do not modify existing ones, to avoid harming the modified malware executable's functionality. Current attacks use a single algorithm that both selects which features to modify and modifies them blindly, treating all features the same. In this paper, we present a different approach. We split the adversarial example generation task into two parts: feature selection, which uses explainability algorithms to rank each feature's importance for the specific sample, and feature modification, performed feature by feature...
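To make the two-stage split concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in rather than the paper's actual implementation: a random forest over toy features replaces a real multi-feature-type malware classifier, occlusion-based importance replaces the paper's explainability algorithms, and the per-feature lambdas replace real functionality-preserving transformations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data with two feature types: columns 0-1 continuous, columns 2-3 binary flags.
X = rng.random((500, 4))
X[:, 2:] = (X[:, 2:] > 0.5).astype(float)
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # 1 = "malicious" (toy label)

clf = RandomForestClassifier(random_state=0).fit(X, y)

def occlusion_importance(model, x, baseline):
    """Stage 1: per-sample feature importance via occlusion (a simple explainability proxy)."""
    p = model.predict_proba(x[None, :])[0, 1]
    drops = []
    for j in range(x.size):
        x_occ = x.copy()
        x_occ[j] = baseline[j]  # replace feature j with a "neutral" value
        drops.append(p - model.predict_proba(x_occ[None, :])[0, 1])
    return np.array(drops)

# Stage 2: feature-type-specific modifiers. Only edits assumed to preserve the
# executable's functionality are allowed; features 1 and 3 are left untouchable.
modifiers = {
    0: lambda v: v * 0.5,  # e.g. rescale a continuous feature
    2: lambda v: 0.0,      # e.g. clear a binary flag
}

x_adv = X[y == 1][0].copy()               # one "malicious" sample to perturb
baseline = X[y == 0].mean(axis=0)         # neutral values taken from the benign class
for j in np.argsort(-occlusion_importance(clf, x_adv, baseline)):
    if clf.predict_proba(x_adv[None, :])[0, 1] < 0.5:
        break                             # classified as benign: stop modifying
    if int(j) in modifiers:
        x_adv[int(j)] = modifiers[int(j)](x_adv[int(j)])  # most important features first

print("malicious score after attack:", clf.predict_proba(x_adv[None, :])[0, 1])
```

The point of the separation is visible in the loop: the explainer only decides the *order* in which features are attacked, while each feature type keeps its own, functionality-aware edit rule instead of being perturbed blindly.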
Machine Learning (ML) models are vulnerable to adversarial samples — human imperceptible changes to ...
Machine learning classification models are vulnerable to adversarial examples -- effective input-spe...
Recent work has shown that deep-learning algorithms for malware detection are also susceptible to a...
Reliable deployment of machine learning models such as neural networks continues to be challenging d...
In recent years, machine learning (ML) models have been extensively used in software analytics, such...
The literature on adversarial attacks in computer vision typically focuses on pixel-level perturbati...
In spite of the successful application in many fields, machine learning models today suffer from not...
Methods for model explainability have become increasingly critical for testing the fairness and soun...
Pattern recognition systems based on machine learning techniques are nowadays widely used in many di...
Signature-based malware detectors have proven to be insufficient as even a small change in malignant...
While machine learning is vulnerable to adversarial examples, it still lacks systematic procedures a...
Modern commercial antivirus systems increasingly rely on machine learning to keep up with the rampan...
Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the g...
Recent work has shown that deep-learning algorithms for malware detection are also susceptible to ad...
Machine learning models have been found to be vulnerable to adversarial attacks that apply small per...