In spite of successful applications in many fields, machine learning models today suffer from notorious problems such as vulnerability to adversarial examples (a minimal sketch of crafting one appears after these abstracts). Rather than falling into the cat-and-mouse game between adversarial attack and defense, this paper provides an alternative perspective on adversarial examples and explores whether we can exploit them in benign applications. We first attribute adversarial examples to the human-model disparity in employing non-semantic features. While largely ignored in classical machine learning mechanisms, non-semantic features exhibit three interesting characteristics: they are (1) exclusive to the model, (2) critical to inference, and (3) usable as features. Inspired by this, we present the brave new idea of...
In this paper, we study the existence of adversarial examples and adversarial training from the standpo...
Deep learning plays an important role in various disciplines, such as autonomous driving, information tech...
While machine learning is vulnerable to adversarial examples, it still lacks systematic procedures a...
Reliable deployment of machine learning models such as neural networks continues to be challenging d...
In recent years, the topic of explainable machine learning (ML) has been extensively researched. Up ...
The literature on adversarial attacks in computer vision typically focuses on pixel-level perturbati...
Pattern recognition systems based on machine learning techniques are nowadays widely used in many di...
Adversarial machine learning manipulates datasets to mislead the decisions of machine learning algorithms. W...
Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, rec...
Machine learning has become an important component for many systems and applications including compu...
Adversarial examples are inputs to a machine learning system that result in an incorrect output from...
Over the last decade, adversarial attack algorithms have revealed instabilities in deep learning too...
While a substantial body of prior work has explored adversarial example generation for natural langu...
Adversarial examples h...
Deep learning technology achieves state-of-the-art results in many computer vision tasks. However,...
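The abstracts above repeatedly invoke the same underlying operation: crafting a small, non-semantic input perturbation that flips a model's prediction. As a point of reference only, here is a minimal, hypothetical PyTorch sketch of the fast gradient sign method (FGSM; Goodfellow et al., 2015); the `model`, inputs, and `epsilon` value are placeholder assumptions and are not taken from any paper listed above.

```python
# A minimal sketch of generating an adversarial example with FGSM.
# The classifier and data are hypothetical placeholders; this is an
# illustration of the general technique, not any listed paper's method.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """Return x perturbed in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the input gradient: a perturbation that is
    # imperceptible to humans yet can change the model's output,
    # consistent with the "non-semantic feature" view in the first abstract.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Usage (hypothetical classifier and batch):
# model.eval()
# x_adv = fgsm_example(model, images, labels)
# preds = model(x_adv).argmax(dim=1)  # frequently differs from labels
```

The single-step sign update is the simplest instance of the attack family these abstracts discuss; iterative variants such as PGD apply the same step repeatedly with projection back into an epsilon-ball.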