As in-the-wild data are increasingly involved in the training stage, machine learning applications become more susceptible to data poisoning attacks. Such attacks typically lead to test-time accuracy degradation or controlled misprediction. In this paper, we investigate a third type of exploitation of data poisoning: increasing the risks of privacy leakage of benign training samples. To this end, we demonstrate a set of data poisoning attacks that amplify the membership exposure of the targeted class. We first propose a generic dirty-label attack for supervised classification algorithms. We then propose an optimization-based clean-label attack in the transfer learning scenario, whereby the poisoning samples are correctly labeled and look "...
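The sketch below illustrates one plausible reading of the generic dirty-label step described above: samples resembling the targeted class are injected into the training set with deliberately wrong labels. The function names, shapes, and the label-flipping strategy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def craft_dirty_label_poison(x_target_like, num_classes, target_class, rng=None):
    """Assign deliberately wrong labels to samples resembling the target class.

    x_target_like: feature vectors drawn from (or close to) the target-class
        distribution; these become the poisoning inputs.
    Returns the poisoning inputs paired with their flipped (non-target) labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Choose wrong labels uniformly from all classes except the target class.
    wrong_classes = [c for c in range(num_classes) if c != target_class]
    y_poison = rng.choice(wrong_classes, size=len(x_target_like))
    return x_target_like, y_poison

# Usage: mix the poison into an otherwise benign training set.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_clean = rng.normal(size=(1000, 32))            # benign training inputs
    y_clean = rng.integers(0, 10, size=1000)         # benign labels (10 classes)
    x_near_target = rng.normal(loc=0.5, size=(50, 32))  # stand-in for target-like samples
    x_p, y_p = craft_dirty_label_poison(x_near_target, num_classes=10,
                                        target_class=3, rng=rng)
    x_train = np.concatenate([x_clean, x_p])
    y_train = np.concatenate([y_clean, y_p])
    print(x_train.shape, y_train.shape)
```

Under this reading, the mislabeled target-like points complicate the decision boundary around genuine target-class members, which is the kind of effect that can make those members easier to expose via membership inference.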