We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box settings, when the trainer and the trained model are publicly released. The Defender aims to optimize a dual objective: utility and privacy. Privacy is evaluated via the membership prediction error of a so-called “Leave-Two-Unlabeled” (LTU) Attacker, which has access to all of the Defender and Reserved data except the membership labels of one sample from each, yielding the strongest possible attack scenario. We prove that, under certain conditions, even a “naïve” LTU Attacker can achieve lower bounds on privacy loss with simple attack strategies, leading to co...
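To make the LTU protocol concrete, here is a minimal Python sketch of the evaluation loop it describes, under stated assumptions: `train` and `attacker` are hypothetical callables (not from the paper's released code), and the attacker is scored on how often it identifies which of the two unlabeled samples was a training member.

```python
import random

def ltu_privacy_error(defender_data, reserved_data, train, attacker, trials=100):
    """Estimate the LTU Attacker's membership prediction error.

    defender_data: samples used to train the Defender model.
    reserved_data: held-out samples never used in training.
    train: hypothetical function mapping a dataset to a trained model.
    attacker: hypothetical function (model, labeled_pool, pair) -> 0 or 1,
              guessing which element of `pair` was a training member.
    """
    errors = 0
    for _ in range(trials):
        # Leave one sample from each set unlabeled.
        d = random.choice(defender_data)
        r = random.choice(reserved_data)
        # The attacker knows the membership of every other sample.
        labeled_pool = (
            [(x, "member") for x in defender_data if x is not d]
            + [(x, "non-member") for x in reserved_data if x is not r]
        )
        model = train(defender_data)
        # Present the two unlabeled samples in random order.
        pair = [d, r]
        random.shuffle(pair)
        guess = attacker(model, labeled_pool, pair)
        if pair[guess] is not d:
            errors += 1
    return errors / trials
```

Under this scoring, a privacy-preserving Defender drives the attacker's error toward 0.5 (chance level), while a leaky Defender lets it approach zero.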
Machine learning has become an integral part of modern intelligent systems in all aspects o...
This paper studies defense mechanisms against model inversion (MI) attacks -- a type of privacy atta...
Trustworthy and Socially Responsible Machine Learning (TSRML 2022), co-located with NeurIPS 2022. The r...
Accepted to the Third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22). International...
Machine learning models are commonly trained on sensitive and personal data such as pictures, medica...
Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which ...
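As an illustration of the kind of attack meant here, a common baseline is a confidence-threshold MIA. The sketch below assumes a scikit-learn-style `predict_proba` interface and a hypothetical threshold `tau`; it is a generic baseline, not the specific attack studied in any of the works above.

```python
import numpy as np

def confidence_threshold_mia(model, x, y, tau=0.9):
    """Flag (x, y) as a training member when the model's confidence
    on the true label y exceeds tau. Overfit models tend to be more
    confident on training points, which is the signal MIAs exploit.

    Assumes a scikit-learn-style model exposing predict_proba().
    """
    probs = model.predict_proba(np.asarray([x]))[0]
    return bool(probs[y] > tau)  # True => predicted "member"
```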
It is observed in the literature that data augmentation can significantly mitigate membership infere...
From fraud detection to speech recognition, including price prediction, Machine Learning (ML) applica...
As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ub...
In a membership inference attack, an attacker aims to infer whether a data sample is in a target cla...
Nowadays, Machine Learning models are employed in many domains due to their extremely good perf...
Machine learning (ML) has been widely adopted in various privacy-critical applications, e.g., face r...
The wide adoption and application of Masked language models (MLMs) on sensitive data (from legal to ...
Machine learning (ML) has become a core component of many real-world applications and training data ...
Privacy attacks targeting machine learning models are evolving. One of the primary goals of such att...