Membership inference attacks (MIAs) are a persistent threat to the privacy of training data and arise across a wide range of machine learning models. Existing work provides strong evidence of a connection between the distinguishability of the training and testing loss distributions and a model's vulnerability to MIAs. Motivated by these results, we propose a novel training framework based on a relaxed loss (RelaxLoss) with a more achievable learning target, which narrows the generalization gap and reduces privacy leakage. RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead. Through extensive evaluations on five datasets with diverse modalities (images, medical data, transaction records)...
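The "more achievable learning target" can be read as a floor alpha on the training loss: once the loss reaches alpha, optimization stops pushing it toward zero, which keeps the member and non-member loss distributions closer together. Below is a minimal sketch of that core idea using a flooding-style surrogate objective; the function name `relaxed_loss_step` and the value of `alpha` are illustrative assumptions, and the published RelaxLoss algorithm differs in its details (it alternates gradient ascent with a posterior-flattening step rather than using this exact surrogate).

```python
import torch
import torch.nn.functional as F

def relaxed_loss_step(model, optimizer, x, y, alpha=0.5):
    """One training step with a relaxed (floored) loss target.

    Illustrative sketch only: `alpha` is a hypothetical target level.
    Instead of minimizing the loss toward zero, we minimize its distance
    to `alpha`, so training stops tightening the fit once the (more
    achievable) target is reached.
    """
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    # |loss - alpha| + alpha: identical gradients to `loss` above the
    # floor, reversed gradients below it (ascent back up to the floor).
    relaxed = (loss - alpha).abs() + alpha
    relaxed.backward()
    optimizer.step()
    return loss.item()
```

Because the update merely flips the gradient sign below the floor, it adds essentially no compute, which is consistent with the "negligible overhead" claim above.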
A library for running membership inference attacks (MIA) against machine learning models. Check out ...
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risk...
This paper studies defense mechanisms against model inversion (MI) attacks -- a type of privacy attack...
Large-capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which ...
We address the problem of defending predictive models, such as machine learning classifiers (Defender...
Trustworthy and Socially Responsible Machine Learning (TSRML 2022), co-located with NeurIPS 2022. The r...
Machine learning (ML) has been widely adopted in various privacy-critical applications, e.g., face recognition...
Machine learning models are commonly trained on sensitive and personal data such as pictures, medica...
How much does a machine learning algorithm leak about its training data, and why? Membership inference...
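One concrete way to quantify such leakage is the simple loss-threshold attack (in the style of Yeom et al.): declare an example a member if the model's loss on it falls below a threshold. The sketch below scores that attack's balanced accuracy on two loss populations; the threshold value and the synthetic exponential losses are assumptions for illustration only, not results from any cited work.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Baseline loss-threshold membership inference.

    Predicts 'member' for any example whose loss is below `threshold`
    and returns the attack's balanced accuracy. The threshold choice is
    an assumption (a common heuristic is the mean training loss).
    """
    tpr = np.mean(member_losses < threshold)      # members correctly flagged
    tnr = np.mean(nonmember_losses >= threshold)  # non-members correctly passed
    return 0.5 * (tpr + tnr)

# Hypothetical usage with synthetic losses: members typically sit lower.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)
print(loss_threshold_mia(member_losses, nonmember_losses, threshold=0.5))
```

The wider the gap between the two loss distributions, the higher this score, which is exactly the distinguishability that the RelaxLoss blurb above sets out to reduce.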
It is observed in the literature that data augmentation can significantly mitigate membership inference...
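Intuitively, augmentation prevents the model from fitting any training image pixel-identically, which shrinks the member/non-member loss gap that MIAs exploit. A minimal PyTorch/torchvision pipeline is sketched below; the specific transforms and magnitudes are illustrative choices for 32x32 images, not ones prescribed by the cited work.

```python
import torchvision.transforms as T

# Standard augmentations for 32x32 inputs (e.g., CIFAR-10); the exact
# transforms and normalization constants are illustrative assumptions.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),   # random translations via padded crops
    T.RandomHorizontalFlip(),      # mirror images with probability 0.5
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465),
                (0.2470, 0.2435, 0.2616)),
])

# Each member is seen under a different random view every epoch, so
# per-example training losses stay closer to the test-time distribution.
```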
Machine learning (ML) has become a core component of many real-world applications and training data ...
Machine learning has become an integral part of modern intelligent systems in all aspects of...
A large body of research has shown that machine learning models are vulnerable to membership inferen...
From fraud detection to speech recognition to price prediction, Machine Learning (ML) applications...
We introduce a new class of attacks on machine learning models. We show that an adversary who can poison...