This paper studies defense mechanisms against model inversion (MI) attacks -- a type of privacy attack aimed at inferring information about the training data distribution given access to a target machine learning model. Existing defense mechanisms rely on model-specific heuristics or noise injection. While able to mitigate attacks, existing methods significantly hinder model performance. There remains a question of how to design a defense mechanism that is applicable to a variety of models and achieves a better utility-privacy tradeoff. In this paper, we propose the Mutual Information Regularization based Defense (MID) against MI attacks. The key idea is to limit the information about the model input contained in the prediction, t...
Data holders are increasingly seeking to protect their users' privacy, whilst still maximizing their...
Machine learning models are commonly trained on sensitive and personal data such as pictures, medica...
Machine learning models are increasingly utilized across impactful domains to predict individual out...
Privacy attacks targeting machine learning models are evolving. One of the primary goals of such att...
We address the problem of defending predictive models, such as machine learning classifiers (Defende...
As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ub...
Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which ...
From fraud detection to speech recognition, including price prediction, Machine Learning (ML) applica...
The wide adoption of deep neural networks (DNNs) in mission-critical applications has spurred the ne...
Data privacy has emerged as an important issue as data-driven deep learning has been an essential co...
It is observed in the literature that data augmentation can significantly mitigate membership infere...
Machine learning (ML) has become a core component of many real-world applications and training data ...
Nowadays, systems based on machine learning (ML) are widely used in different domains. Given their p...
Distributed deep learning has potential for significant impact in preserving data privacy and improv...
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risk...