Membership inference attacks (MIAs) against machine learning models pose serious privacy risks to the dataset used for model training. In this paper, we propose a novel and effective neuron-guided defense method named NeuGuard against MIAs. We identify a key weakness in existing defense mechanisms: they cannot simultaneously defend against the two commonly used neural-network-based MIAs, which indicates that these two attacks should be evaluated separately to assure a defense's effectiveness. We propose NeuGuard, a new defense approach that jointly controls the output and inner neurons' activations, with the objective of guiding the model outputs on the training set and the testing set to have c...
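As a minimal sketch of what such a joint output-and-activation control could look like during training, the snippet below augments the standard classification loss with two regularizers. The function name `neuguard_style_loss`, the particular choice of an output-entropy term and an activation-norm penalty, and the weights `lambda_out` and `lambda_act` are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical NeuGuard-style training objective (illustrative sketch):
# classification loss plus (i) an output regularizer that flattens the
# softmax distribution on training samples so member and non-member
# outputs look alike, and (ii) a penalty on inner neuron activations so
# they do not overfit to, and hence leak, individual training samples.
def neuguard_style_loss(logits, labels, hidden_activations,
                        lambda_out=0.1, lambda_act=0.01):
    # Standard cross-entropy classification loss.
    ce = F.cross_entropy(logits, labels)

    # Output regularizer: maximize softmax entropy on training data
    # (minimizing the negative entropy) to reduce output confidence gaps.
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
    out_reg = -entropy

    # Activation regularizer: keep inner neuron activations small so the
    # internal representations of members and non-members stay similar.
    act_reg = hidden_activations.pow(2).mean()

    return ce + lambda_out * out_reg + lambda_act * act_reg
```

In practice, such a loss would replace the plain cross-entropy objective in the target model's training loop, with `hidden_activations` taken from one or more intermediate layers; the assumed weights trade classification accuracy against the gap between training-set and testing-set outputs.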