Neural network pruning has been an essential technique for reducing the computation and memory requirements of deep neural networks on resource-constrained devices. Most existing research focuses primarily on balancing the sparsity and accuracy of a pruned neural network by strategically removing insignificant parameters and retraining the pruned model. Such reuse of training samples poses serious privacy risks due to increased memorization, which, however, has not yet been investigated. In this paper, we conduct the first analysis of privacy risks in neural network pruning. Specifically, we investigate the impact of neural network pruning on training data privacy, i.e., membership inference attacks. We first explore the im...
A membership inference attack (MIA) poses privacy risks for the training data of a machine learning ...
Memorization of training data by deep neural networks enables an adversary to ...
Deep ensemble learning has been shown to improve accuracy by training multiple neural networks and a...
We study the privacy implications of training recurrent neural networks (RNNs) with sensitive traini...
Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the eff...
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risk...
Attacks that aim to identify the training data of neural networks represent a severe threat to the p...
Neural networks have become popular tools for many inference tasks nowadays. However, these networks...
Adversarial pruning compresses models while preserving robustness. Current methods require access to...
From fraud detection to speech recognition, including price prediction, Machine Learning (ML) applica...
This position paper deals with privacy for deep neural networks, more precisel...
Privacy preservation is a key problem for machine learning algorithms. Spiking neural network (SNN)...
Deep learning is increasingly used in many applications. Due to its outstanding performan...
We study the privacy risks that are associated with training a neural network's weights with self-su...