We study the privacy implications of training recurrent neural networks (RNNs) on sensitive training datasets. Considering membership inference attacks (MIAs), which aim to infer whether or not specific data records were used in training a given machine learning model, we provide empirical evidence that a neural network's architecture impacts its vulnerability to MIAs. In particular, we demonstrate that RNNs are subject to higher attack accuracy than their feed-forward neural network (FFNN) counterparts. Additionally, we study the effectiveness of two prominent methods for mitigating MIAs, namely weight regularization and differential privacy. For the former, we empirically demonstrate that RNNs may only benefit from weight regularization...
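As a concrete illustration of the attack the abstract describes, below is a minimal sketch of a confidence-threshold MIA, assuming a trained PyTorch classifier; the model handle, inputs, and the fixed threshold are illustrative assumptions rather than the paper's evaluation setup (a realistic attacker would calibrate the threshold, e.g. with shadow models).

import torch
import torch.nn.functional as F

@torch.no_grad()
def max_confidence(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Top softmax probability per input: overfitted models tend to be
    # more confident on records they were trained on, which is the
    # signal membership inference exploits.
    logits = model(x)
    return F.softmax(logits, dim=-1).max(dim=-1).values

@torch.no_grad()
def infer_membership(model: torch.nn.Module, x: torch.Tensor,
                     threshold: float = 0.9) -> torch.Tensor:
    # Predict "member" wherever confidence exceeds the (assumed) threshold.
    return max_confidence(model, x) >= threshold

For the differential-privacy mitigation, a minimal sketch of DP-SGD training follows, here using the Opacus library as one possible implementation; the toy model, data, and hyperparameters are assumptions for illustration only.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy feed-forward model and synthetic data, stand-ins for a real setup.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = TensorDataset(torch.randn(256, 20), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # scale of Gaussian noise added to clipped gradients
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
for x, y in loader:  # one epoch of noisy, clipped SGD
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print("epsilon at delta=1e-5:", privacy_engine.get_epsilon(delta=1e-5))

Both sketches are meant only to make the threat and the defense concrete; neither reproduces the paper's RNN-versus-FFNN experiments.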
Privacy preservation is a key problem for machine learning algorithms. Spiking neural network (SNN)...
Neural networks have become tremendously successful in recent times due to larger computing power a...
Data holders are increasingly seeking to protect their users’ privacy, whilst still maximizing their...
Neural network pruning has been an essential technique to reduce the computation and memory requirem...
We study the privacy risks associated with training a neural network's weights with self-su...
Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the eff...
A membership inference attack (MIA) poses privacy risks for the training data of a machine learning ...
Machine learning models are increasingly utilized across impactful domains to predict individual out...
Attacks that aim to identify the training data of neural networks represent a severe threat to the p...
From fraud detection to speech recognition and price prediction, Machine Learning (ML) appli...
Recent years have witnessed a rapid development in machine learning systems and a widespread increas...
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risk...
This position paper deals with privacy for deep neural networks, more precisel...
Federated learning (FL) was originally regarded as a framework for collaborative learning among clie...
Recent Deep Learning (DL) advancements in solving complex real-world tasks have led to its widesprea...