Deep ensemble learning has been shown to improve accuracy by training multiple neural networks and averaging their outputs. Ensemble learning has also been suggested to defend against membership inference attacks that undermine privacy. In this paper, we empirically demonstrate a trade-off between these two goals, namely accuracy and privacy (in terms of membership inference attacks), in deep ensembles. Using a wide range of datasets and model architectures, we show that the effectiveness of membership inference attacks increases when ensembling improves accuracy. We analyze the impact of various factors in deep ensembles and demonstrate the root cause of the trade-off. Then, we evaluate common defenses against membership inference attacks ...
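The two mechanisms the abstract contrasts can be sketched in a few lines: a deep ensemble averages the per-model softmax outputs, while a simple confidence-thresholding attack, a common baseline in the membership inference literature, flags an input as a training-set "member" when the model's top confidence exceeds a threshold. This is an illustrative sketch, not the paper's experimental setup; the function names and the threshold value are assumptions for the example.

```python
def ensemble_predict(per_model_probs):
    """Deep-ensemble output: average the class-probability vectors
    produced by each member model on the same input."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    return [sum(p[c] for p in per_model_probs) / n_models
            for c in range(n_classes)]

def confidence_attack(probs, threshold=0.9):
    """Baseline membership inference signal: predict 'member' when the
    model's highest predicted probability exceeds the threshold
    (overconfidence on training points is what the attack exploits)."""
    return max(probs) >= threshold

# Softmax outputs of three ensemble members for one 3-class input.
outputs = [[0.90, 0.06, 0.04],
           [0.80, 0.15, 0.05],
           [0.85, 0.10, 0.05]]
avg = ensemble_predict(outputs)
print(avg, confidence_attack(avg, threshold=0.8))
```

The trade-off the paper studies is visible even in this toy form: if ensembling sharpens confidence on training points more than on test points, the same averaging that improves accuracy also makes the thresholding attack more reliable.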
Machine learning has become an integral part of modern intelligent systems in all aspects o...
While significant research advances have been made in the field of deep reinforcement learning, ther...
Data holders are increasingly seeking to protect their users' privacy, whilst still maximizing their...
Attacks that aim to identify the training data of neural networks represent a severe threat to the p...
Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the eff...
Differential Privacy (DP) is the de facto standard for reasoning about the privacy guarantees of a t...
Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which ...
Machine learning models are commonly trained on sensitive and personal data such as pictures, medica...
Deep Learning (DL) has become increasingly popular in recent years. While DL models can achieve high...
This position paper deals with privacy for deep neural networks, more precisel...
We present two information leakage attacks that outperform previous work on membership inference aga...
It is observed in the literature that data augmentation can significantly mitigate membership infere...
Neural network pruning has been an essential technique to reduce the computation and memory requirem...
Deep Learning based Side-Channel Attacks (DL-SCA) are considered as fundamental threats against secu...
How much does a machine learning algorithm leak about its training data, and why? Membership inferen...