Training large neural networks with meaningful, usable differential privacy guarantees is a demanding challenge. In this paper, we tackle this problem by revisiting the two key operations in Differentially Private Stochastic Gradient Descent (DP-SGD): 1) iterative perturbation and 2) gradient clipping. We propose a generic optimization framework, called {\em ModelMix}, which performs random aggregation of intermediate model states. It strengthens the composite privacy analysis by exploiting the entropy of the training trajectory, and improves the $(\epsilon, \delta)$ DP security parameters by an order of magnitude. We provide rigorous analyses of both the utility guarantees and the privacy amplification of ModelMix. In particular, we pres...
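For reference, here is a minimal sketch of the two standard DP-SGD operations this abstract revisits, per-example gradient clipping and Gaussian perturbation; it is not the ModelMix aggregation itself, and the function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are illustrative assumptions rather than anything from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD update (illustrative sketch, names are assumptions)."""
    rng = rng if rng is not None else np.random.default_rng()
    # Operation 2: per-example clipping, rescale each example's gradient
    # so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Operation 1: iterative perturbation with Gaussian noise whose scale
    # is calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    n = per_example_grads.shape[0]
    return params - lr * (clipped.sum(axis=0) + noise) / n

# Toy usage: 8 examples, a 4-dimensional parameter vector.
rng = np.random.default_rng(0)
params = np.zeros(4)
grads = rng.normal(size=(8, 4))
params = dp_sgd_step(params, grads, rng=rng)
```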
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in...
Machine learning models can leak information about the data used to train them...
Using machine learning to improve health care has gained popularity. However, most research in machi...
Nowadays, owners and developers of deep learning models must consider stringent privacy-preservation...
While modern machine learning models rely on increasingly large training datasets, data is often lim...
Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for recent ad...
Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in ...
Leveraging transfer learning has recently been shown to be an effective strategy for training large ...
Differentially private stochastic gradient descent (DP-SGD) has been widely adopted in deep learning...
Existing approaches for training neural networks with user-level differential privacy (e.g., DP Fede...
Per-example gradient clipping is a key algorithmic step that enables practical differentially private ...
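To make concrete why the clipping must happen per example, the following hedged sketch contrasts it with clipping the averaged batch gradient; all names are illustrative, and practical implementations obtain per-example gradients via vectorized autodiff (e.g., Opacus-style per-sample gradients) rather than this toy NumPy setup.

```python
import numpy as np

def clip_batch_gradient(grads, clip_norm=1.0):
    """Clip the *averaged* gradient (shown only for contrast): one outlier
    example can still dominate the average, so no individual example's
    influence is bounded and the DP sensitivity analysis fails."""
    g = grads.mean(axis=0)
    return g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))

def clip_per_example(grads, clip_norm=1.0):
    """Clip each row (one example's gradient) to L2 norm <= clip_norm
    before averaging, bounding every example's contribution."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return clipped.mean(axis=0)

grads = np.array([[100.0, 0.0],   # one outlier example
                  [0.0,   0.5],
                  [0.0,   0.5]])
print(clip_batch_gradient(grads))  # ~[1.00, 0.01]: outlier dominates
print(clip_per_example(grads))     # [0.333, 0.333]: influence bounded
```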
Prior work on differential privacy analysis of randomized SGD algorithms relies on composition theor...
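For context, the composition bounds such analyses build on can be stated as follows; these are standard results, not the specific accounting of the work cited above. Adaptively running $k$ mechanisms that are each $(\epsilon, \delta)$-DP yields, by basic composition,
\[
(k\epsilon,\; k\delta)\text{-DP},
\]
while advanced composition gives, for any $\delta' > 0$, the tighter guarantee
\[
\Bigl(\sqrt{2k \ln(1/\delta')}\,\epsilon + k\epsilon\,(e^{\epsilon}-1),\;\; k\delta + \delta'\Bigr)\text{-DP}.
\]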
Because learning sometimes involves sensitive data, machine learning algorithms have been extended t...
State-of-the-art approaches for training Differentially Private (DP) Deep Neural Networks (DNNs) face...
Training even moderately sized generative models with differentially private stochastic gradient des...