In this paper, we initiate a systematic investigation of differentially private algorithms for convex empirical risk minimization. Various instantiations of this problem have been studied before. We provide new algorithms and matching lower bounds for private ERM, assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are also known to be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε, 0)-...
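The setting above (per-example Lipschitz losses, bounded domain, (ε, δ)-privacy) is commonly addressed with noisy projected gradient descent. The following is a minimal illustrative sketch, not the paper's own algorithm: the noise scale uses a naive Gaussian-mechanism calibration over T steps, and `grad_fn`, `L`, and `R` are assumed placeholders for the empirical gradient oracle, the Lipschitz constant, and the domain radius.

```python
import numpy as np

def noisy_gradient_descent(grad_fn, n, d, L, R, eps, delta, T=100, seed=0):
    """Sketch of (eps, delta)-DP ERM via noisy projected gradient descent.

    grad_fn(w) returns the average gradient of the empirical loss over n
    examples; each per-example loss is assumed L-Lipschitz, and the
    feasible domain is the Euclidean ball of radius R.
    """
    rng = np.random.default_rng(seed)
    # Each step's average gradient has L2 sensitivity 2L/n; calibrate
    # Gaussian noise naively across T steps (no advanced composition).
    sigma = (2 * L / n) * np.sqrt(2 * T * np.log(1.25 / delta)) / eps
    w = np.zeros(d)
    eta = R / (L * np.sqrt(T))  # standard step size for Lipschitz objectives
    for _ in range(T):
        g = grad_fn(w) + rng.normal(0.0, sigma, size=d)
        w = w - eta * g
        norm = np.linalg.norm(w)
        if norm > R:  # project back onto the ball of radius R
            w *= R / norm
    return w
```

For instance, with `grad_fn = lambda w: w - np.ones(d)` (average gradient of a simple quadratic loss), the iterates stay inside the radius-R ball while converging near the minimizer, up to the injected noise.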
We study stochastic convex optimization with heavy-tailed data under the constraint of differential ...
This paper studies the problem of federated learning (FL) in the absence of a trustworthy server/cli...
Machine learning models can leak information about the data used to train them. To mitigate this iss...
In this paper, we study the Differentially Private Empirical Risk Minimization (DP-ERM) problem with...
In this paper, we study the problem of differentially private risk minimization where the goal is t...
In this paper, we study the Empirical Risk Minimization (ERM) problem in the non-interactive Local ...
A wide variety of fundamental data analyses in machine learning, such as linear and logistic regress...
Differential privacy is concerned with the prediction quality while measuring the privacy impact on...
In this paper, we study private optimization problems for non-smooth convex functions $F(x)=\mathbb{...
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in wh...
Differential privacy is now the de facto industry standard for ensuring privacy while publicly relea...
Convex risk minimization is a commonly used setting in learning theory. In t...
While many solutions for privacy-preserving convex empirical risk minimization (ERM) have been devel...
Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem, where the...
Producing statistics that respect the privacy of the samples while still maintaining their accuracy ...