In this paper, we study the problem of differentially private risk minimization where the goal is to provide differentially private algorithms that have small excess risk. In particular we address the following open problem: Is it possible to design computationally efficient differentially private risk minimizers with excess risk bounds that do not explicitly depend on dimensionality (p) and do not require structural assumptions like restricted strong convexity? In this paper, we answer the question in the affirmative for a variant of the well-known output and objective perturbation algorithms (Chaudhuri et al., 2011). In particular, we show that under certain assumptions, variants of both output and objective perturbation algorithms...
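The output perturbation approach referenced above (Chaudhuri et al., 2011) first solves a regularized empirical risk minimization problem and then releases a noise-perturbed copy of the minimizer. The sketch below is a minimal illustration under stated assumptions, not the paper's exact mechanism: it assumes an L-Lipschitz logistic loss, labels in {-1, +1}, strong convexity coming from the L2 regularizer, and it calibrates Gaussian noise for (eps, delta)-DP rather than the pure eps-DP noise distribution analyzed in the original work; all names and constants here are illustrative.

```python
import numpy as np

def output_perturbation(X, y, lam=0.1, L=1.0, eps=1.0, delta=1e-5, steps=500, lr=0.1):
    """Hypothetical sketch of output perturbation for L2-regularized logistic regression.

    Assumption: with a lam-strongly-convex objective and an L-Lipschitz per-example
    loss, the L2 sensitivity of the exact minimizer is at most 2*L/(n*lam); the
    Gaussian mechanism then adds noise scaled to that sensitivity.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        # Gradient of mean logistic loss plus L2 regularization (labels in {-1, +1}).
        z = y * (X @ w)
        grad = -(X * (y / (1.0 + np.exp(z)))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    sensitivity = 2.0 * L / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + np.random.normal(0.0, sigma, size=p)
```

A dimension-independent excess-risk statement of the kind claimed in the abstract would come from the analysis, not from this sketch; the code only shows the train-then-perturb structure of the mechanism.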
Differentially private learning tackles tasks where the data are private and the learning process is...
Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem, where the...
Prior work on differential privacy analysis of randomized SGD algorithms relies on composition theor...
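Composition-based analyses of this kind treat each noisy gradient step as one invocation of the Gaussian mechanism and add up the per-step privacy costs. As a hedged illustration only, the following sketch shows one such step in the spirit of DP-SGD (per-example clipping plus Gaussian noise); the function name, loss, and constants are assumptions for this example, and the overall (eps, delta) guarantee would be obtained separately via a composition theorem such as the moments accountant.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.05, clip=1.0, sigma=1.0):
    """Hypothetical sketch of one noisy SGD step on logistic loss (labels in {-1, +1}).

    Each per-example gradient is clipped to L2 norm `clip`, the clipped gradients
    are averaged, and Gaussian noise with std sigma*clip/|batch| is added.
    """
    b, p = X_batch.shape
    z = y_batch * (X_batch @ w)
    per_example = -(X_batch * (y_batch / (1.0 + np.exp(z)))[:, None])  # shape (b, p)
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noisy_mean = clipped.mean(axis=0) + np.random.normal(0.0, sigma * clip / b, size=p)
    return w - lr * noisy_mean
```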
Differential privacy is concerned with the prediction quality while measuring the privacy impact on...
In this paper, we initiate a systematic investigation of differentially private algorithms for conve...
Convex risk minimization is a commonly used setting in learning theory. In t...
In this paper, we study the Empirical Risk Minimization (ERM) problem in the non-interactive Local ...
In this paper, we study the Differentially Private Empirical Risk Minimization (DP-ERM) problem with...
Machine learning models can leak information about the data used to train them. To mitigate this iss...
This paper studies the problem of federated learning (FL) in the absence of a trustworthy server/cli...
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in wh...
In this paper, we study differentially private empirical risk minimization (DP...
We study the problem of $(\epsilon,\delta)$-differentially private learning of linear predictors wit...
Motivated by the increasing concern about privacy in today's data-intensive online learning systems...
This work studies the problem of privacy-preserving classification – namely, learning a classifier f...