We consider the problem of designing scalable, robust protocols for computing statistics about sensitive data. Specifically, we look at how best to design differentially private protocols in a distributed setting, where each user holds a private datum. The literature has mostly considered two models: the “central” model, in which a trusted server collects users’ data in the clear, which allows greater accuracy; and the “local” model, in which users individually randomize their data, and need not trust the server, but accuracy is limited. Attempts to achieve the accuracy of the central model without a trusted server have so far focused on variants of cryptographic multiparty computation (MPC), which limits scalability. In this paper, we i...
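To make the contrast between the two models concrete, the following is a minimal sketch, not taken from the paper: in the central model a trusted server adds Laplace noise to an exact count, while in the local model each user applies binary randomized response before reporting and the server debiases the noisy aggregate. The binary-data assumption, the function names, and the parameter choices are illustrative only.

    import numpy as np

    def central_count(bits, epsilon, rng):
        # Central model: a trusted server sees the raw bits and adds noise once.
        return int(sum(bits)) + rng.laplace(scale=1.0 / epsilon)

    def local_count(bits, epsilon, rng):
        # Local model: each user randomizes independently; the server never sees raw data.
        p = np.exp(epsilon) / (np.exp(epsilon) + 1)  # probability of reporting truthfully
        reports = [b if rng.random() < p else 1 - b for b in bits]
        # Debias the aggregate; the per-user noise makes the error grow with sqrt(n).
        return (sum(reports) - len(bits) * (1 - p)) / (2 * p - 1)

    rng = np.random.default_rng(0)
    bits = list(rng.integers(0, 2, size=10_000))
    print(sum(bits), central_count(bits, 1.0, rng), local_count(bits, 1.0, rng))

The point of the sketch is the accuracy gap the abstract alludes to: the central estimate has error on the order of 1/epsilon, while the local estimate's error grows with the square root of the number of users, which is what motivates intermediate models that avoid a trusted server.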
Data is considered the "new oil" in the information society and digital economy. While many commerci...
We consider a fully decentralized scenario in which no central trusted entity exists and all clients...
Learning from data owned by several parties, as in federated learning, raises ...
Differential privacy is often studied in one of two models. In the central model, a single analyzer ...
This work studies differential privacy in the context of the recently proposed shuffle model. Unlike...
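Several of the abstracts collected here refer to this shuffle model, so a brief illustrative sketch may be useful: each user runs a local randomizer, an intermediary applies a uniformly random permutation to the resulting messages, and the analyzer sees only the anonymized, shuffled reports. The choice of binary randomized response and all names below are assumptions for illustration, not the construction of any particular paper.

    import random

    def local_randomizer(bit, p=0.75):
        # Each user keeps their bit with probability p and flips it otherwise.
        return bit if random.random() < p else 1 - bit

    def shuffler(messages):
        # The shuffler forwards a uniformly random permutation of the messages,
        # severing the link between users and their reports.
        shuffled = list(messages)
        random.shuffle(shuffled)
        return shuffled

    def analyzer(shuffled_reports, n, p=0.75):
        # The analyzer sees only the shuffled multiset and debiases the sum.
        return (sum(shuffled_reports) - n * (1 - p)) / (2 * p - 1)

    users = [random.randint(0, 1) for _ in range(1000)]
    reports = shuffler(local_randomizer(b) for b in users)
    print(sum(users), round(analyzer(reports, len(users))))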
A fundamental problem in large distributed systems is how to enable parties ...
How to achieve distributed differential privacy (DP) without a trusted central party is of great int...
We study the setup where each of n users holds an element from a discrete set, and the goal is to co...
We study a protocol for distributed computation called shuffled check-in, which achieves strong priv...
In this paper, we introduce the imperfect shuffle differential privacy model, where messages sent fr...
Distributed protocols allow a cryptographic scheme to distribute its operation among a group of part...
Statistical disclosure control (SDC) methods aim to protect privacy of the confidential information ...
It has recently been shown that shuffling can amplify the central differential privacy guarantees of data...
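For context on the amplification phenomenon this abstract refers to, a commonly cited asymptotic form of the guarantee (stated here from the amplification-by-shuffling literature with constants omitted, not taken from this particular paper) is that shuffling the reports of n users, each running an epsilon_0-locally differentially private randomizer with epsilon_0 at most 1, yields approximately

    \varepsilon \;=\; O\!\left(\varepsilon_0 \sqrt{\frac{\log(1/\delta)}{n}}\right)

as an (epsilon, delta) differential privacy guarantee against the analyzer, so the central-model guarantee improves as the number of shuffled users grows.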
Key-value data is a naturally occurring data type that has not been thoroughly investigated in the l...
Data is considered the "new oil" in the information society and digital economy. While many commerci...
We consider a fully decentralized scenario in which no central trusted entity exists and all clients...
International audienceLearning from data owned by several parties, as in federated learning, raises ...
Differential privacy is often studied in one of two models. In the central model, a single analyzer ...
This work studies differential privacy in the context of the recently proposed shuffle model. Unlike...
© 2017 Kim Sasha RamchenA fundamental problem in large distributed systems is how to enable parties ...
How to achieve distributed differential privacy (DP) without a trusted central party is of great int...
We study the setup where each of n users holds an element from a discrete set, and the goal is to co...
We study a protocol for distributed computation called shuffled check-in, which achieves strong priv...
In this paper, we introduce the imperfect shuffle differential privacy model, where messages sent fr...
Distributed protocols allow a cryptographic scheme to distribute its operation among a group of part...
Statistical disclosure control (SDC) methods aim to protect privacy of the confidential information ...
Data is considered the “new oil” in the information society and digital economy. While many commerci...
Recently, it is shown that shuffling can amplify the central differential privacy guarantees of data...
Key-value data is a naturally occurring data type that has not been thoroughly investigated in the l...
Data is considered the "new oil" in the information society and digital economy. While many commerci...
We consider a fully decentralized scenario in which no central trusted entity exists and all clients...
International audienceLearning from data owned by several parties, as in federated learning, raises ...