This paper develops a fully distributed differentially private learning algorithm based on the alternating direction method of multipliers (ADMM) to solve nonsmooth optimization problems. We employ an approximation of the augmented Lagrangian to handle nonsmooth objective functions. Furthermore, we perturb the primal update at each agent with time-varying Gaussian noise of decreasing variance to provide zero-concentrated differential privacy. The developed algorithm offers a competitive privacy-accuracy trade-off and applies to nonsmooth problems that are not necessarily strongly convex. Convergence and privacy-preserving properties are confirmed by both theoretical analysis and simulations.
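To make the perturbation mechanism concrete, the following is a minimal sketch of one agent's noisy primal step: a descent step on an approximated augmented Lagrangian followed by zero-mean Gaussian noise whose variance shrinks over iterations. The specific update form, step size, and geometric noise decay are illustrative assumptions, not the paper's actual derivation or noise schedule.

```python
import numpy as np

def noisy_primal_update(x, subgrad_fn, dual, neighbors_avg, rho, step, k, sigma0, decay):
    """Illustrative primal update for one agent at iteration k.

    Hypothetical form: a (sub)gradient-style step on an approximated
    augmented Lagrangian (local loss term plus dual and consensus terms),
    then additive Gaussian noise with iteration-decaying variance, which is
    the mechanism the abstract describes for zero-concentrated DP.
    """
    # Approximate augmented-Lagrangian direction (assumed structure).
    direction = subgrad_fn(x) + dual + rho * (x - neighbors_avg)
    x_new = x - step * direction

    # Time-varying Gaussian perturbation with decreasing variance.
    sigma_k = sigma0 * decay**k
    return x_new + np.random.normal(0.0, sigma_k, size=x.shape)
```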