Privacy and Byzantine resilience (BR) are two crucial requirements of modern-day distributed machine learning. The two concepts have been extensively studied individually but the question of how to combine them effectively remains unanswered. This paper contributes to addressing this question by studying the extent to which the distributed SGD algorithm, in the standard parameter-server architecture, can learn an accurate model despite (a) a fraction of the workers being malicious (Byzantine), and (b) the other fraction, whilst being honest, providing noisy information to the server to ensure differential privacy (DP). We first observe that the integration of standard practices in DP and BR is not straightforward. In fact, we show that many...
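The setting described above, honest workers adding noise for differential privacy while a robust server-side rule defends against Byzantine workers, can be sketched in a few lines. This is an illustrative toy, not the paper's exact algorithm: the Gaussian noise scale `sigma`, the coordinate-wise median aggregator, and the worker counts are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def honest_gradient(true_grad, sigma):
    """Honest worker: true gradient plus Gaussian noise for DP (illustrative)."""
    return true_grad + rng.normal(0.0, sigma, size=true_grad.shape)

def byzantine_gradient(dim, scale=100.0):
    """Byzantine worker: an arbitrary vector; here, a large adversarial one."""
    return scale * np.ones(dim)

def robust_aggregate(grads):
    """Coordinate-wise median: one standard Byzantine-resilient rule,
    tolerant to a minority of arbitrary inputs."""
    return np.median(np.stack(grads), axis=0)

dim, sigma = 5, 0.1
true_grad = np.ones(dim)                                       # ground truth
grads = [honest_gradient(true_grad, sigma) for _ in range(7)]  # 7 honest
grads += [byzantine_gradient(dim) for _ in range(3)]           # 3 Byzantine

agg = robust_aggregate(grads)
# The median stays near the true gradient despite the attackers,
# whereas a plain mean is dragged toward the Byzantine vectors.
mean = np.mean(np.stack(grads), axis=0)
```

The tension the abstract points to is visible even here: the DP noise injected by honest workers widens the spread of honest gradients, which is exactly what robust aggregators rely on to separate honest inputs from Byzantine ones.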
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threa...
Learning from data owned by several parties, as in federated learning, raises ...
This paper proposes a locally differentially private federated learning algorithm for strongly conve...
This paper addresses the problem of combining Byzantine resilience with privacy in machine learning ...
This paper aims at jointly addressing two seemingly conflicting issues in federated learning: different...
We report on \emph{Krum}, the first \emph{provably} Byzantine-tolerant aggregation rule for distribu...
Byzantine resilience emerged as a prominent topic within the distributed machine learning community....
Asynchronous distributed machine learning solutions have proven very effective so far, but always as...
While machine learning is going through an era of celebrated success, concerns have been raised abou...
We present AGGREGATHOR, a framework that implements state-of-the-art robust (Byzantine-resilient) di...
This paper considers the Byzantine fault-tolerance problem in distributed stochastic gradient descen...
Privacy is a key concern in many distributed systems that are rich in personal data such as networks...
In recent years, there has been increasing interest in distributed machine learning. On one hand, distr...
In modern-day machine learning applications such as self-driving cars, recommender systems, robotics...
Many areas of deep learning benefit from using increasingly larger neural networks trained on public...