This paper considers the Byzantine fault-tolerance problem in the distributed stochastic gradient descent (D-SGD) method, a popular algorithm for distributed multi-agent machine learning. In this problem, each agent samples data points independently from a certain data-generating distribution. In the fault-free case, the D-SGD method allows all the agents to learn a mathematical model that best fits the data collectively sampled by all agents. We consider the case in which a fraction of the agents may be Byzantine faulty. Such faulty agents may not follow a prescribed algorithm correctly and may render the traditional D-SGD method ineffective by sharing arbitrary incorrect stochastic gradients. We propose a norm-based gradient-filter, named comparative gra...
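For illustration, a minimal Python sketch of a generic norm-based gradient filter: since the abstract is truncated, this assumes the rule discards the f largest-norm gradients before averaging, which is one plausible reading of "norm-based gradient-filter" rather than necessarily the authors' exact method; the function name and the bound f are assumptions.

```python
import numpy as np

def norm_filtered_aggregate(gradients, f):
    """Average agents' gradients after discarding the f largest-norm ones.

    Sketch of a generic norm-based filter: sort the received gradients by
    Euclidean norm and average the n - f smallest. `f` is an assumed upper
    bound on the number of Byzantine agents.
    """
    grads = np.asarray(gradients)                 # shape: (n_agents, dim)
    norms = np.linalg.norm(grads, axis=1)         # one norm per agent
    keep = np.argsort(norms)[: len(grads) - f]    # indices of the n - f smallest norms
    return grads[keep].mean(axis=0)

# Example: 5 agents, one of which sends an outsized (possibly Byzantine) gradient.
honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]),
          np.array([1.1, 0.9]), np.array([1.0, 1.2])]
byzantine = [np.array([1e6, -1e6])]
print(norm_filtered_aggregate(honest + byzantine, f=1))
# close to the honest average; the large-norm gradient is dropped
```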
This paper addresses the problem of combining Byzantine resilience with privacy in machine learning ...
We develop a Distributed Event-Triggered Stochastic GRAdient Descent (DETSGRAD) algorithm for solvin...
While machine learning is going through an era of celebrated success, concerns have been raised abou...
Asynchronous distributed machine learning solutions have proven very effective so far, but always as...
This paper considers the problem of Byzantine fault-tolerance in distributed multi-agent optimizatio...
For many data-intensive real-world applications, such as recognizing objects from images, detecting ...
We report on \emph{Krum}, the first \emph{provably} Byzantine-tolerant aggregation rule for distribu...
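Krum's selection rule is well documented (Blanchard et al., NeurIPS 2017): each submitted gradient is scored by the sum of squared distances to its n - f - 2 closest peers, and the lowest-scoring gradient is returned as the aggregate. A minimal sketch, assuming the n workers' gradients arrive as NumPy arrays and f bounds the number of Byzantine workers:

```python
import numpy as np

def krum(gradients, f):
    """Sketch of the Krum aggregation rule (Blanchard et al., 2017).

    Scores each gradient by the sum of squared distances to its
    n - f - 2 closest peers and selects the gradient with the smallest
    score. Requires n >= 2f + 3 for the rule's guarantees.
    """
    grads = np.asarray(gradients)
    n = len(grads)
    # Pairwise squared Euclidean distances between all received gradients.
    dists = np.sum((grads[:, None, :] - grads[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        others = np.delete(dists[i], i)   # distances to the other n - 1 gradients
        others.sort()
        scores.append(others[: n - f - 2].sum())  # n - f - 2 closest peers
    return grads[int(np.argmin(scores))]
```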
A very common optimization technique in Machine Learning is Stochastic Gradient Descent (SGD). SGD c...
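Because this abstract is cut off, only the textbook SGD update it introduces is sketched below, on a synthetic least-squares problem; all data, the learning rate, and the batch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # synthetic features
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)   # noisy linear targets

w = np.zeros(3)
lr, batch = 0.05, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch)              # sample a mini-batch
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch    # mini-batch gradient of squared loss
    w -= lr * grad                                         # SGD update
print(w)  # approaches w_true
```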
This paper studies the problem of distributed stochastic optimization in an adversarial setting wher...
Whether it occurs in artificial or biological substrates, {\it learning} is a {distributed} phenomen...
We present AGGREGATHOR, a framework that implements state-of-the-art robust (Byzantine-resilient) di...
Byzantine resilience has emerged as a prominent topic within the distributed machine learning community....
The present invention concerns computer-implemented methods for training a machine learning model us...
Byzantine-resilient Stochastic Gradient Descent (SGD) aims at shielding model training from Byzantin...
In this paper, we propose a class of robust stochastic subgradient methods for distributed learning ...