Decentralized learning over distributed datasets must contend with data distributions that differ significantly across agents. Most current state-of-the-art decentralized algorithms, however, assume the data to be Independent and Identically Distributed (IID). This paper focuses on improving decentralized learning over non-IID data. We propose \textit{Neighborhood Gradient Clustering (NGC)}, a novel decentralized learning algorithm that modifies the local gradients of each agent using self- and cross-gradient information. Cross-gradients for a pair of neighboring agents are the gradients obtained by evaluating one agent's model parameters on the other agent's dataset. In particular, the proposed method replaces the local gradients of the m...
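Since the abstract above is truncated before NGC's exact update rule, the following is only a minimal NumPy sketch of the self-/cross-gradient mixing idea it describes; the toy loss, the mixing weight \texttt{alpha}, and all function names are illustrative assumptions, not the paper's method.

\begin{verbatim}
# Minimal sketch (assumed details, see note above): each agent mixes its
# self-gradient with cross-gradients computed on neighbors' datasets.
import numpy as np

def gradient(params, data):
    # Gradient of a toy least-squares loss 0.5*||X @ params - y||^2
    # with respect to `params`; stands in for any local loss gradient.
    X, y = data
    return X.T @ (X @ params - y)

def ngc_style_update(params_i, data_i, neighbor_data, lr=0.01, alpha=0.5):
    # Self-gradient: agent i's parameters on agent i's own data.
    self_grad = gradient(params_i, data_i)
    # Cross-gradients: agent i's parameters evaluated on each
    # neighbor's dataset (the definition quoted in the abstract).
    cross_grads = [gradient(params_i, d) for d in neighbor_data]
    # Hypothetical mixing rule: a convex combination with weight `alpha`;
    # the truncated abstract does not specify NGC's actual rule.
    mixed = alpha * self_grad + (1 - alpha) * np.mean(cross_grads, axis=0)
    return params_i - lr * mixed

# Toy usage: two agents with non-IID linear-regression data.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
X1 = rng.normal(size=(8, 2)); agent1 = (X1, X1 @ w_true)
X2 = rng.normal(loc=3.0, size=(8, 2)); agent2 = (X2, X2 @ w_true)
w = np.zeros(2)
for _ in range(200):
    w = ngc_style_update(w, agent1, [agent2])
print(w)  # converges toward w_true despite the shifted neighbor data
\end{verbatim}

Mixing in cross-gradients this way pulls each agent's update toward directions that also reduce the loss on neighboring, differently distributed data, which is the intuition behind using cross-gradient information under non-IID splits.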
Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized ...
Distributed optimization has a rich history. It has demonstrated its effectiveness in many machine l...
The convergence speed of machine learning models trained with Federated Learning is significantly af...
Decentralized learning algorithms empower interconnected devices to share data and computational res...
As an emerging paradigm considering data privacy and transmission efficiency, decentralized learning...
One of the key challenges in decentralized and federated learning is to design algorithms that effic...
Decentralized learning offers privacy and communication efficiency when data are naturally distribut...
We study the consensus decentralized optimization problem where the objective function is the averag...
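For reference, the consensus decentralized optimization problem this abstract opens with is conventionally written as
\[
\min_{x \in \mathbb{R}^d} \, f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\]
where $f_i$ is the local objective held by agent $i$ and agents exchange information only with their neighbors in the communication graph; this is the textbook formulation, not notation recovered from the truncated text.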
In recent centralized nonconvex distributed learning and federated learning, local methods are one o...
Federated Learning is a machine learning paradigm where we aim to train machine learning models in a...
Decentralized distributed learning is the key to enabling large-scale machine learning (training) on...
This paper proposes a Decentralized Stochastic Gradient Descent (DSGD) algorithm to solve distribute...
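Since this abstract is cut off before the algorithmic details, the following is only the generic DSGD update for context, in one common form: each agent gossip-averages its iterate with its neighbors through a doubly stochastic mixing matrix $W$ and takes a local stochastic gradient step,
\[
x_i^{(t+1)} = \sum_{j=1}^{n} W_{ij}\, x_j^{(t)} - \eta\, g_i^{(t)},
\]
where $g_i^{(t)}$ is agent $i$'s stochastic gradient at $x_i^{(t)}$ and $\eta$ is the step size. The specific variant proposed in the paper is not recoverable from the fragment.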
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn ca...
In this paper we consider online distributed learning problems. Online distributed learning refers t...