Decentralized learning offers privacy and communication efficiency when data are naturally distributed among agents communicating over an underlying graph. Motivated by overparameterized learning settings, in which models are trained to zero training loss, we study algorithmic and generalization properties of decentralized learning with gradient descent on separable data. Specifically, for decentralized gradient descent (DGD) and a variety of loss functions that asymptote to zero at infinity (including exponential and logistic losses), we derive novel finite-time generalization bounds. This complements a long line of recent work that studies the generalization performance and the implicit bias of gradient descent over separable data, but ha...
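To make the setting concrete, the following is a minimal sketch of the decentralized gradient descent (DGD) update the abstract refers to, applied to logistic loss on linearly separable data. It is not the paper's implementation: the variable names, the uniform mixing matrix, the step size, and the toy ring of three agents are illustrative assumptions; the only ingredients taken from the abstract are the DGD structure (mix neighbors' iterates, then take a local gradient step) and the logistic loss.

```python
# Minimal DGD sketch (illustrative, not the paper's code), assuming a
# doubly-stochastic mixing matrix over the communication graph.
import numpy as np

def logistic_loss_grad(w, X, y):
    """Gradient of (1/m) * sum_i log(1 + exp(-y_i * x_i^T w))."""
    margins = y * (X @ w)
    coeff = -1.0 / (1.0 + np.exp(margins))      # d/dm log(1 + exp(-m))
    return (X * (coeff * y)[:, None]).mean(axis=0)

def dgd(local_X, local_y, mixing_matrix, lr=0.1, n_iters=1000):
    """Each agent averages its neighbors' models, then takes a local gradient step."""
    n_agents, d = len(local_X), local_X[0].shape[1]
    W = np.zeros((n_agents, d))                  # row i = agent i's current model
    for _ in range(n_iters):
        mixed = mixing_matrix @ W                # consensus / averaging step
        grads = np.stack([logistic_loss_grad(W[i], local_X[i], local_y[i])
                          for i in range(n_agents)])
        W = mixed - lr * grads                   # local gradient step
    return W

# Toy usage: 3 agents, linearly separable local datasets (hypothetical setup).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -1.0])
local_X, local_y = [], []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    local_X.append(X)
    local_y.append(np.sign(X @ true_w))
mix = np.full((3, 3), 1.0 / 3.0)                 # doubly-stochastic mixing weights
models = dgd(local_X, local_y, mix)
```

On separable data the logistic loss has no finite minimizer, so the local models in `models` keep growing in norm as `n_iters` increases; the paper's finite-time bounds concern the generalization behavior of exactly this kind of trajectory.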