Federated learning enables a large number of edge computing devices to jointly learn a model without sharing data. As a leading algorithm in this setting, Federated Averaging (FedAvg), which runs Stochastic Gradient Descent (SGD) in parallel on local devices and averages the resulting sequences only once in a while, has been widely used due to its simplicity and low communication cost. However, despite recent research efforts, it lacks theoretical analysis under assumptions beyond smoothness. In this paper, we analyze the convergence of FedAvg. Different from existing work, we relax the assumption of strong smoothness. More specifically, we assume semi-smoothness and semi-Lipschitz properties for the loss function, which have an additional fir...
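The FedAvg procedure described above (parallel local SGD with periodic server-side averaging) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the least-squares loss, learning rate, and device data here are hypothetical stand-ins, whereas the paper's analysis only assumes semi-smoothness and semi-Lipschitz properties of the loss.

```python
import numpy as np

def local_sgd(w, data, lr=0.1, steps=5):
    """Run a few gradient steps on one device's local data.

    Illustrative least-squares loss 0.5*||Xw - y||^2 / n; the paper's
    loss is only assumed semi-smooth / semi-Lipschitz, not quadratic.
    """
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(devices, w0, rounds=20, local_steps=5, lr=0.1):
    """FedAvg: each round, every device runs local SGD in parallel;
    the server then averages the local iterates into a global model."""
    w = w0
    for _ in range(rounds):
        local_models = [local_sgd(w.copy(), d, lr, local_steps) for d in devices]
        w = np.mean(local_models, axis=0)  # communication happens only here
    return w

# Toy run: two devices holding noisy linear-regression data.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
devices = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    devices.append((X, y))

w = fedavg(devices, np.zeros(2))
```

Note that communication cost is governed by `rounds`, not by the total number of gradient steps: increasing `local_steps` adds local computation without extra communication, which is the trade-off FedAvg exploits.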
Federated learning (FL) has become the de facto framework for collaborative learning among edge devices ...
Federated learning has shown its advances over the last few years but is facing many challenges, suc...
Federated Learning has been recently proposed for distributed model training at the edge. The princi...
Federated learning (FL) is a fast-developing technique that allows multiple workers to train a globa...
Federated learning (FL) learns a model jointly from a set of participating devices without sharing e...
Existing theory predicts that data heterogeneity will degrade the performance of the Federated Avera...
Federated learning (FL) aims to minimize the communication complexity of training a model over heter...
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to it...
The FedProx algorithm is a simple yet powerful distributed proximal point optimization method widely...
Federated learning (FL) faces challenges of intermittent client availability and computation/communi...
Federated optimization (FedOpt), which targets collaboratively training a learning model across a...
We present a federated learning framework that is designed to robustly deliver good predictive perfo...
In Federated Learning (FL), a number of clients or devices collaborate to train a model without shar...
Over-the-air computation is a communication-efficient solution for federated learning (FL). In such ...