In Cross-device Federated Learning, communication efficiency is of paramount importance. Sparse Ternary Compression (STC) is one of the most effective techniques for considerably reducing the per-round communication cost of Federated Learning (FL) without significantly degrading the accuracy of the global model: it applies ternary quantization in series with top-k sparsification. In this paper, we propose an original variant of STC that is specifically designed and implemented for convolutional layers. Our variant is based on the experimental evidence that a pattern exists in the distribution of client updates, namely, the differences between the received global model and the locally trained model. In particular, we have experimentally...
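To make the compression pipeline concrete, below is a minimal NumPy sketch of the standard STC operator the abstract builds on: top-k sparsification of a client update followed by ternary (sign times mean-magnitude) quantization of the survivors. The function name `stc_compress` and its parameters are illustrative, not the paper's implementation, and the convolutional-layer-specific variant proposed here is not reproduced.

```python
import numpy as np

def stc_compress(update, k):
    """Sketch of Sparse Ternary Compression (STC).

    Keeps the k largest-magnitude entries of a client update (the
    difference between the local model and the received global model),
    zeroes the rest, and replaces each survivor with the mean magnitude
    of the survivors, signed. The result takes values in {-mu, 0, +mu}.
    """
    flat = update.ravel()
    if k >= flat.size:
        idx = np.arange(flat.size)
    else:
        # Indices of the top-k entries by absolute value.
        idx = np.argpartition(np.abs(flat), -k)[-k:]
    mu = np.mean(np.abs(flat[idx]))       # shared magnitude for all survivors
    out = np.zeros_like(flat)
    out[idx] = mu * np.sign(flat[idx])    # ternary values: -mu, 0, +mu
    return out.reshape(update.shape), mu, idx

# Illustrative usage: compress a synthetic conv-layer update, keeping ~1%.
rng = np.random.default_rng(0)
update = rng.normal(size=(64, 3, 3, 3)).astype(np.float32)
compressed, mu, idx = stc_compress(update, k=update.size // 100)
```

The communication saving comes from the ternary structure: a client only needs to transmit the surviving indices, their signs, and the single scalar mu per tensor, rather than dense floating-point updates.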