We introduce a framework - Artemis - to tackle the problem of learning in a distributed or federated setting with communication constraints. Several randomly sampled workers perform the optimization process, using a central server to aggregate their computations. To alleviate the communication cost, Artemis compresses the information sent in both directions (from the workers to the server and conversely) and combines this with a memory mechanism. It improves on existing algorithms that only consider unidirectional compression (to the server) or rely on very strong assumptions about the compression operator. We provide fast rates of convergence (linear up to a threshold) under weak assumptions on the stochastic gradients (noise's variance bounded ...
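As a rough illustration of the bidirectional compression and memory mechanism described above, here is a minimal Python sketch of one optimization round. The rand-k operator, the memory rate alpha, the step size, and the toy quadratic objectives are illustrative assumptions, not the exact Artemis algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)


def rand_k(v, k):
    """Unbiased rand-k sparsification: keep k random coordinates, rescale by d/k."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out


def bidirectional_round(w, grads, memories, lr=0.05, alpha=0.1, k=2):
    """One round of compression in both directions, with a per-worker memory term."""
    g_hats = []
    for i, g in enumerate(grads):
        delta = rand_k(g - memories[i], k)      # uplink: compress gradient minus memory
        g_hats.append(memories[i] + delta)      # server reconstructs a gradient estimate
        memories[i] += alpha * delta            # worker and server update the memory identically
    omega = rand_k(np.mean(g_hats, axis=0), k)  # downlink: compress the aggregated update
    return w - lr * omega                       # every worker applies the same model step


# Toy usage (hypothetical setup): 5 workers with heterogeneous quadratics
# f_i(w) = 0.5 * ||w - c_i||^2, whose average is minimized at the mean of the centers c_i.
d, n = 10, 5
centers = rng.normal(size=(n, d))
w = np.zeros(d)
memories = [np.zeros(d) for _ in range(n)]
for _ in range(2000):
    grads = [w - c for c in centers]  # exact local gradients
    w = bidirectional_round(w, grads, memories)
print(np.linalg.norm(w - centers.mean(axis=0)))  # distance to the optimum, should be small
```

In this toy run the memory absorbs the heterogeneous local gradients near the optimum, so the compressed differences shrink and the iterates approach the minimizer of the averaged objective despite compression on both links.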
We consider distributed optimization over several devices, each sending incremental model updates to...
In the last few years, various communication compression techniques have emerged as an indispensable...
In this paper, we investigate the impact of compression on stochastic gradient algorithms for machin...
We develop a new approach to tackle communication constraints in a distributed learning problem with...
In distributed optimization and machine learning, multiple nodes coordinate to solve large problems....
In the modern paradigm of federated learning, a large number of users are involved in a global learn...
Training a large-scale model over a massive data set is an extremely computation and storage intensi...