Communication compression techniques are of growing interest for solving the decentralized optimization problem under limited communication, where the global objective is to minimize the average of local cost functions over a multi-agent network using only local computation and peer-to-peer communication. In this paper, we propose a novel compressed gradient tracking algorithm (C-GT) that combines the gradient tracking technique with communication compression. In particular, C-GT is compatible with a general class of compression operators that unifies both unbiased and biased compressors. We show that C-GT inherits the advantages of gradient tracking-based algorithms and achieves a linear convergence rate for strongly convex and smooth objective...
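To make the setting concrete, the problem is min_x f(x) = (1/n) * sum_i f_i(x), where agent i only knows its own cost f_i and can talk to its neighbors. Below is a minimal NumPy sketch of gradient tracking with compressed communication in the spirit of the abstract above. The ring network, quadratic local costs, top-k compressor, and step-size choices are all illustrative assumptions, and the recursion is a simplified stand-in rather than the exact C-GT update from the paper (which likewise compresses differences against locally stored estimates, but under its own parameter conditions).

```python
import numpy as np

# Decentralized problem: n agents minimize f(x) = (1/n) * sum_i f_i(x),
# here with hypothetical quadratic costs f_i(x) = 0.5 * ||A_i x - b_i||^2.
rng = np.random.default_rng(0)
n, d = 5, 10
A = [rng.standard_normal((20, d)) for _ in range(n)]
b = [rng.standard_normal(20) for _ in range(n)]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network (an assumption).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

def top_k(v, k=3):
    """Biased top-k compressor: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# States: x holds local iterates, y tracks the average gradient,
# x_hat / y_hat are locally maintained compressed estimates that all
# agents agree on; only compressed differences are ever transmitted.
x = np.zeros((n, d))
y = np.array([grad(i, x[i]) for i in range(n)])
g_old = y.copy()
x_hat = np.zeros((n, d))
y_hat = np.zeros((n, d))

L = max(np.linalg.norm(Ai, 2) ** 2 for Ai in A)  # smoothness constant
eta, gamma = 0.5 / L, 0.3                        # step / consensus sizes

for _ in range(2000):
    # Exchange compressed residuals; everyone updates the shared estimates.
    x_hat += np.array([top_k(x[i] - x_hat[i]) for i in range(n)])
    y_hat += np.array([top_k(y[i] - y_hat[i]) for i in range(n)])
    # Gradient-tracking step, mixing through the compressed estimates.
    x = x + gamma * (W @ x_hat - x_hat) - eta * y
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    y = y + gamma * (W @ y_hat - y_hat) + g_new - g_old
    g_old = g_new

x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(Ai.T @ bi for Ai, bi in zip(A, b)))
print("distance to optimum:", np.linalg.norm(x - x_star, axis=1).max())
```

The key design choice the sketch illustrates is difference compression: each agent compresses x_i - x_hat_i rather than x_i itself, so the compression error shrinks as the iterates settle, which is what makes biased compressors such as top-k compatible with consensus and with the linear-rate claims above.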
In the last few years, various communication compression techniques have emerged as an indispensable...
Distributed optimization increasingly plays a central role in economical and sustainable operation of...
This dissertation studies the performance and linear convergence properties of primal-dual methods...
Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized ...
Data-rich applications in machine learning and control have motivated intense research on large-s...
Information compression is essential to reduce communication cost in distributed optimization over p...
We study COMP-AMS, a distributed optimization framework based on gradient averaging and adaptive AMS...
Distributed optimization methods are often applied to solving huge-scale problems like training neur...
We consider a generic decentralized constrained optimization problem over static, directed communica...
We develop multi-step gradient methods for network-constrained optimization of strongly convex funct...
Coded distributed computation has become common practice for performing gradient descent on large da...
Decentralized optimization is a powerful paradigm that finds applications in engineering and learnin...
This paper considers distributed nonconvex optimization with the cost functions being distributed ov...
We study lossy gradient compression methods to alleviate the communication bottleneck in data-parall...