In this article, we consider a distributed convex optimization problem over time-varying undirected networks. We propose a dual method, primal averaged network dual ascent (PANDA), which is proven to converge R-linearly to the optimal point provided that the agents' objective functions are strongly convex and have Lipschitz continuous gradients. Like dual decomposition, PANDA requires half the number of variable exchanges per iteration compared to methods based on DIGing, and can provide improved practical performance, as demonstrated empirically.
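To make the dual-method setting concrete, here is a minimal sketch of generic dual gradient ascent for a two-agent consensus problem. This is plain dual decomposition, not the PANDA algorithm itself; the quadratic objectives f_i(x) = 0.5*(x - a_i)^2, the step size alpha, and the iteration count are illustrative assumptions.

```python
# Generic dual ascent sketch for: minimize f1(x1) + f2(x2)
# subject to the consensus constraint x1 = x2.
# NOT the PANDA method; quadratics and step size are assumptions.

def dual_ascent(a1, a2, alpha=0.4, iters=200):
    lam = 0.0  # multiplier for the coupling constraint x1 - x2 = 0
    for _ in range(iters):
        # Primal minimizers of the Lagrangian have closed form here:
        # argmin_x 0.5*(x - a1)**2 + lam*x  ->  x1 = a1 - lam
        # argmin_x 0.5*(x - a2)**2 - lam*x  ->  x2 = a2 + lam
        x1 = a1 - lam
        x2 = a2 + lam
        # The dual gradient is the constraint residual x1 - x2.
        lam += alpha * (x1 - x2)
    return x1, x2

x1, x2 = dual_ascent(1.0, 3.0)
# At optimality both agents agree on the average of a1 and a2.
```

Each agent only needs to exchange its current primal estimate (one variable per iteration), which is the kind of communication saving the abstract attributes to dual methods relative to DIGing-based schemes.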
We introduce primal and dual stochastic gradient oracle methods for distributed convex optimization ...
In this work, we consider the distributed optimization of non-smooth c...
We devise a distributed asynchronous gradient-based algorithm to enable a network of comput...
In this paper we consider a distributed convex optimization problem over time-varying undirected net...
We design and analyze a fully distributed algorithm for convex constrained optimization in networks ...
This paper proposes a novel class of distributed continuous-time coordination algorithms to solve ne...
We investigate the convergence rate of the recently proposed subgradient-push method for di...
In this paper we introduce a novel algorithmic framework for non-convex distributed optimization in ...
In recent years, significant progress has been made in the field of distributed optimization algorit...
This work proposes a theoretical analysis of distributed optimization of conve...
We consider a general class of convex optimization problems over time-varying, multi-agent networks,...
Recently, distributed convex optimization has received much attention from many researchers. C...
In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed...
We present a distributed proximal-gradient method for optimizing the average of convex func...