In this paper we consider distributed optimization problems in which the cost function is separable (i.e., a sum of possibly non-smooth functions all sharing a common variable) and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose an asynchronous, distributed optimization algorithm over an undirected topology, based on a proximal gradient update on the dual problem. We show that by means of a proper choice of primal variables, the dual problem is separable and the dual variables can be stacked into separate blocks. This allows us to show that a distributed gossip update can be obtained by means of a randomiz...
Based on the idea of randomized coordinate descent of $\alpha$-average...
We consider a distributed optimization problem over a multi-agent network, in which the sum of sever...
We consider a general class of convex optimization problems over time-varying, multi-agent networks,...
In this paper we consider distributed optimization problems in which the cost function is separab...
In this paper we consider distributed optimization problems in which the cost function is separable,...
In this paper we consider a distributed optimization scenario in which the aggregate objective fun...
This paper proposes TriPD, a new primal-dual algorithm for minimizing the sum of a Lipschitz-differe...
In this paper, we consider a network of processors that want to cooperatively solve a large-scale, c...
This paper introduces a novel distributed algorithm over static directed graphs for solving big data...
In this paper, we consider a novel partitioned framework for distributed optimization in peer-to-pee...
The paper addresses large-scale, convex optimization problems that need to be solved in a distribute...
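Several of the abstracts above revolve around proximal gradient updates for composite objectives (a smooth strongly convex term plus a non-smooth convex regularizer or constraint term). A minimal centralized sketch of one such iteration is below; the problem data (A, b, the l1 weight lam) are illustrative assumptions, not taken from any of the cited papers, and the distributed/asynchronous block structure they describe is omitted here.

```python
import numpy as np

# Sketch of a proximal gradient iteration for
#     min_x  f(x) + g(x),
# with f smooth (a least-squares term here) and g convex non-smooth
# (the l1 norm here, whose prox is componentwise soft-thresholding).

def soft_threshold(v, t):
    """Prox of t * ||.||_1: shrink each component toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Iterate x <- prox_{step*g}(x - step * grad f(x))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of 0.5 * ||Ax - b||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative data: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz const. of grad f
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```

The step size 1/L (with L the spectral norm of A squared) guarantees descent for this smooth term; in the dual and block-coordinate settings discussed above, each agent would instead apply a prox step to its own block of variables.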