Abstract—We investigate the convergence rate of the recently proposed subgradient-push method for distributed optimization over time-varying directed graphs. The subgradient-push method can be implemented in a distributed way without requiring knowledge of either the number of agents or the graph sequence; each node is only required to know its out-degree at each time. Our main result is a convergence rate of O((ln t)/t) for strongly convex functions with Lipschitz gradients even if only stochastic gradient samples are available; this is asymptotically faster than the O((ln t)/√t) rate previously known for (general) convex functions.
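Since this is the only entry whose abstract fully describes its algorithm, a minimal simulation sketch of the subgradient-push update may help make it concrete. Everything beyond the update itself is an illustrative assumption, not taken from the paper: the quadratic objectives f_i(z) = ½‖z − a_i‖², the Gaussian gradient noise, the random one-out-neighbor graph sequence, and the step size α(t) = 1/t.

```python
# Sketch of the subgradient-push update from the abstract above, under the
# assumptions stated in the lead-in. Each node i keeps iterates x_i and a
# push-sum weight y_i; a node only ever uses its own out-degree.
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 5, 3, 2000                    # nodes, dimension, iterations
targets = rng.normal(size=(n, d))       # f_i(z) = 0.5 * ||z - targets[i]||^2 (strongly convex)

x = np.zeros((n, d))                    # x_i(0) = 0
y = np.ones(n)                          # push-sum weights, y_i(0) = 1

for t in range(1, T + 1):
    # Assumed time-varying directed graph: each node keeps a self-loop and
    # one random out-neighbor per step.
    out = [{i, int(rng.integers(n))} for i in range(n)]
    deg = np.array([len(s) for s in out], dtype=float)

    # Push step: node j sends x_j / d_j and y_j / d_j along its out-links.
    w = np.zeros_like(x)
    y_new = np.zeros(n)
    for j in range(n):
        for i in out[j]:
            w[i] += x[j] / deg[j]
            y_new[i] += y[j] / deg[j]
    y = y_new

    z = w / y[:, None]                  # de-biased estimates z_i = w_i / y_i
    grads = z - targets                 # exact gradient of f_i at z_i ...
    grads += 0.1 * rng.normal(size=grads.shape)  # ... plus noise: stochastic samples
    x = w - (1.0 / t) * grads           # step size alpha(t) = 1 / t

print("consensus estimate:", z.mean(axis=0))
print("true minimizer:    ", targets.mean(axis=0))
```

With these choices each f_i is 1-strongly convex, so the sum is minimized at the mean of the a_i, and the de-biased iterates z_i should cluster around it; note the random graph sequence here is only connected in expectation, which is weaker than the connectivity assumptions under which the paper proves its O((ln t)/t) rate.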
We study diffusion and consensus based optimization of a sum of unknown convex objective functions o...
We consider distributed optimization in random networks where N nodes cooperatively minimize the ...
Abstract—We consider distributed optimization in random networks where nodes cooperatively minimize...
We establish the O(1/k) convergence rate for distributed stochastic gradient methods that operate ov...
In this article, we consider a distributed convex optimization problem over time-varying undirected ...
In this paper we consider a distributed convex optimization problem over time-varying undirected net...
We study distributed optimization problems when N nodes minimize the sum of their individual cost...
In large-scale optimization problems, distributed asynchronous stochastic gradient descent (DASGD) i...
We study the distributed stochastic gradient (D-SG) method and its accelerated var...
Abstract—We study distributed optimization problems when nodes minimize the sum of their individual ...
A lot of effort has been invested into characterizing the convergence rates of gradient based algori...
This work proposes a theoretical analysis of distributed optimization of conve...
Abstract—We present a distributed proximal-gradient method for optimizing the average of convex func...
We consider distributed optimization where N nodes in a generic, connected network minimize the sum ...
Abstract—We devise a distributed asynchronous gradient-based algorithm to enable a network of comput...