Driven by the need to solve increasingly complex optimization problems in signal processing and machine learning, recent years have seen rising interest in the behavior of gradient-descent-based algorithms in non-convex environments. Most works on distributed non-convex optimization focus on the deterministic setting, where exact gradients are available at each agent. In this work, we consider stochastic cost functions, where exact gradients are replaced by stochastic approximations and the resulting gradient noise persistently seeps into the dynamics of the algorithm. We establish that the diffusion algorithm continues to yield meaningful estimates in these more challenging, non-convex environments, in the sense that (a) despite the...
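The diffusion strategy referred to above can be illustrated with a minimal sketch. This is an illustrative adapt-then-combine (ATC) diffusion update on a toy quadratic problem, not the paper's own experiment; the network size, combination matrix, step size, and noise level are all assumptions chosen for the example. Each agent takes a local stochastic-gradient (adapt) step and then averages with its neighbors (combine), and the persistent gradient noise leaves the iterates fluctuating in a small neighborhood of the minimizer rather than converging exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4       # number of agents (illustrative)
d = 2       # parameter dimension
mu = 0.05   # constant step size
w_star = np.array([1.0, -2.0])  # common minimizer of the toy quadratics

# Doubly stochastic combination matrix for a ring of 4 agents (assumed topology)
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def stochastic_grad(w):
    # Exact gradient of (1/2)||w - w_star||^2 plus zero-mean gradient noise,
    # mimicking a stochastic approximation of the true gradient
    return (w - w_star) + 0.1 * rng.standard_normal(d)

W = np.zeros((N, d))  # row k holds agent k's current iterate
for _ in range(2000):
    # Adapt: each agent takes a local stochastic-gradient step
    psi = np.array([W[k] - mu * stochastic_grad(W[k]) for k in range(N)])
    # Combine: each agent averages its neighbors' intermediate iterates
    W = A.T @ psi

# With a constant step size, iterates settle into a small noise-driven
# neighborhood of w_star instead of converging exactly
print(np.linalg.norm(W - w_star, axis=1))
```

Shrinking `mu` shrinks the steady-state neighborhood (at the cost of slower adaptation), which is the usual constant-step-size trade-off this line of work analyzes.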
We consider distributed multitask learning problems over a network of agents where each agent is int...
Abstract—This paper investigates the problem of distributed stochastic approximation in multi-agent ...
Abstract. In this paper we study the effect of stochastic errors on two constrained incremental sub-...
The first part of this dissertation considers distributed learning problems over networked agents. T...
The diffusion strategy for distributed learning from streaming data employs local stochastic gradien...
We study the consensus decentralized optimization problem where the objective function is the averag...
We develop a Distributed Event-Triggered Stochastic GRAdient Descent (DETSGRAD) algorithm for solvin...
We study distributed stochastic nonconvex optimization in multi-agent networks. We introduce a novel...
© 2019 Massachusetts Institute of Technology. We analyze the effect of synchronization on distribute...
We establish the O(1/k) convergence rate for distributed stochastic gradient methods that operate ov...
In this dissertation, we study optimization, adaptation, and learning problems over connected networ...
We consider networks of agents cooperating to minimize a global objective, modeled as the aggregate ...
This article addresses a distributed optimization problem in a communication n...
We analyze the global and local behavior of gradient-like flows under stochastic errors towards the ...
Abstract—We introduce a new framework for the convergence analysis of a class of distributed constra...