We propose graph-dependent implicit regularisation strategies for distributed stochastic subgradient descent (Distributed SGD) for convex problems in multi-agent learning. Under the standard assumptions of convexity, Lipschitz continuity, and smoothness, we establish statistical learning rates that retain, up to logarithmic terms, centralised statistical guarantees through implicit regularisation (step size tuning and early stopping) with appropriate dependence on the graph topology. Our approach avoids the need for explicit regularisation in decentralised learning problems, such as adding constraints to the empirical risk minimisation rule. Particularly for distributed methods, the use of implicit regularisation allows the algorithm to rem...
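A minimal sketch of the kind of algorithm the abstract describes: distributed stochastic (sub)gradient descent over a communication graph, where regularisation is implicit and comes only from the step size and the stopping time, both of which would be tuned together with the graph topology. The quadratic loss, the mixing matrix, and the helper names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def distributed_sgd(W, local_data, eta, T, dim, rng=None):
    """Run T rounds of gossip averaging followed by local stochastic gradient steps.

    W          : (n_agents, n_agents) doubly stochastic mixing matrix of the graph.
    local_data : list of (X_i, y_i) pairs, one dataset per agent.
    eta        : constant step size (implicit regularisation knob #1).
    T          : number of iterations, i.e. the early-stopping time (knob #2).
    """
    rng = rng or np.random.default_rng(0)
    n_agents = W.shape[0]
    x = np.zeros((n_agents, dim))          # one local iterate per agent

    for _ in range(T):
        x = W @ x                          # consensus / gossip averaging step
        for i, (X_i, y_i) in enumerate(local_data):
            j = rng.integers(len(y_i))     # sample one local example
            residual = X_i[j] @ x[i] - y_i[j]
            grad = residual * X_i[j]       # stochastic gradient of the squared loss
            x[i] -= eta * grad             # local SGD step
    return x.mean(axis=0)                  # network-averaged model

# Usage sketch: build W from the graph (e.g. Metropolis weights), split the sample
# across agents, and choose eta and T jointly with the spectral gap of W; a small T
# (early stopping) plays the role that an explicit penalty would otherwise play.
```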
The generalization ability often determines the success of machine learning algorithms in practice. ...
We develop a Distributed Event-Triggered Stochastic GRAdient Descent (DETSGRAD) algorithm for solvin...
We establish a data-dependent notion of algorithmic stability for Stochastic Gradient Descent (SGD),...
The first part of this dissertation considers distributed learning problems over networked agents. T...
This letter proposes a general regularization framework for inference over multitask networks. The o...
We study distributed stochastic nonconvex optimization in multi-agent networks. We introduce a novel...
In this paper we study the effect of stochastic errors on two constrained incremental sub-...
We consider a distributed multi-agent network system where the goal is to minimize a sum of convex o...
Distributed convex optimization refers to the task of minimizing the aggregate sum of convex risk fu...
We study the consensus decentralized optimization problem where the objective function is the averag...
The stability and generalization of stochastic gradient-based methods provide valuable insights into...
We analyse the learning performance of Distributed Gradient Descent in the context of multi-agent de...
This work presents and studies a distributed algorithm for solving optimization problems over networ...
In this paper, a new structure for cooperative learning automata called extended learning ...