We analyse the learning performance of Distributed Gradient Descent in the context of multi-agent decentralised non-parametric regression with the square loss function when i.i.d. samples are assigned to agents. We show that if agents hold sufficiently many samples with respect to the network size, then Distributed Gradient Descent achieves optimal statistical rates with a number of iterations that scales, up to a threshold, with the inverse of the spectral gap of the gossip matrix divided by the number of samples owned by each agent raised to a problem-dependent power. The presence of the threshold comes from statistics. It encodes the existence of a "big data" regime where the number of required iterations does not depend on the network topology.
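As a concrete illustration of the setup described above, the following is a minimal sketch of Distributed Gradient Descent for decentralised least-squares regression: each agent takes a local gradient step on its own i.i.d. samples and averages its iterate with its neighbours through a doubly stochastic gossip matrix. The ring topology, Metropolis weights, data sizes, step size, and iteration count are illustrative assumptions, not the paper's setup; the spectral gap mentioned in the abstract is computed explicitly as one minus the second-largest eigenvalue modulus of the gossip matrix.

```python
# Minimal sketch of Distributed Gradient Descent (DGD) for decentralised
# least-squares regression. Network, gossip matrix, data, step size and
# iteration count are illustrative assumptions, not the paper's experiments.
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_samples, dim = 8, 50, 5           # assumed problem sizes
w_true = rng.normal(size=dim)                 # hypothetical ground truth

# Each agent holds its own i.i.d. sample split (X_i, y_i).
X = [rng.normal(size=(n_samples, dim)) for _ in range(n_agents)]
y = [Xi @ w_true + 0.1 * rng.normal(size=n_samples) for Xi in X]

# Doubly stochastic gossip matrix for a ring graph (Metropolis weights:
# each node has degree 2, so each edge gets weight 1/3).
P = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        P[i, j] = 1.0 / 3.0
    P[i, i] = 1.0 - P[i].sum()

# Spectral gap of the gossip matrix: 1 minus the second-largest
# eigenvalue modulus.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))
spectral_gap = 1.0 - eigvals[-2]

W = np.zeros((n_agents, dim))                 # one local iterate per agent
step = 0.05
for t in range(500):
    # Local gradient of the square loss at each agent ...
    grads = np.stack([Xi.T @ (Xi @ wi - yi) / n_samples
                      for Xi, yi, wi in zip(X, y, W)])
    # ... followed by a gossip (consensus) averaging step.
    W = P @ W - step * grads

print(f"spectral gap: {spectral_gap:.3f}")
print(f"mean distance to w_true: {np.linalg.norm(W - w_true, axis=1).mean():.3f}")
```

On a ring the spectral gap shrinks as the network grows, which is exactly where the "big data" regime in the abstract matters: with enough samples per agent, the iteration count stops paying for the small gap.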
Decentralized learning over distributed datasets can have significantly different data distributions...
Decentralized learning offers privacy and communication efficiency when data are naturally distribut...
We consider distributed optimization in random networks where nodes cooperatively minimize...
Machine learning models are often trained on data stored across multiple computers connected by a ne...
We consider decentralized online supervised learning where estimators are chos...
Distributed learning provides an attractive framework for scaling the learning task by sharing the c...
The first part of this dissertation considers distributed learning problems over networked agents. T...
We study the consensus decentralized optimization problem where the objective function is the averag...
In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed...
We consider distributed optimization in random networks where N nodes cooperatively minimize the ...
We establish the O(1/k) convergence rate for distributed stochastic gradient methods that operate ov...
We consider stochastic optimization problems defined over reproducing kernel H...
Machine learning models can deal with data samples scattered among distributed agents, each of which...
We investigate the performance of distributed learning for large-scale linear regression where the m...
We derive a nonparametric training algorithm which asymptotically achieves the minimum possible erro...