We consider distributed stochastic variational inequalities (VIs) on unbounded domains with problem data that is heterogeneous (non-IID) and distributed across many devices. We make a very general assumption on the computational network that, in particular, covers the settings of fully decentralized computation with time-varying networks and the centralized topologies commonly used in Federated Learning. Moreover, multiple local updates can be made on the workers to reduce the communication frequency between them. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly monotone, monotone, and non-monotone (when a Minty solution exists) settings. The provi...
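The abstract describes an extragradient scheme that interleaves local updates with communication rounds. Below is a minimal NumPy sketch of one such round, under stated assumptions: the hypothetical `oracle(i, z)` stands in for a stochastic (e.g., minibatch) estimate of worker i's operator F_i at z, and full averaging stands in for the general time-varying, gossip-based communication the abstract allows. It illustrates the update structure, not the paper's exact method.

    import numpy as np

    def local_extragradient_round(z, oracle, gamma, local_steps):
        """One communication round: `local_steps` stochastic extragradient
        updates per worker, followed by an averaging (communication) step.

        z           -- list of per-worker iterates (one ndarray per worker)
        oracle(i,z) -- stochastic estimate of worker i's operator F_i(z)
                       (hypothetical signature, e.g. a minibatch estimate)
        gamma       -- step size
        """
        for _ in range(local_steps):
            for i in range(len(z)):
                # extrapolation step: probe the operator direction
                z_half = z[i] - gamma * oracle(i, z[i])
                # update step: re-evaluate the operator at the extrapolated point
                z[i] = z[i] - gamma * oracle(i, z_half)
        # communication: full averaging here; the abstract's setting admits
        # time-varying gossip matrices instead of exact averaging
        z_bar = sum(z) / len(z)
        return [z_bar.copy() for _ in z]

    # toy usage: heterogeneous linear operators F_i(z) = A_i z (solution z* = 0)
    rng = np.random.default_rng(0)
    A = [(i + 1) * np.eye(2) for i in range(4)]
    oracle = lambda i, v: A[i] @ v + 0.01 * rng.standard_normal(2)
    z = [rng.standard_normal(2) for _ in range(4)]
    for _ in range(200):
        z = local_extragradient_round(z, oracle, gamma=0.05, local_steps=2)

The extrapolation-then-update pair is what distinguishes extragradient from plain gradient descent-ascent and is what makes it suitable for monotone VIs, where a single gradient step can cycle.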
This paper proposes a Decentralized Stochastic Gradient Descent (DSGD) algorithm to solve distribute... (a generic sketch of the DSGD update appears after this list).
As an emerging paradigm considering data privacy and transmission efficiency, decentralized learning...
In this paper we consider online distributed learning problems. Online distributed learning refers t...
We study the consensus decentralized optimization problem where the objective function is the averag...
This paper focuses on the distributed optimization of stochastic saddle point problems. The first pa...
Decentralized optimization with time-varying networks is an emerging paradigm in machine learning. I...
We study stochastic decentralized optimization for the problem of training machine learning models w...
Federated learning, where algorithms are trained across multiple decentralized devices without shari...
The first part of this dissertation considers distributed learning problems over networked agents. T...
One of the key challenges in decentralized and federated learning is to design algorithms that effic...
Distributed optimization has a rich history. It has demonstrated its effectiveness in many machine l...
Decentralized stochastic optimization methods have gained a lot of attention recently, mainly becaus...
We consider the problem of training machine learning models on distributed dat...
Decentralized optimization, particularly the class of decentralized composite convex optimization (D...
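As flagged in the DSGD entry above, the following is a generic sketch of the classical decentralized SGD update many of these listings build on: each worker gossip-averages its iterate with its neighbors through a doubly stochastic mixing matrix W, then takes a local stochastic gradient step. The mixing matrix, objective, and step size below are illustrative assumptions, not taken from any one paper in the list.

    import numpy as np

    def dsgd_step(X, grad, W, lr):
        """One DSGD round: gossip-average the iterates with mixing matrix W,
        then take a local stochastic gradient step on each worker.

        X    -- (n_workers, dim) stacked per-worker iterates
        grad -- callable: grad(X) returns (n_workers, dim) stochastic gradients
        W    -- (n_workers, n_workers) doubly stochastic mixing matrix
        lr   -- step size
        """
        return W @ X - lr * grad(X)

    # toy usage: 3 workers with heterogeneous quadratics f_i(x) = 0.5*c_i*||x||^2
    rng = np.random.default_rng(1)
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])      # doubly stochastic mixing weights
    c = np.array([1.0, 2.0, 3.0])[:, None]  # per-worker curvature (non-IID data)
    grad = lambda X: c * X + 0.01 * rng.standard_normal(X.shape)
    X = rng.standard_normal((3, 2))
    for _ in range(300):
        X = dsgd_step(X, grad, W, lr=0.1)   # iterates reach consensus near x* = 0

The key design choice is that communication (the W @ X term) and local computation (the gradient step) happen in a single fused update, which is why DSGD's convergence depends on the spectral gap of W.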