Variational inequalities in general, and saddle point problems in particular, are increasingly relevant in machine learning applications, including adversarial learning, GANs, transport, and robust optimization. With the growing data and problem sizes needed to train high-performing models, we must rely on parallel and distributed computing. However, in distributed training, communication among the compute nodes is a key bottleneck, and this problem is exacerbated for high-dimensional and over-parameterized models. Due to these considerations, it is important to equip existing methods with strategies that reduce the volume of transmitted information during training while obtaining...
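A common way to reduce the volume of transmitted information, referenced across several of the abstracts below, is to apply an unbiased compression operator to each gradient before sending it. As a hedged illustration (not the specific method of any paper listed here), the sketch below implements QSGD-style random dithering: each coordinate is rounded to one of a few discrete levels with randomized rounding, so the compressed vector equals the original in expectation while needing far fewer bits to encode.

```python
import numpy as np

def quantize(v, levels=4, rng=None):
    """QSGD-style random dithering (a generic sketch of unbiased compression).

    Each coordinate of v is scaled by the vector norm and randomly rounded to
    one of `levels` uniform levels, so that E[quantize(v)] = v exactly.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * levels        # each entry lies in [0, levels]
    floor = np.floor(scaled)
    prob_up = scaled - floor                  # probability of rounding up
    rounded = floor + (rng.random(v.shape) < prob_up)
    # Each coordinate is now an integer in {0, ..., levels}: cheap to transmit
    # alongside the single scalar `norm` and one sign bit per coordinate.
    return np.sign(v) * norm * rounded / levels
```

Averaging many independent compressions recovers the original vector, which is what makes such operators compatible with stochastic-gradient analyses: the compression only inflates the variance, not the bias.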
This dissertation deals with developing optimization algorithms which can be distributed over a netw...
Training a large-scale model over a massive data set is an extremely computation and storage intensi...
Decentralized learning algorithms empower interconnected devices to share data and computational res...
This paper focuses on the distributed optimization of stochastic saddle point problems. The first pa...
Distributed machine learning bridges the traditional fields of distributed systems and machine learn...
We consider distributed optimization over several devices, each sending incremental model updates to...
Distributed optimization methods are often applied to solving huge-scale problems like training neur...
We introduce a framework - Artemis - to tackle the problem of learning in a distributed or federated...
We develop a new approach to tackle communication constraints in a distributed learning problem with...
In the last few years, various communication compression techniques have emerged as an indispensable...
In distributed optimization and machine learning, multiple nodes coordinate to solve large problems....
We present a new method that includes three key components of distributed optimization and federated...
In distributed training of deep models, the transmission volume of stochastic gradients (SG) imposes...
In recent centralized nonconvex distributed learning and federated learning, local methods are one o...