We present and study a distributed optimization algorithm that employs a stochastic dual coordinate ascent method. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often outperform stochastic gradient descent methods on regularized loss minimization problems, yet little effort has been devoted to studying them in a distributed framework. We make progress along this line by presenting a distributed stochastic dual coordinate ascent algorithm on a star network, with an analysis of the tradeoff between computation and communication. We verify our analysis by experiments on real data sets. Moreover, we compare the proposed algorithm with distributed stochastic gradient descent methods...
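Since this is the only complete abstract in the list, a minimal single-process sketch of the idea it describes may help: workers in a star network run local stochastic dual coordinate ascent steps on their own data partitions, and a central node aggregates their updates each communication round. The sketch below uses L2-regularized least squares, averages the workers' updates (a deliberately conservative aggregation choice), and all names (`distributed_sdca`, `n_workers`, `local_steps`) are illustrative assumptions rather than the paper's exact update rule.

```python
# Simulated star-network SDCA for L2-regularized least squares.
# This is a hedged sketch of the general idea, not the paper's algorithm.
import numpy as np

def distributed_sdca(X, y, lam=0.1, n_workers=4, rounds=50, local_steps=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)                 # dual variables, one per example
    w = np.zeros(d)                     # primal iterate, kept equal to X^T alpha / (lam * n)
    parts = np.array_split(rng.permutation(n), n_workers)  # static data partition per worker

    for _ in range(rounds):             # one round = one communication with the center
        delta_w_sum = np.zeros(d)
        delta_alpha = np.zeros(n)
        for part in parts:              # each worker updates locally against a copy of w ...
            w_local = w.copy()
            for i in rng.choice(part, size=local_steps):
                xi = X[i]
                # closed-form dual coordinate maximization for squared loss
                residual = y[i] - xi @ w_local - alpha[i] - delta_alpha[i]
                step = residual / (1.0 + xi @ xi / (lam * n))
                delta_alpha[i] += step
                w_local += step * xi / (lam * n)
            delta_w_sum += w_local - w
        # ... and the center averages the updates and broadcasts the new iterate
        alpha += delta_alpha / n_workers
        w += delta_w_sum / n_workers
    return w
```

With the number of rounds fixed, increasing `local_steps` spends more local computation per communication, which is the computation/communication tradeoff the abstract refers to.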
The first part of this dissertation considers distributed learning problems over networked agents. T...
We consider a distributed stochastic optimization problem in networks with fin...
This article addresses a distributed optimization problem in a communication n...
Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine lear...
We propose distributed algorithms for high-dimensional sparse optimization. In...
We consider the problem of communication efficient distributed optimization where multiple nodes exc...
This dissertation deals with developing optimization algorithms which can be distributed over a netw...
We introduce a proximal version of the dual coordinate ascent method. We demonstrate how the derived alg...
We introduce a proximal version of the stochastic dual coordinate ascent method and show how to acce...
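As a generic illustration of the building block that such proximal variants rely on (and not the specific update rule of the abstract above), the proximal operator of the L1 penalty reduces to soft-thresholding; the snippet below, with hypothetical names, shows this standard operation.

```python
# Soft-thresholding: the proximal operator of the L1 penalty.
# Generic background for "proximal" methods, not the paper's specific update.
import numpy as np

def prox_l1(v, tau):
    """prox_{tau * ||.||_1}(v) = argmin_w 0.5 * ||w - v||^2 + tau * ||w||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([0.8, -0.05, 0.3, -1.2])
print(prox_l1(v, tau=0.1))   # entries below tau are zeroed, the rest shrink by tau
```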
Optimization has been the workhorse of solving machine learning problems. However, the efficiency of...
The unprecedented rate at which data is being created and stored calls for scalable optimization te...
Stochastic and data-distributed optimization algorithms have received considerable attention from the mac...
This study addresses the stochastic optimization of a function unknown in closed form which can only...