Distributed optimization has a rich history and has demonstrated its effectiveness in many machine learning applications. In this paper we study a subclass of distributed optimization, namely decentralized optimization in a non-smooth setting. Decentralized means that $m$ agents (machines) working in parallel on one problem communicate only with neighboring agents (machines), i.e. there is no (central) server through which the agents communicate. By the non-smooth setting we mean that each agent has a convex stochastic non-smooth function; that is, agents can hold and communicate information only about the value of the objective function, which corresponds to a gradient-free oracle. In this paper, to minimize the global objective functio...
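To make the gradient-free (zeroth-order) oracle concrete, the sketch below, which is not the algorithm proposed in this paper, shows one generic decentralized round: each agent forms a two-point gradient estimate from function values only and then averages its iterate with neighbors through a doubly stochastic mixing matrix. The local objectives, the mixing matrix `W`, the smoothing radius `tau`, and all other names are illustrative assumptions.

```python
# Minimal sketch (under the assumptions stated above) of decentralized
# zeroth-order optimization: function-value queries only, plus gossip mixing.
import numpy as np

def two_point_grad_estimate(f_i, x, tau=1e-4, rng=np.random):
    """Randomized two-point estimate of grad f_i(x) using only function values."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                     # random unit direction
    d = x.size
    return d * (f_i(x + tau * e) - f_i(x - tau * e)) / (2 * tau) * e

def decentralized_zo_step(X, W, funcs, step):
    """One round: local zeroth-order step followed by gossip averaging.

    X     -- (m, d) array, row i is agent i's current iterate
    W     -- (m, m) doubly stochastic mixing matrix of the communication graph
    funcs -- list of m local objectives, each mapping R^d -> R
    """
    G = np.stack([two_point_grad_estimate(f, x) for f, x in zip(funcs, X)])
    # Agents exchange iterates only with neighbors (the nonzero entries of W).
    return W @ (X - step * G)

# Toy usage: 3 agents jointly minimize the average of |x - c_i| losses.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, d = 3, 5
    centers = rng.standard_normal((m, d))
    funcs = [lambda x, c=c: np.sum(np.abs(x - c)) for c in centers]
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])          # doubly stochastic
    X = np.zeros((m, d))
    for t in range(2000):
        X = decentralized_zo_step(X, W, funcs, step=0.5 / np.sqrt(t + 1))
    print("consensus iterate:", X.mean(axis=0))
```

The two-point estimator is the standard device that replaces an explicit gradient when only objective values are observable; the gossip step is what makes the scheme decentralized, since no central server aggregates the iterates.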
Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized ...
Decentralized learning over distributed datasets can have significantly different data distributions...
Distributed adaptive stochastic gradient methods have been widely used for large-scale nonconvex opt...
We consider decentralized gradient-free optimization of minimizing Lipschitz continuous functions th...
The non-smooth finite-sum minimization is a fundamental problem in machine learning. This paper deve...
Decentralized optimization with time-varying networks is an emerging paradigm in machine learning. I...
In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-a...
We study the consensus decentralized optimization problem where the objective function is the averag...
The first part of this dissertation considers distributed learning problems over networked agents. T...
In this paper, we propose a distributed stochastic second-order proximal method that enables agents ...
In this work, we consider the distributed optimization of non-smooth c...
In this letter, we first propose a Zeroth-Order cOordinate ...
Large scale convex-concave minimax problems arise in numerous applications, including game theory, r...
This article addresses a distributed optimization problem in a communication n...
Decentralized optimization, particularly the class of decentralized composite convex optimization (D...