We consider decentralized gradient-free optimization for minimizing Lipschitz continuous functions that satisfy neither smoothness nor convexity assumptions. We propose two novel gradient-free algorithms, the Decentralized Gradient-Free Method (DGFM) and its variant, the Decentralized Gradient-Free Method$^{+}$ (DGFM$^{+}$). Based on the techniques of randomized smoothing and gradient tracking, DGFM only requires computing the zeroth-order oracle of a single sample in each iteration, making it less demanding in terms of computational resources for individual computing nodes. Theoretically, DGFM achieves a complexity of $\mathcal{O}(d^{3/2}\delta^{-1}\varepsilon^{-4})$ for obtaining a $(\delta,\varepsilon)$-Goldstein stationary point. DGFM$^{+}$ …
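To make the two building blocks named above concrete, here is a minimal, self-contained Python/NumPy sketch of (i) a two-point randomized-smoothing zeroth-order gradient estimator and (ii) a decentralized gradient-tracking update over a mixing matrix. It is an illustration under stated assumptions, not the authors' DGFM implementation: the node count, mixing matrix `W`, smoothing radius `delta`, step size `lr`, and the toy $\ell_1$ local objectives are all illustrative choices.

```python
# Sketch of randomized smoothing + gradient tracking (illustrative, not the paper's exact DGFM).
import numpy as np

def zo_grad(f, x, delta, rng):
    """Two-point randomized-smoothing estimator of the gradient of the
    delta-smoothed surrogate of f at x, using a single random direction."""
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)                     # uniform direction on the unit sphere
    d = x.size
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def gradient_tracking_step(X, Y, G_prev, W, fs, delta, lr, rng):
    """One synchronous round for n nodes.
    X: (n, d) local iterates, Y: (n, d) gradient trackers,
    G_prev: (n, d) zeroth-order estimates at the current iterates,
    W: (n, n) doubly stochastic mixing matrix, fs: list of n local objectives."""
    X_new = W @ X - lr * Y                     # consensus step plus descent along the tracker
    G_new = np.stack([zo_grad(fs[i], X_new[i], delta, rng) for i in range(X.shape[0])])
    Y_new = W @ Y + G_new - G_prev             # track the network-average gradient estimate
    return X_new, Y_new, G_new

# Toy usage: 4 nodes on a ring, each holding a nonsmooth local objective |x - b_i|_1.
rng = np.random.default_rng(0)
n, d = 4, 10
b = rng.normal(size=(n, d))
fs = [lambda x, bi=b[i]: np.abs(x - bi).sum() for i in range(n)]
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
X = rng.normal(size=(n, d))
G = np.stack([zo_grad(fs[i], X[i], 0.1, rng) for i in range(n)])
Y = G.copy()                                   # initialize trackers with local estimates
for _ in range(200):
    X, Y, G = gradient_tracking_step(X, Y, G, W, fs, delta=0.1, lr=0.01, rng=rng)
```

Note that each node queries only function values (two evaluations of its own objective per round), matching the single-sample zeroth-order oracle requirement described in the abstract; the tracker `Y` lets every node follow an estimate of the global average gradient despite only communicating with its ring neighbors through `W`.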