We present a new family of min-max optimization algorithms that automatically exploit the geometry of the gradient data observed at earlier iterations to perform more informative extra-gradient steps in later ones. Thanks to this adaptation mechanism, the proposed method automatically detects whether the problem is smooth or not, without requiring any prior tuning by the optimizer. As a result, the algorithm simultaneously achieves order-optimal convergence rates, i.e., it converges to an ε-optimal solution within O(1/ε) iterations in smooth problems, and within O(1/ε²) iterations in non-smooth ones. Importantly, these guarantees do not require any of the standard boundedness or Lipschitz continuity conditions that are...
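To make the extra-gradient template concrete, here is a minimal sketch of an extra-gradient loop whose step-size adapts to the accumulated gradient data, in the spirit of the adaptation mechanism described above. The AdaGrad-style step-size rule, the function names, and the toy bilinear saddle-point problem are illustrative assumptions, not the exact method or guarantees of the paper.

import numpy as np

def adaptive_extragradient(oracle, z0, n_iters=2000, eps=1e-8):
    # Extra-gradient loop with an AdaGrad-style step-size (illustrative sketch only).
    z = np.asarray(z0, dtype=float)
    accum = 0.0                      # running sum of squared oracle-call norms
    avg, weight = np.zeros_like(z), 0.0
    for _ in range(n_iters):
        g = oracle(z)
        accum += float(np.dot(g, g))
        eta = 1.0 / np.sqrt(accum + eps)   # step-size shrinks with observed gradient data
        z_lead = z - eta * g               # exploratory (leading) step
        z = z - eta * oracle(z_lead)       # extra-gradient update from the base point
        avg += eta * z_lead                # step-size-weighted ergodic average
        weight += eta
    return avg / weight

# Toy bilinear saddle-point problem min_x max_y x*y, encoded as the
# monotone operator F(x, y) = (y, -x); its unique saddle point is (0, 0).
F = lambda z: np.array([z[1], -z[0]])
print(adaptive_extragradient(F, [1.0, 1.0]))   # should approach [0, 0]

The step-size here plays the role of the geometry-adaptive learning rate: it requires no prior knowledge of a smoothness constant and is set purely from the gradients observed so far.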
We provide several applications of Optimistic Mirror Descent, an online learning algorithm based on ...
Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent ...
Classical global convergence results for first-order methods rely on uniform smoothness and the Ł...
Several important problems in learning theory and data science involve high-dimensional optimization...
We propose a new family of adaptive first-order methods for a class of convex ...
Min-max optimization is a classic problem with applications in constrained optimization, robust opti...
We present a new algorithm to solve min-max or min-min problems out of the convex world. We use rigi...
Many fundamental machine learning tasks can be formulated as min-max optimization. This motivates us...
We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle...
We study a variant of a recently introduced min-max optimization framework where the max-player is c...
In optimization, one notable gap between theoretical analyses and practice is that converging algori...
In this paper, we propose a class of faster adaptive Gradient Descent Ascent (GDA) methods for solvin...
We consider first-order gradient methods for effectively optimizing a composite objective in the for...