A key challenge in smooth games is that there is no general guarantee for gradient methods to converge to an equilibrium. Recently, Chavdarova et al. (2021) reported the promising empirical observation that Lookahead (Zhang et al., 2019) significantly improves GAN training. Despite this promise, few theoretical guarantees have been established for Lookahead in smooth games. In this work, we establish the first convergence guarantees of Lookahead for smooth games. We present a spectral analysis and provide a geometric explanation of how and when it actually improves convergence around a stationary point. Based on this analysis, we derive sufficient conditions for Lookahead to stabilize or accelerate local convergence in smooth games. Our study re...
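As a point of reference, below is a minimal sketch of the Lookahead wrapper discussed above: fast weights are produced by a few inner optimizer steps, after which the slow weights are interpolated a fraction of the way toward them. The inner loop (simultaneous gradient descent/ascent), the bilinear test problem, and the parameter values are illustrative assumptions, not the exact setting of the cited analysis.

```python
def lookahead_gda(grad_x, grad_y, x0, y0, lr=0.1, k=5, alpha=0.5, outer_steps=100):
    """Lookahead wrapped around simultaneous gradient descent/ascent
    on a two-player zero-sum game (illustrative sketch only)."""
    x_slow, y_slow = float(x0), float(y0)
    for _ in range(outer_steps):
        x, y = x_slow, y_slow                  # fast weights start at the slow weights
        for _ in range(k):                     # k fast inner steps
            gx, gy = grad_x(x, y), grad_y(x, y)
            x, y = x - lr * gx, y + lr * gy    # descent in x, ascent in y
        # slow weights move a fraction alpha toward the fast weights
        x_slow += alpha * (x - x_slow)
        y_slow += alpha * (y - y_slow)
    return x_slow, y_slow

# Example: bilinear game f(x, y) = x * y. Plain simultaneous gradient play
# spirals away from the saddle point (0, 0), while the Lookahead-averaged
# iterates contract toward it.
print(lookahead_gda(lambda x, y: y, lambda x, y: x, 1.0, 1.0))
```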
We investigate the issues of existence and efficiency of lookahead equilibria ...
PPAD and PLS are successful classes that capture the complexity of important game-theoretic problems...
In game-theoretic learning, several agents are simultaneously following their ...
In this paper, we examine the equilibrium tracking and convergence properties of no-regret learning ...
In this work we look at the recent results in policy gradient learning in a general-sum game scenari...
We study the problem of convergence to a stationary point in zero-sum games. We propose competitive ...
In this paper, we examine the convergence rate of a wide range of regularized ...
Owing to their connection with generative adversarial networks (GANs), saddle-...
We provide several applications of Optimistic Mirror Descent, an online learning algorithm based on ...
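For orientation, here is a minimal sketch of the Euclidean special case of Optimistic Mirror Descent, often called optimistic gradient descent/ascent, in which each player's update uses the previous gradient as a predictor. The bilinear example and the step size are illustrative assumptions, not details taken from the entry above.

```python
def optimistic_gda(grad_x, grad_y, x0, y0, lr=0.1, steps=500):
    """Optimistic gradient descent/ascent: each player steps along
    2*g_t - g_{t-1}, i.e. the last gradient serves as a predictor
    (Euclidean instance of Optimistic Mirror Descent; sketch only)."""
    x, y = float(x0), float(y0)
    gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x -= lr * (2 * gx - gx_prev)   # minimizing player
        y += lr * (2 * gy - gy_prev)   # maximizing player
        gx_prev, gy_prev = gx, gy
    return x, y

# Bilinear saddle point f(x, y) = x * y: plain gradient descent/ascent
# cycles or diverges, while the optimistic iterates converge to (0, 0).
print(optimistic_gda(lambda x, y: y, lambda x, y: x, 1.0, 1.0))
```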
A growing number of learning methods are actually differentiable games whose players optimise multip...
A growing number of learning methods are actually differentiable games whose players optimise multip...
In this paper, we propose a second-order extension of the continuous-time game-theoretic mirror desc...
Online Mirror Descent (OMD) is an important and widely used class of adaptive l...
Learning in stochastic games is a notoriously difficult prob...
Entropy regularized optimal transport (EOT) distance and its symmetric normalization, known as the S...