In this paper, we consider multi-agent learning via online gradient descent in a class of games called λ-cocoercive games, a fairly broad class of games that admits many Nash equilibria and that properly includes unconstrained strongly monotone games. We characterize the finite-time last-iterate convergence rate for joint OGD learning on λ-cocoercive games; further, building on this result, we develop a fully adaptive OGD learning algorithm that does not require any knowledge of the problem parameters (e.g., the cocoercivity constant λ) and show, via a novel double-stopping-time technique, that this adaptive algorithm achieves the same finite-time last-iterate convergence rate as its non-adaptive counterpart. Subsequently, we extend OGD lea...
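The paper's adaptive algorithm requires the double-stopping-time machinery described above, but the underlying joint OGD dynamic it builds on can be illustrated with a minimal sketch. The two-player quadratic game and the fixed step size below are hypothetical choices made only for illustration (the game is strongly monotone, hence cocoercive), and this is not the paper's adaptive scheme:

```python
# Minimal sketch of joint online gradient descent (OGD) in a smooth
# two-player game. Each player simultaneously takes a gradient step on
# its own cost, holding the opponent's last action fixed.

def grad_p1(x1, x2):
    # Player 1 minimizes f1(x1, x2) = x1**2 + x1 * x2
    return 2.0 * x1 + x2

def grad_p2(x1, x2):
    # Player 2 minimizes f2(x1, x2) = x2**2 - x1 * x2
    return 2.0 * x2 - x1

def joint_ogd(steps=200, eta=0.1):
    x1, x2 = 1.0, -1.0          # arbitrary initial actions
    for _ in range(steps):
        g1 = grad_p1(x1, x2)
        g2 = grad_p2(x1, x2)
        # Simultaneous (joint) updates: both players move at once.
        x1, x2 = x1 - eta * g1, x2 - eta * g2
    return x1, x2
```

For this particular game the unique Nash equilibrium is (0, 0), and the last iterate of the joint dynamic converges to it; the adaptive variant in the paper replaces the fixed `eta` with a data-driven step-size schedule.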
This paper examines the problem of multi-agent learning in N-person non-cooper...
We study the repeated, non-atomic routing game, in which selfish players make a sequence of routing...
In this paper, we examine the convergence rate of a wide range of regularized ...
Online Mirror Descent (OMD) is an important and widely used class of adaptive l...
We examine the long-run behavior of multi-agent online learning in gam...
We develop a unified stochastic approximation framework for analyzing th...
We consider a game-theoretical multi-agent learning problem where the feedback...
This paper examines the convergence of a broad class of distributed learning dynamics for games with...
In this paper, we examine the equilibrium tracking and convergence properties ...
We consider online learning in multi-player smooth monotone games. Existing algorithms have limitati...
In this paper, we address the problem of convergence to Nash equilibria in games with rewards that a...
This paper studies a class of strongly monotone games involving non-cooperative agents that optimize...
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonic...