This paper examines the convergence of a broad class of distributed learning dynamics for games with continuous action sets. The dynamics under study comprise a multi-agent generalization of Nesterov's dual averaging (DA) method, a primal-dual mirror descent method that has recently seen a major resurgence in the field of large-scale optimization and machine learning. To account for settings with high temporal variability and uncertainty, we adopt a continuous-time formulation of dual averaging and we investigate the dynamics' long-run behavior when players have either noiseless or noisy information on their payoff gradients. In both the deterministic and stochastic regimes, we establish sublinear rates of convergence o...
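To make the entry above concrete, here is a minimal discrete-time sketch of dual averaging in a two-player zero-sum game on the simplex, using the entropic mirror map (softmax). The payoff matrix (rock-paper-scissors), the 1/√t step sizes, and the use of time-averaged play are illustrative assumptions, not details taken from the paper, which studies a continuous-time formulation.

```python
import numpy as np

# Illustrative payoff matrix: rock-paper-scissors (zero-sum).
A = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])

def softmax(y):
    """Entropic mirror map: dual scores -> point on the simplex."""
    z = np.exp(y - y.max())  # shift for numerical stability
    return z / z.sum()

# Dual averaging: each player aggregates its own payoff gradients
# in a dual variable and plays the mirror image of the aggregate.
y1 = np.zeros(3)
y2 = np.zeros(3)
avg1 = np.zeros(3)  # time-averaged play
avg2 = np.zeros(3)
T = 5000
for t in range(1, T + 1):
    x1, x2 = softmax(y1), softmax(y2)
    step = 1.0 / np.sqrt(t)       # assumed step-size schedule
    y1 += step * (A @ x2)         # player 1's payoff gradient
    y2 += step * (-A.T @ x1)      # player 2's payoff gradient (zero-sum)
    avg1 += x1
    avg2 += x2
avg1 /= T
avg2 /= T
# The time-averaged strategies approach the uniform Nash equilibrium.
```

The day-to-day play in this game cycles around the equilibrium; it is the ergodic (time-averaged) play that converges, which is the standard behavior for no-regret dynamics in zero-sum games.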
Fudenberg and Kreps (1993) consider adaptive learning processes, in the spirit of fictitious play, for...
In game-theoretic learning, several agents are simultaneously following their ...
39 pages, 6 figures, 1 table. We develop a unified stochastic approximation framework for analyzing th...
Online Mirror Descent (OMD) is an important and widely used class of adaptive l...
While payoff-based learning models are almost exclusively devised for finite a...
In this paper, we examine the equilibrium tracking and convergence properties ...
Motivated by the recent applications of game-theoretical learning to the design of distributed contr...
We study how long it takes for large populations of interacting agents to come close to Nash equilib...
Motivated by the recent applications of game-theoretical learning techniques to the design of distri...
Motivated by the scarcity of accurate payoff feedback in practical application...
One issue in multi-agent co-adaptive learning concerns convergence. When two (or more) agents play a...
We consider a system of single- or double-integrator agents playing a generalized Nash game over a n...
Starting from a heuristic learning scheme for N-person games, we derive a new ...