Motivated by applications in Optimization, Game Theory, and the training of Generative Adversarial Networks, the convergence properties of first-order methods in min-max problems have received extensive study. It has been recognized that they may cycle, and there is no good understanding of their limit points when they do not. When they converge, do they converge to local min-max solutions? We characterize the limit points of two basic first-order methods, namely Gradient Descent/Ascent (GDA) and Optimistic Gradient Descent Ascent (OGDA). We show that both dynamics avoid unstable critical points for almost all initializations. Moreover, for small step sizes and under mild assumptions, the se...
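To make the two dynamics in this abstract concrete, here is a minimal NumPy sketch (illustrative only, not code from the paper) of GDA and OGDA on the classic bilinear objective f(x, y) = xy, whose unique min-max solution is (0, 0). The step size, horizon, and initialization are arbitrary choices; the qualitative contrast (GDA spirals outward while OGDA contracts) is the well-known phenomenon the abstract alludes to.

```python
import numpy as np

# Toy bilinear min-max problem: min_x max_y f(x, y) = x * y.
# Its unique min-max solution is (x, y) = (0, 0).
def grad(x, y):
    return y, x  # (df/dx, df/dy)

eta = 0.1  # step size (arbitrary small value for illustration)

# Gradient Descent/Ascent (GDA): simultaneous descent in x, ascent in y.
x, y = 1.0, 1.0
for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - eta * gx, y + eta * gy
print("GDA  distance from (0,0):", np.hypot(x, y))  # grows: GDA spirals out

# Optimistic GDA (OGDA): each step uses 2*(current grad) - (previous grad).
x, y = 1.0, 1.0
gx_prev, gy_prev = grad(x, y)
for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print("OGDA distance from (0,0):", np.hypot(x, y))  # shrinks: OGDA converges
```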
We study a variant of a recently introduced min-max optimization framework where the max-player is c...
Many fundamental machine learning tasks can be formulated as min-max optimization. This motivates us...
Standard gradient descent-ascent (GDA)-type algorithms can only find stationary points in nonconvex ...
Motivated by applications in Game Theory, Optimizati...
Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent ...
We study the iteration complexity of the optim...
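The sentence above is cut off, but optimistic gradient methods are standardly analyzed alongside the extra-gradient (EG) method, which achieves a similar stabilizing effect via an explicit exploratory half-step. Below is a minimal sketch of EG on the same bilinear toy problem as above; it is an illustration under that assumed setup, not the paper's algorithm or analysis.

```python
import numpy as np

# Extra-gradient (EG) on the bilinear toy problem f(x, y) = x * y.
def grad(x, y):
    return y, x

eta = 0.1
x, y = 1.0, 1.0
for _ in range(200):
    # 1) Exploratory half-step from the current point.
    gx, gy = grad(x, y)
    xh, yh = x - eta * gx, y + eta * gy
    # 2) Full step from the ORIGINAL point using the half-step's gradient.
    gxh, gyh = grad(xh, yh)
    x, y = x - eta * gxh, y + eta * gyh
print("EG distance from (0,0):", np.hypot(x, y))  # contracts toward 0
```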
Compared to minimization, the min-max optimization in machine learning applications is considerably ...
We investigate a structured class of nonconvex-nonconcave min-max problems exhibiting so-called ...
We present a new family of min-max optimization algorithms that automatically ...
Many modern machine learning algorithms such as generative adversarial networks (GANs) and adversari...
Nonconvex-concave min-max problems arise in many machine learning applications, including minimizing ...
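A canonical example of this problem class (offered here for illustration; the abstract's own list is truncated) is minimizing a pointwise maximum of finitely many, possibly nonconvex, losses. The standard reformulation exposes the min-max structure:

\[
\min_{x} \max_{1 \le i \le m} f_i(x) \;=\; \min_{x} \max_{y \in \Delta_m} \sum_{i=1}^{m} y_i f_i(x),
\qquad
\Delta_m = \Big\{ y \in \mathbb{R}^m : y \ge 0,\ \textstyle\sum_{i=1}^{m} y_i = 1 \Big\}.
\]

The inner objective is linear, hence concave, in y regardless of the f_i, so the problem is nonconvex-concave whenever the f_i are nonconvex.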
This note discusses proofs for convergence of first-order methods based on simple potential-function...
In optimization, one notable gap between theoretical analyses and practice is that converging algori...
We develop a general approach to convergence analysis of feasible descent methods in the presence of...
We provide several applications of Optimistic Mirror Descent, an online learning algorithm based on ...
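As an illustration of Optimistic Mirror Descent, here is a sketch of its best-known instantiation, the entropy-regularized case (optimistic multiplicative weights), run by both players of a zero-sum matrix game. The game matrix, step size, and horizon are arbitrary choices; with the negative-entropy regularizer and the previous gradient as the predictable sequence, the update reduces in logit form to the standard simplified step 2*g_t - g_{t-1} used below.

```python
import numpy as np

# Optimistic mirror descent with the entropy regularizer (optimistic
# multiplicative weights) on a zero-sum matrix game min_p max_q p^T A q.
# Illustrative sketch: the matrix, step size, and horizon are arbitrary.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
eta, T = 0.1, 2000

def softmax(logits):
    w = np.exp(logits - logits.max())  # shift by max for numerical stability
    return w / w.sum()

n, m = A.shape
p_log, q_log = np.zeros(n), np.zeros(m)
gp_prev, gq_prev = np.zeros(n), np.zeros(m)
p_sum, q_sum = np.zeros(n), np.zeros(m)
for _ in range(T):
    p, q = softmax(p_log), softmax(q_log)
    p_sum += p
    q_sum += q
    gp, gq = A @ q, A.T @ p  # gradients of p^T A q w.r.t. p and q
    # Optimistic step: current gradient plus the correction (g_t - g_{t-1})
    # contributed by the predictable-sequence term.
    p_log -= eta * (2 * gp - gp_prev)  # min player descends
    q_log += eta * (2 * gq - gq_prev)  # max player ascends
    gp_prev, gq_prev = gp, gq

# Duality gap of the averaged strategies; a small gap means near-equilibrium.
p_avg, q_avg = p_sum / T, q_sum / T
gap = (A.T @ p_avg).max() - (A @ q_avg).min()
print("duality gap:", gap)
```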