Many modern machine learning algorithms such as generative adversarial networks (GANs) and adversarial training can be formulated as minimax optimization. Gradient descent ascent (GDA) is the most commonly used algorithm due to its simplicity. However, GDA can converge to non-optimal minimax points. We propose a new minimax optimization framework, GDA-AM, that views the GDA dynamics as a fixed-point iteration and solves it using Anderson Mixing to converge to the local minimax. It addresses the diverging issue of simultaneous GDA and accelerates the convergence of alternating GDA. We show theoretically that the algorithm can achieve global convergence for bilinear problems under mild conditions. We also empirically show that GDA-AM solves a v...
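The core idea of this abstract (treating the GDA update as a fixed-point map and accelerating it with Anderson Mixing) can be illustrated on the bilinear toy problem f(x, y) = x*y. The sketch below is a reconstruction under stated assumptions, not the authors' implementation: the step size, memory size, and regularized least-squares solve are illustrative choices.

```python
import numpy as np

eta = 0.5  # assumed step size for this toy example

def sim_gda(z):
    # Simultaneous GDA on f(x, y) = x*y: both players step from the
    # same iterate; the resulting linear map spirals away from the saddle.
    x, y = z
    return np.array([x - eta * y, y + eta * x])

def alt_gda(z):
    # Alternating GDA: the y-player sees the freshly updated x.
    x, y = z
    x_new = x - eta * y
    return np.array([x_new, y + eta * x_new])

def anderson(g, z0, m=3, iters=20, tol=1e-8, reg=1e-10):
    """Anderson Mixing applied to the fixed-point map g (GDA-AM sketch)."""
    zs, gs = [z0], [g(z0)]
    z = z0
    for _ in range(iters):
        mk = min(m, len(zs))
        Z = np.column_stack(zs[-mk:])
        G = np.column_stack(gs[-mk:])
        R = G - Z  # residuals g(z_i) - z_i of the last mk iterates
        # Constrained least squares: min ||R a|| s.t. sum(a) = 1,
        # solved via a ∝ (R^T R + reg*I)^{-1} 1 (regularized for rank deficiency).
        M = R.T @ R + reg * np.eye(mk)
        a = np.linalg.solve(M, np.ones(mk))
        a /= a.sum()
        z = G @ a  # mixed iterate
        gz = g(z)
        if np.linalg.norm(gz - z) < tol:
            break
        zs.append(z)
        gs.append(gz)
    return z

z0 = np.array([1.0, 1.0])
z = z0
for _ in range(15):
    z = sim_gda(z)
print(np.linalg.norm(z))  # grows: plain simultaneous GDA diverges
print(np.linalg.norm(anderson(alt_gda, z0)))  # near 0: AM reaches the saddle
```

For this bilinear problem the alternating-GDA map is linear, so full-memory Anderson Mixing behaves like a Krylov method and locates the saddle at the origin in a handful of iterations, while plain simultaneous GDA multiplies the iterate norm by sqrt(1 + eta^2) at every step.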
One of the fundamental limitations of artificial neural network learning by gradient descent is the ...
© 2018 Curran Associates Inc. All rights reserved. Motivated by applications in Optimization, Game Th...
Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent ...
Recent years have seen a surge of interest in building learning machines through adversarial training...
Standard gradient descent-ascent (GDA)-type algorithms can only find stationary points in nonconvex ...
Alternating gradient-descent-ascent (AltGDA) is an optimization algorithm that has been widely used ...
In this paper, we propose a class of faster adaptive Gradient Descent Ascent (GDA) methods for solvin...
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as gene...
In recent years, federated minimax optimization has attracted growing interest due to its extensive ...
Many fundamental machine learning tasks can be formulated as min-max optimization. This motivates us...
We consider nonconvex-concave minimax problems, $\min_{\mathbf{x}} \max_{\mathbf{y} \in \mathcal{Y}}...
Minimax problems, such as generative adversarial network, adversarial training, and fair training, a...
In optimization, one notable gap between theoretical analyses and practice is that converging algori...
Data-driven machine learning methods have achieved impressive performance for many industrial applic...
We study a variant of a recently introduced min-max optimization framework where the max-player is c...