In this paper, we propose a class of faster adaptive Gradient Descent Ascent (GDA) methods for solving nonconvex-strongly-concave minimax problems by using unified adaptive matrices, which include almost all existing coordinate-wise and global adaptive learning rates. In particular, we provide an effective convergence analysis framework for our adaptive GDA methods. Specifically, we propose a fast Adaptive Gradient Descent Ascent (AdaGDA) method based on the basic momentum technique, which reaches a lower gradient complexity of $O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary point without large batches, improving the existing results of adaptive GDA methods by a factor of $O(\sqrt{\kappa})$. At the same time, we pres...
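To make the update template concrete, below is a minimal sketch of one momentum-based adaptive GDA step on $\min_{x} \max_{y} f(x, y)$. The function name `adagda_step`, all hyperparameter values, and the RMSProp-style diagonal second-moment accumulators are illustrative assumptions standing in for the paper's unified adaptive matrices, not the exact algorithm; a global scalar accumulator (as in AdaGrad-Norm) would fit the same template.

```python
import numpy as np

def adagda_step(x, y, m_x, m_y, v_x, v_y, grad_x, grad_y,
                lr_x=1e-3, lr_y=1e-3, beta=0.9, rho=0.99, eps=1e-8):
    """One illustrative momentum-based adaptive GDA update (a sketch,
    not the paper's exact method).

    x, y          : minimization / maximization variables
    m_x, m_y      : momentum estimates of the stochastic gradients
    v_x, v_y      : second-moment accumulators defining diagonal
                    adaptive matrices (one assumed instance of the
                    unified adaptive-matrix template)
    grad_x, grad_y: stochastic gradients of f(x, y) at (x, y)
    """
    # Basic momentum: moving averages of the stochastic gradients.
    m_x = beta * m_x + (1 - beta) * grad_x
    m_y = beta * m_y + (1 - beta) * grad_y
    # Coordinate-wise adaptive matrices, here A = diag(sqrt(v) + eps).
    v_x = rho * v_x + (1 - rho) * grad_x ** 2
    v_y = rho * v_y + (1 - rho) * grad_y ** 2
    # Preconditioned descent on x and ascent on y.
    x = x - lr_x * m_x / (np.sqrt(v_x) + eps)
    y = y + lr_y * m_y / (np.sqrt(v_y) + eps)
    return x, y, m_x, m_y, v_x, v_y
```

In this sketch, no large batch is needed: each step consumes a single stochastic gradient pair, with the momentum averages smoothing the noise.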
In recent years, federated minimax optimization has attracted growing interest due to its extensive ...
This paper proposes the Doubly Compressed Momentum-assisted stochastic gradient tracking algorithm $...
This dissertation addresses the machine learning research topics outlined below. We developed the ...
Standard gradient descent-ascent (GDA)-type algorithms can only find stationary points in nonconvex ...
We consider nonconvex-concave minimax problems, $\min_{\mathbf{x}} \max_{\mathbf{y} \in \mathcal{Y}}...
In optimization, one notable gap between theoretical analyses and practice is that converging algori...
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms for solving min-ma...
Large scale convex-concave minimax problems arise in numerous applications, including game theory, r...
Adaptive gradient methods have shown excellent performance for solving many machine learning proble...
The nonconvex-concave min-max problem arises in many machine learning applications, including minimizing ...
Many modern machine learning algorithms such as generative adversarial networks (GANs) and adversari...
Alternating gradient-descent-ascent (AltGDA) is an optimization algorithm that has been widely used ...
Nesterov's accelerated gradient (AG) is a popular technique to optimize objective functions comprisi...
We study convergence rates of AdaGrad-Norm as an exemplar of adaptive stochastic gradient methods (S...
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as gene...