In this paper, we present an analysis of the convergence rate of gradient descent with a varying step size on strongly convex functions. We assume that a line search has been carried out and produces a step size that varies in a known interval. The algorithm is then modeled as a linear parameter-varying (LPV) system. Building on prior work that uses Integral Quadratic Constraints (IQCs) to analyze optimization algorithms, we construct a linear matrix inequality (LMI) condition to numerically obtain convergence rates. For the LPV system, this condition is solved by gridding the step size interval. Our results indicate that convergence is certified only on a restricted subset of the step size interval. Further, when this interval r...
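To make the approach concrete, the following is a minimal Python sketch of an LMI feasibility test gridded over the step size interval. It assumes the standard sector IQC for the gradient of an m-strongly convex, L-smooth function (as in the IQC framework cited above) and shares a common Lyapunov variable across the grid points as one simple way to handle the parameter variation; the function names, grid, solver choice, and bisection scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed names): certify a linear convergence rate rho for
#   x_{k+1} = x_k - alpha_k * grad f(x_k),
# where f is m-strongly convex and L-smooth and alpha_k varies over a grid.
import numpy as np
import cvxpy as cp

def rate_certified(rho, alphas, m, L):
    """LMI feasibility at every gridded step size, with a common Lyapunov
    variable P so the certificate tolerates a step size that varies over
    the grid from iteration to iteration."""
    P = cp.Variable()                             # scalar Lyapunov certificate
    lams = cp.Variable(len(alphas), nonneg=True)  # one IQC multiplier per grid point
    # Sector IQC for u = grad f(x): (u - m*x) * (L*x - u) >= 0.
    M_sec = np.array([[-2.0 * m * L, m + L],
                      [m + L,        -2.0]])
    cons = [P >= 1.0]                             # normalization of P > 0
    for i, a in enumerate(alphas):
        # Gradient descent as a (scalar) linear system: A = 1, B = -a.
        lmi = P * np.array([[1.0 - rho ** 2, -a],
                            [-a,             a * a]]) + lams[i] * M_sec
        cons.append(lmi << 0)                     # negative semidefinite
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

def best_certified_rate(alphas, m, L, tol=1e-3):
    """Bisect on rho for the smallest rate the LMI can certify."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if rate_certified(mid, alphas, m, L) else (mid, hi)
    return hi

# Example: m = 1, L = 10, line-search step sizes assumed to lie in [0.10, 0.18].
print(best_certified_rate(np.linspace(0.10, 0.18, 9), m=1.0, L=10.0))
```

For a fixed step size alpha, this test recovers the classical worst-case contraction factor max(|1 - alpha*m|, |1 - alpha*L|); the gridded, common-P version instead certifies a single rate valid across the whole interval.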
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) ...
The study of first-order optimization is sensitive to the assumptions made on the objective function...
In the paper, we propose a class of faster adaptive Gradient Descent Ascent (GDA) methods for solvin...
The framework of Integral Quadratic Constraints (IQCs) is used to present a performance analysis for...
Based on a result by Taylor et al. (J Optim Theory Appl 178(2):455–476, 2018) on the attainable conv...
We suggest simple modificati...
This manuscript develops a new framework to analyze and design iterative optimization algorithms bu...
This paper revisits the Polyak step size schedule for convex optimization problems, proving that a s...
Gradient descent is slow to converge for ill-conditioned problems and non-convex problems. An import...
The convergence behavior of gradient methods for minimizing convex differentiable functions is one o...
This paper considers some aspects of a gradient projection method proposed by Goldstein [1]...
We show that the basic stochastic gradient method applied to a strongly-convex differentiable functi...
In this paper, we propose an interior-point method for linearly constrained optimization problems (p...
We extend the previous analysis of Schmidt et al. [2011] to derive the linear convergence rate obtai...