In smooth strongly convex optimization, or in the presence of H\"olderian error bounds, knowledge of the curvature parameter is critical for obtaining simple methods with accelerated rates. In this work, we study a class of methods, based on Polyak steps, where this knowledge is substituted by that of the optimal value, $f_*$. We first show convergence bounds that slightly improve on those previously known for the classical case of simple gradient descent with Polyak steps; we then derive an accelerated gradient method with Polyak steps and momentum, along with convergence guarantees.
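As a point of reference, a minimal sketch of the classical (non-accelerated) scheme is given below, assuming a differentiable objective with known optimal value $f_*$: each iteration takes a gradient step with the Polyak step size $\gamma_k = (f(x_k) - f_*)/\|\nabla f(x_k)\|^2$. The function names and the quadratic test problem are illustrative only; the accelerated variant with momentum studied in this work is not reproduced here.

import numpy as np

def polyak_gradient_descent(f, grad, x0, f_star, n_iters=100):
    """Gradient descent with the classical Polyak step size.

    The step gamma_k = (f(x_k) - f_star) / ||grad f(x_k)||^2 replaces
    knowledge of the curvature parameter by that of the optimal value f_star.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad(x)
        gap = f(x) - f_star
        sq_norm = float(np.dot(g, g))
        if gap <= 0.0 or sq_norm == 0.0:
            break  # optimal up to numerical precision
        x = x - (gap / sq_norm) * g  # Polyak step
    return x

# Illustrative usage: an ill-conditioned quadratic with f_star = 0.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_hat = polyak_gradient_descent(f, grad, np.array([3.0, -2.0]), f_star=0.0)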