The study of first-order optimization is sensitive to the assumptions made on the objective functions. These assumptions induce complexity classes which play a key role in worst-case analysis, including the fundamental concept of algorithm optimality. Recent work argues that strong convexity and smoothness, popular assumptions in the literature, lead to a pathological definition of the condition number (Guille-Escuret et al., 2021). Motivated by this result, we focus on the class of functions satisfying a lower restricted secant inequality and an upper error bound. On top of being robust to the aforementioned pathological behavior and including some non-convex functions, this pair of conditions displays interesting geometrical properties. In pa...
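For concreteness, these two conditions are usually written as follows (a sketch of the standard definitions, assuming constants $\mu, L > 0$ and writing $x_p$ for the projection of $x$ onto the set of minimizers; the precise formulation in the cited work may differ in details): the lower restricted secant inequality requires $\langle \nabla f(x),\, x - x_p \rangle \ge \mu \, \|x - x_p\|^2$ for all $x$, and the upper error bound requires $\|\nabla f(x)\| \le L \, \|x - x_p\|$ for all $x$. Together they play the roles that strong convexity and smoothness play in the classical analysis, without requiring convexity.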
We consider the gradient (or steepest) descent method with exact line search applied to a strongly c...
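As a rough illustration of the method that snippet refers to: on a strongly convex quadratic, exact line search along the negative gradient has a closed form, which the following minimal sketch uses (the instance, seed, and constants here are illustrative, not taken from the cited work).

```python
import numpy as np

# Toy instance: f(x) = 0.5 * x^T A x - b^T x with A symmetric positive
# definite, so f is smooth and strongly convex. Along the direction -g,
# exact line search has the closed form t* = (g^T g) / (g^T A g).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 0.5 * np.eye(5)    # strong convexity parameter >= 0.5
b = rng.standard_normal(5)
x_star = np.linalg.solve(A, b)   # unique minimizer, for reference

x = np.zeros(5)
for k in range(50):
    g = A @ x - b                # gradient of the quadratic
    if np.linalg.norm(g) < 1e-12:
        break
    t = (g @ g) / (g @ (A @ g))  # exact line search step size
    x = x - t * g

print("distance to minimizer:", np.linalg.norm(x - x_star))
```

The closed-form step is what makes quadratics the standard testbed for worst-case analyses of exact line search.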
For smooth and strongly convex optimization, the optimal iteration complexity of the gradient-based...
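The optimal complexity alluded to here, $O(\sqrt{\kappa}\,\log(1/\epsilon))$ with condition number $\kappa = L/\mu$, is attained by accelerated gradient methods. Below is a minimal sketch of Nesterov's method for the strongly convex case (the quadratic instance and constants are illustrative assumptions, not from the cited paper).

```python
import numpy as np

# Nesterov's accelerated gradient method on an L-smooth, mu-strongly
# convex quadratic f(x) = 0.5 * x^T A x - b^T x.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + np.eye(6)          # eigenvalues >= 1, so mu >= 1
b = rng.standard_normal(6)
eigs = np.linalg.eigvalsh(A)
L, mu = eigs.max(), eigs.min()
kappa = L / mu
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum coefficient

x = np.zeros(6)
y = x.copy()
for k in range(200):
    g = A @ y - b                 # gradient at the extrapolated point
    x_new = y - g / L             # gradient step with step size 1/L
    y = x_new + beta * (x_new - x)  # momentum extrapolation
    x = x_new
print("gradient norm:", np.linalg.norm(A @ x - b))
```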
We show that the exact worst-case performance of fixed-step first-order methods for smooth (possibly...
The convergence behavior of gradient methods for minimizing convex differentiable functions is one o...
Convex optimization, the study of minimizing convex functions over convex sets, is host to a multit...
Classical global convergence results for first-order methods rely on uniform smoothness and the Łojasiewicz...
Motivated by recent work of Renegar, we present new computational methods and associated computation...
This paper shows that error bounds can be used as effective tools for deriving complexity results fo...
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
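For orientation, a minimal online gradient descent loop is sketched below; the domain, losses, and step-size schedule are illustrative assumptions, not the algorithm from the cited paper. For convex losses with bounded gradients, step sizes $\eta_t \propto 1/\sqrt{t}$ yield $O(\sqrt{T})$ regret, while for $\sigma$-strongly convex losses, $\eta_t = 1/(\sigma t)$ improves this to $O(\log T)$.

```python
import numpy as np

# Online gradient descent on the unit ball with strongly convex round
# losses f_t(x) = 0.5 * ||x - z_t||^2 and step sizes eta_t = 1/t
# (sigma = 1), the schedule behind logarithmic-regret guarantees.
rng = np.random.default_rng(0)
d, T = 3, 1000
x = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    z = rng.standard_normal(d)           # adversary's data for round t
    total_loss += 0.5 * np.sum((x - z) ** 2)
    grad = x - z
    x = x - grad / t                     # eta_t = 1/(sigma * t)
    nrm = np.linalg.norm(x)
    if nrm > 1.0:                        # project back onto the unit ball
        x = x / nrm
print("average loss:", total_loss / T)
```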
An influential line of recent work has focused on the generalization properties of unregularized gra...
We analyze worst-case convergence guarantees of first-order optimization methods over a function cla...
We provide a lower bound showing that the $O(1/k)$ convergence rate of the NoLips method (a.k.a. Bre...
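A Bregman gradient step of the NoLips type replaces the Euclidean distance in the gradient update with a Bregman divergence $D_h$ of a reference function $h$. The sketch below assumes the negative-entropy kernel $h(x) = \sum_i x_i \log x_i$ on the simplex, for which the update has the multiplicative (exponentiated gradient) form; the problem instance and step size are illustrative only.

```python
import numpy as np

# One family of Bregman gradient updates:
#   x_{k+1} = argmin_x <grad f(x_k), x> + (1/lam) * D_h(x, x_k),
# which for the negative-entropy h on the simplex becomes a
# multiplicative step followed by renormalization.
def grad_f(x, A, b):
    return A.T @ (A @ x - b)      # gradient of 0.5 * ||Ax - b||^2

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
x = np.full(4, 0.25)              # start at the simplex center
lam = 0.1                         # step size; the NoLips analysis ties
                                  # this to a relative smoothness constant
for k in range(200):
    x = x * np.exp(-lam * grad_f(x, A, b))
    x = x / x.sum()               # Bregman projection onto the simplex
print("objective:", 0.5 * np.sum((A @ x - b) ** 2))
```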
Consider the problem of minimizing functions that are Lipschitz and strongly convex, but not necessa...
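In that nonsmooth strongly convex setting, the classical baseline is the subgradient method with decaying steps, sketched below on an illustrative objective (the instance and constants are assumptions for the demo). For a $\sigma$-strongly convex $f$, step sizes $\alpha_t = 2/(\sigma(t+1))$ give an $O(1/t)$ rate on function values.

```python
import numpy as np

# Subgradient method for the strongly convex but nonsmooth objective
# f(x) = 0.5 * ||x - c||^2 + ||x||_1 (sigma = 1 from the quadratic part).
rng = np.random.default_rng(0)
d = 5
c = rng.standard_normal(d)

def f(x):
    return 0.5 * np.sum((x - c) ** 2) + np.sum(np.abs(x))

def subgrad(x):
    # (x - c) is the gradient of the smooth part; sign(x) is a valid
    # subgradient of ||x||_1 (choosing 0 at the kinks).
    return (x - c) + np.sign(x)

x = np.zeros(d)
sigma = 1.0
best = f(x)
for t in range(1, 2001):
    x = x - (2.0 / (sigma * (t + 1))) * subgrad(x)
    best = min(best, f(x))
print("best objective value:", best)
```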