The Inexact Gradient Method with Memory (IGMM) can considerably outperform the Gradient Method by employing a piecewise linear lower model on the smooth part of the objective. However, this model cannot generally be solved exactly, and IGMM relies on an inaccuracy term $\delta$. The need for a bound on inexactness narrows the range of problems to which IGMM can be applied. In addition, $\delta$ carries over to the worst-case convergence rate. In this work, we show how a simple modification of IGMM eliminates the reliance on $\delta$ for convergence. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method (NoLips) and has a worst-case rate of $O(1/k)$, recently shown to be optimal for its c...
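For context, the generic gradient-method-with-memory step underlying both IGMM and EGMM keeps a small bundle of past oracle calls and minimizes the piecewise linear lower model plus a quadratic proximity term. Below is a minimal Python sketch of that step (names such as gmm_step are chosen here for illustration, not taken from the paper); the dual subproblem is handed to a generic solver, whereas IGMM must account for solving it only approximately, which is exactly where $\delta$ enters.

    import numpy as np
    from scipy.optimize import minimize

    def gmm_step(x, bundle, L):
        # bundle: list of (z_i, f(z_i), grad_f(z_i)) triples from past oracle calls.
        # The piecewise linear lower model is  l(y) = max_i f(z_i) + <g_i, y - z_i>,
        # and the step minimizes  l(y) + (L/2)||y - x||^2.  Its dual is a small
        # concave quadratic over the simplex in the bundle weights lam.
        vals = np.array([fz + gz @ (x - z) for (z, fz, gz) in bundle])
        G = np.array([gz for (_, _, gz) in bundle])
        def neg_dual(lam):
            s = G.T @ lam
            return -(lam @ vals - (s @ s) / (2.0 * L))
        m = len(bundle)
        res = minimize(neg_dual, np.full(m, 1.0 / m), method="SLSQP",
                       bounds=[(0.0, 1.0)] * m,
                       constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}])
        return x - (G.T @ res.x) / L

    # Illustrative usage on a least-squares objective with a bundle of 5 past gradients.
    A = np.random.randn(20, 5); b = np.random.randn(20)
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad_f = lambda x: A.T @ (A @ x - b)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad_f
    x, bundle = np.zeros(5), []
    for _ in range(50):
        bundle = (bundle + [(x, f(x), grad_f(x))])[-5:]   # keep the last 5 oracle calls
        x = gmm_step(x, bundle, L)

Capping the bundle size keeps the subproblem dimension small, so the per-iteration cost stays close to that of the plain Gradient Method.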
Optimization is an important discipline of applied mathematics with far-reaching applications. Optim...
We propose an inexact variable-metric proximal point algorithm to accelerate g...
We present some extensions to the limited memory steepest descent method based...
The recently introduced Gradient Methods with Memory use a subset of the past oracle information to ...
In this paper, we consider gradient methods for minimizing smooth convex functions, which employ the...
The memory gradient method is used for unconstrained optimization, especially large-scale problems. ...
Noise is inherent in many optimization methods such as stochastic gradient methods, zeroth-order me...
In this paper, we present a multi-step memory gradient method with Goldstein line search for...
We provide a lower bound showing that the $O(1/k)$ convergence rate of the NoLips method (a.k.a. Bre...
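For reference, the NoLips (Bregman Distance Gradient) step mentioned above replaces the Euclidean proximity term of gradient descent with a Bregman distance D_h. A minimal sketch, assuming the Burg entropy kernel h(x) = -sum(log x_i) and a Poisson-type test objective (these choices and all names are illustrative, not taken from the cited paper):

    import numpy as np

    def nolips_step(x, g, lam):
        # One NoLips / Bregman gradient step with the Burg entropy kernel
        # h(x) = -sum(log x_i) on the positive orthant.  Setting the gradient of
        #     <g, u> + (1/lam) * D_h(u, x)
        # to zero gives grad h(u) = grad h(x) - lam*g, i.e. -1/u = -1/x - lam*g,
        # so u = x / (1 + lam*x*g) elementwise (well defined while 1 + lam*x*g > 0).
        return x / (1.0 + lam * x * g)

    # Illustrative use on a Poisson-type objective f(x) = sum(A@x - b*log(A@x)),
    # which is smooth relative to the Burg entropy with constant L = sum(b).
    rng = np.random.default_rng(0)
    A = rng.uniform(0.5, 1.5, size=(30, 5))
    b = rng.uniform(0.5, 1.5, size=30)
    grad_f = lambda x: A.T @ (1.0 - b / (A @ x))
    x, lam = np.ones(5), 1.0 / b.sum()     # step size 1/L from relative smoothness
    for _ in range(200):
        x = nolips_step(x, grad_f(x), lam)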
In this paper we present a new memory gradient method with trust region for unconstrained optimizati...
Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those met...
The mirror descent algorithm (MDA) generalizes gradient descent by using a Bregman divergence to rep...
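A minimal sketch of the mirror descent update mentioned above, assuming the negative-entropy mirror map on the probability simplex, for which the Bregman divergence is the KL divergence and the update has a closed form (the test problem and names are illustrative only):

    import numpy as np

    def entropic_mirror_descent(grad, x0, step, iters):
        # Mirror descent on the probability simplex with the negative-entropy
        # mirror map.  The update x_{k+1} proportional to x_k * exp(-step*grad(x_k))
        # solves  argmin_x <grad(x_k), x> + (1/step) * KL(x, x_k)  over the simplex.
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            g = grad(x)
            w = x * np.exp(-step * (g - g.max()))   # shift for numerical stability
            x = w / w.sum()
        return x

    # Illustrative use: minimize the quadratic <x, Qx>/2 over the simplex.
    Q = np.array([[2.0, 0.5], [0.5, 1.0]])
    x_star = entropic_mirror_descent(lambda x: Q @ x, [0.5, 0.5], step=0.2, iters=200)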
The gradient method with retards (GMR) is a nonmonotone iterative method recen...
Classically, the time complexity of a first-order...
Gradient descent is slow to converge for ill-conditioned problems and non-convex problems. An import...