An Adagrad-inspired class of algorithms for smooth unconstrained optimization is presented in which the objective function is never evaluated and yet the gradient norms decrease at least as fast as $\mathcal{O}(1/\sqrt{k+1})$ while second-order optimality measures converge to zero at least as fast as $\mathcal{O}(1/(k+1)^{1/3})$. This latter rate of convergence is shown to be essentially sharp and is identical to that known for more standard algorithms (like trust-region or adaptive-regularization methods) using both function and derivative evaluations. A related "divergent stepsize" method is also described, whose essentially sharp rate of convergence is slightly inferior. It is finally discussed how to obtain weaker second-order opt...
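For orientation, the sketch below illustrates the generic Adagrad-style recursion that such objective-function-free methods build on: only gradients are used and the stepsize is driven by the accumulated squared gradients, so the objective f is never evaluated. It is a minimal illustration with assumed names and parameters (offo_adagrad, zeta and the iteration budget are chosen here for exposition), not the specific algorithm class or stepsize rules analysed in the paper.

    import numpy as np

    def offo_adagrad(grad, x0, n_iters=1000, zeta=1e-8):
        # grad: callable returning the gradient at x; x0: starting point.
        # zeta is a small positive constant guarding against division by zero
        # (an illustrative choice, not a parameter from the paper).
        x = np.asarray(x0, dtype=float)
        accum = np.zeros_like(x)                 # running sum of squared gradients
        for _ in range(n_iters):
            g = grad(x)
            accum += g**2                        # coordinate-wise accumulation
            x = x - g / np.sqrt(zeta + accum)    # Adagrad-like scaling; no f(x) used
        return x

    # Example on a simple quadratic f(x) = ||x - 1||^2, whose gradient is 2*(x - 1):
    x_approx = offo_adagrad(lambda x: 2.0 * (x - 1.0), np.array([5.0, -3.0]))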
This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained ...
In this paper, we suggest new universal second-order methods for unconstrained minimization of twice...
An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization ...
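For context, the ARC framework referred to above computes its trial step by approximately minimizing a cubically regularized quadratic model; a standard sketch of that model, with $g_k$ the gradient, $B_k$ a symmetric Hessian approximation and $\sigma_k>0$ the adaptive regularization weight, is
$$ m_k(s) \;=\; f(x_k) + g_k^\top s + \tfrac{1}{2}\, s^\top B_k s + \tfrac{\sigma_k}{3}\,\|s\|^3, $$
the step being accepted and $\sigma_k$ updated according to the ratio of achieved to predicted decrease.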
An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order p...
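For orientation, $p$th-order adaptive regularization methods of this kind typically obtain the step by approximately minimizing the regularized Taylor model
$$ m_k(s) \;=\; T_{f,p}(x_k,s) + \frac{\sigma_k}{p+1}\,\|s\|^{p+1}, $$
where $T_{f,p}(x_k,s)$ is the $p$th-order Taylor expansion of $f$ around $x_k$ and $\sigma_k>0$ is adapted from iteration to iteration; the specific inexactness conditions of the paper are not reproduced here.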
We establish or refute the optimality of inexact second-order methods for unconstrained nonconvex optimization ...
An adaptive regularization algorithm for unconstrained nonconvex optimization is presented in which ...
A class of algorithms for unconstrained nonconvex optimization is considered where the value...
In order to be provably convergent towards a second-order stationary point, optimization methods app...
An adaptive regularization algorithm (AR$1p$GN) for unconstrained non...