Abstract. In this paper, we present a multi-step memory gradient method with Goldstein line search for unconstrained optimization problems and prove its global convergence under mild conditions. We also prove the linear convergence rate of the new method when the objective function is uniformly convex. Numerical results show that the new algorithm is well suited to large-scale optimization problems and is more stable than other similar methods in practical computation.
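To make the two ingredients concrete, the following is a minimal sketch of a generic m-step memory gradient iteration with a Goldstein line search. The Goldstein acceptance conditions are standard; the direction update, the damping coefficient eta, and the safeguard fallback are hypothetical illustrative choices, since the abstract does not specify the paper's exact formulas.

```python
import numpy as np

def goldstein_step(f, x, g, d, rho=0.25, t0=1.0, max_iter=50):
    """Bisection-style Goldstein line search (illustrative, not the paper's exact rule).

    Accepts a step t satisfying the Goldstein conditions
        f(x) + (1 - rho) * t * g.d  <=  f(x + t*d)  <=  f(x) + rho * t * g.d,
    with 0 < rho < 1/2 and g.d < 0 (descent direction)."""
    fx, gd = f(x), float(g @ d)
    lo, hi, t = 0.0, np.inf, t0
    for _ in range(max_iter):
        ft = f(x + t * d)
        if ft > fx + rho * t * gd:            # step too long: sufficient decrease fails
            hi = t
            t = 0.5 * (lo + hi)
        elif ft < fx + (1 - rho) * t * gd:    # step too short: Goldstein lower bound fails
            lo = t
            t = 2.0 * t if hi == np.inf else 0.5 * (lo + hi)
        else:
            return t
    return t

def memory_gradient(f, grad, x0, m=3, eta=0.1, tol=1e-6, max_iter=1000):
    """Illustrative m-step memory gradient method: the search direction mixes
    the steepest-descent direction with the m most recent directions. The
    damping coefficient eta below is a hypothetical choice, not the paper's."""
    x = np.asarray(x0, dtype=float)
    memory = []                               # last m search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + sum(eta / len(memory) * d_old for d_old in memory) if memory else -g
        if g @ d >= 0:                        # safeguard: fall back to steepest descent
            d = -g
        t = goldstein_step(f, x, g, d)
        x = x + t * d
        memory = (memory + [d])[-m:]          # keep only the m most recent directions
    return x
```

The two-sided Goldstein test rules out both overly long steps (upper inequality) and overly short ones (lower inequality), which is what supports the global convergence argument; reusing the last m directions is what gives the method its memory while still requiring only gradient evaluations.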