In this thesis, we deal with descent methods for function minimization. We discuss three conditions for the choice of the step length (the Armijo, Goldstein, and Wolfe conditions) and four descent methods (the steepest descent method, Newton's method, the quasi-Newton BFGS method, and the conjugate gradient method). We discuss their convergence properties as well as their advantages and disadvantages. Finally, we test these methods numerically in the GNU Octave programming system on three different functions with different numbers of variables.
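As a concrete illustration of the step-length conditions and descent framework summarized above, the following is a minimal GNU Octave sketch of the steepest descent method with an Armijo backtracking line search. It is not the thesis's actual test code: the constants (c = 1e-4, rho = 0.5), the tolerances, and the Rosenbrock example at the end are assumptions chosen only for demonstration.

    % Illustrative sketch (not the thesis code): steepest descent with an
    % Armijo backtracking line search in GNU Octave.
    function [x, iter] = steepest_descent_armijo (f, grad, x, tol, maxit)
      % f     - handle returning the objective value
      % grad  - handle returning the gradient (column vector)
      % x     - starting point (column vector)
      c   = 1e-4;        % Armijo sufficient-decrease constant (assumed value)
      rho = 0.5;         % backtracking contraction factor (assumed value)
      for iter = 1:maxit
        g = grad (x);
        if (norm (g) < tol)
          break;
        endif
        p = -g;          % steepest descent direction
        alpha = 1;
        % Armijo condition: f(x + alpha*p) <= f(x) + c*alpha*g'*p
        while (f (x + alpha * p) > f (x) + c * alpha * (g' * p))
          alpha = rho * alpha;
        endwhile
        x = x + alpha * p;
      endfor
    endfunction

    % Example usage on the Rosenbrock function (an assumed test problem,
    % not necessarily one of the three functions used in the thesis):
    % f    = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    % grad = @(x) [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1)); 200*(x(2) - x(1)^2)];
    % [xmin, iters] = steepest_descent_armijo (f, grad, [-1.2; 1], 1e-6, 10000)

The same driver loop carries over to the other methods discussed in the thesis by replacing the search direction p; only the line-search condition and the direction computation change.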