The steepest descent method has a rich history and is one of the simplest and best-known methods for minimizing a function. Although the method is rarely used in practice because of its slow convergence rate, understanding its convergence properties can lead to a better understanding of many more sophisticated optimization methods. Here we give a short introduction and discuss some of the advantages and disadvantages of the method. Some recent results on modified versions of the steepest descent method are also discussed.
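To make the iteration concrete, here is a minimal sketch of steepest descent with a backtracking (Armijo) line search. The function name steepest_descent, the test quadratic, and the constants are illustrative choices, not taken from any of the works excerpted below.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=10_000):
    """Minimize f by steepest descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop at an (approximate) stationary point
            return x
        t = 1.0
        # Shrink the step until the Armijo sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - 0.5 * t * (g @ g):
            t *= 0.5
        x = x - t * g  # move along the negative gradient direction
    return x

# Illustration on an ill-conditioned quadratic, where the zigzagging of
# steepest descent makes its slow linear convergence visible.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(steepest_descent(f, grad, x0=[1.0, 1.0]))
```

On an ill-conditioned problem like this one, the iterates zigzag across the narrow valley of the objective, which is the slow convergence behaviour the abstracts below refer to.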
One of the most widespread first-order methods of optimum search is the steepest descent method, in...
The steepest descent method is a simple gradient method for optimization. The method has a slow convergence...
The method of steepest descent is used to minimize typical functionals from elasticity.
It is well known that the minimization of a smooth function f(x) is equivalent to minimizing its gr...
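The truncated sentence presumably refers to the standard first-order optimality fact; as a hedged reconstruction (not taken from the abstract itself), for a smooth convex f that attains its minimum,

```latex
\min_x f(x)
\quad\Longleftrightarrow\quad
\nabla f(x^*) = 0
\quad\Longleftrightarrow\quad
x^* \in \operatorname*{arg\,min}_x \tfrac{1}{2}\,\|\nabla f(x)\|^2 ,
```

so minimizing f can be recast as driving the gradient norm to zero.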
The worst-case complexity of the steepest-descent algorithm with exact line-searches for unconstrain...
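For context, a classical result (not necessarily the bound analyzed in this particular paper): for a strongly convex quadratic whose Hessian eigenvalues lie in [m, M], steepest descent with exact line search contracts the objective gap at the well-known Kantorovich rate,

```latex
f(x_{k+1}) - f^* \;\le\; \left(\frac{M - m}{M + m}\right)^{2} \bigl(f(x_k) - f^*\bigr),
```

which degrades as the condition number M/m grows.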
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/1...
The use of the negative gradient direction to find local minimizers has been associated with the classical steepest...
The method of steepest descent, also known as the saddle-point method, is a natural development...
The steepest descent method and the conjugate gradient method to minimize nonlinear functions have been...
A discrete steepest ascent method that allows controls which are not piecewise constant (for example...
For unconstrained optimization, the two-point stepsize gradient method has been shown to be preferable to the ...
In this article we consider the problem of finding the solution of a system of differential inequal...
In this thesis, we deal with descent methods for functional minimization. We discuss three conditions...