In this article, we consider convergence properties of the normalized subgradient method which employs a stepsize rule based on a priori knowledge of the optimal value of the cost function. We show that the normalized subgradients carry additional information about the problem under consideration, which can be used to improve the convergence rates obtainable from the usual subgradient properties. We also present several convergence results for inexact versions of the method.
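To fix ideas, the following is a minimal sketch of a normalized subgradient iteration with a Polyak-type stepsize that assumes the optimal value f_star is known in advance; it is an illustration under these assumptions, not the article's exact algorithm, and the test problem (minimizing the 1-norm, whose optimal value is 0) and all function names are chosen purely for demonstration.

```python
import numpy as np

def normalized_subgradient(f, subgrad, x0, f_star, iters=500):
    """Sketch of a normalized subgradient method with a Polyak-type stepsize
    that uses a priori knowledge of the optimal value f_star (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    best = f(x)
    for _ in range(iters):
        g = subgrad(x)
        norm_g = np.linalg.norm(g)
        if norm_g == 0:  # zero subgradient: x is already optimal
            break
        # Step along the normalized direction g / ||g|| with length
        # (f(x) - f_star) / ||g||, i.e. the classical Polyak step.
        t = (f(x) - f_star) / norm_g
        x = x - t * (g / norm_g)
        best = min(best, f(x))
    return x, best

# Example: minimize f(x) = ||x||_1, with subgradient sign(x) and f* = 0.
f = lambda x: np.sum(np.abs(x))
subgrad = lambda x: np.sign(x)
x_final, best_val = normalized_subgradient(f, subgrad, x0=np.ones(5), f_star=0.0)
print(best_val)
```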