We consider nonconvex constrained optimization problems and propose a new approach to the convergence analysis based on penalty functions. We use classical penalty functions in an unconventional way: they enter only the theoretical convergence analysis, while the algorithm itself is penalty-free. Based on this idea, we establish several new results, including the first general analysis of diminishing stepsize methods in nonconvex constrained optimization, showing convergence to generalized stationary points, and a complexity study for sequential quadratic programming–type algorithms.
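To fix ideas, the problem class, a classical penalty function, and a generic diminishing-stepsize iteration of the kind referred to above can be written as follows. The notation is illustrative only (the symbols f, g_i, X, W_epsilon, x-hat, and gamma_k are ours, not necessarily the paper's), and the update shown is a generic scheme with convexified subproblems, not the paper's specific algorithm.

% Generic smooth nonconvex constrained problem over a closed convex set X
% (illustrative notation, not necessarily the paper's):
\begin{align*}
  &\min_{x \in X} \; f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \quad i = 1,\dots,m, \\[4pt]
  % A classical (exact) \ell_1 penalty function associated with this problem,
  % used here only as an object of analysis, not inside the algorithm:
  &W_\varepsilon(x) \;=\; f(x) \;+\; \frac{1}{\varepsilon} \sum_{i=1}^{m} \max\{0,\, g_i(x)\},
    \qquad \varepsilon > 0, \\[4pt]
  % A generic diminishing-stepsize scheme: \hat{x}(x^k) denotes the solution of a
  % convex subproblem built at the current iterate x^k, and the stepsizes satisfy
  % the usual diminishing/divergent-sum conditions:
  &x^{k+1} \;=\; x^k \;+\; \gamma_k \bigl(\hat{x}(x^k) - x^k\bigr),
    \qquad \gamma_k \to 0, \quad \sum_{k=0}^{\infty} \gamma_k = \infty.
\end{align*}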