We consider nonconvex constrained optimization problems and propose a new approach to the convergence analysis based on penalty functions. We make use of classical penalty functions in an unconventional way, in that penalty functions only enter in the theoretical analysis of convergence while the algorithm itself is penalty free. Based on this idea, we are able to establish several new results, including the first general analysis for diminishing stepsize methods in nonconvex, constrained optimization, showing convergence to generalized stationary points, and a complexity study for sequential quadratic programming–type algorithms.
We solve a general optimization problem, where only approximation sequences are known instead of exa...
The non-linear programming problem seeks to maximize a function f(x) where the n component vector x ...
Abstract. Nonconvex optimization problems arise in many areas of computational science and engineeri...
This is a companion paper to "Ghost penalties in nonconvex constrained optimization: Diminishing ste...
In this paper, we first extend the diminishing stepsize method for nonconvex constrained problems pr...
We consider a smooth penalty algorithm to solve nonconvex optimization problem based on a family of ...
Optimization problems arise in science, engineering, economy, etc. and we need to find the best sol...
Abstract. The convergence behaviour of a class of iterative methods for solving the constrained mini...
We present a global error bound for the projected gradient of nonconvex constrained optimization pro...
This paper proposes a self-adaptive penalty function and presents a penalty-based algorithm for solv...
In order to be provably convergent towards a second-order stationary point, op...
We introduce the concept of partially strictly monotone functions and apply it to construct a class ...
In this paper we develop a general convergence theory for nonmonotone line searches in optimization ...