This paper considers the analysis of continuous time gradient-based optimization algorithms through the lens of nonlinear contraction theory. It demonstrates that in the case of a time-invariant objective, most elementary results on gradient descent based on convexity can be replaced by much more general results based on contraction. In particular, gradient descent converges to a unique equilibrium if its dynamics are contracting in any metric, with convexity of the cost corresponding to the special case of contraction in the identity metric. More broadly, contraction analysis provides new insights for the case of geodesically-convex optimization, wherein non-convex problems in Euclidean space can be transformed to convex ones posed over a ...
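To make the central correspondence concrete, here is a minimal numerical sketch (not from the paper; the quadratic objective and step sizes are illustrative choices) of the claim that gradient flow x' = -grad f(x) is contracting in the identity metric exactly when f is strongly convex: the distance between any two trajectories then shrinks at least as fast as exp(-m t), where m lower-bounds the Hessian.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # f(x) = 0.5 x^T A x with A > 0,
                                        # so the flow Jacobian -A is uniformly
                                        # negative definite (contraction).

def grad_f(x):
    return A @ x

def flow(x0, dt=1e-3, steps=5000):
    # Forward-Euler integration of the gradient flow x' = -grad f(x).
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        x -= dt * grad_f(x)
        traj.append(x.copy())
    return np.array(traj)

xa = flow(np.array([4.0, -1.0]))
xb = flow(np.array([-2.0, 3.0]))
d = np.linalg.norm(xa - xb, axis=1)   # distance between the two trajectories

# The contraction rate is bounded below by the smallest eigenvalue of A,
# so log d(t) should decay linearly with slope at least -m.
m = np.linalg.eigvalsh(A).min()
print(f"d(0) = {d[0]:.3f}, d(T) = {d[-1]:.3e}")
print(f"empirical rate ~ {(np.log(d[0]) - np.log(d[-1])) / (5000 * 1e-3):.3f}"
      f"  vs. lower bound m = {m:.3f}")
```

Running this shows the inter-trajectory distance decaying exponentially at a rate no slower than m, which is the contraction-theoretic restatement of strong convexity in the identity metric.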
Linear optimization is often algorithmically simpler than non-linear convex optimizat...
This work studies the convergence of trajectories of gradient-like systems. In the first part of thi...
The standard assumption for proving linear convergence of first order methods for smooth convex opti...
The convergence behavior of gradient methods for minimizing convex differentiable functions is one o...
This paper describes new results linking constrained optimization theory and nonlinear contraction a...
We implement and test a globally convergent sequential approximate optimization algorithm based on (...
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloge...
In this paper, we present new second-order algorithms for composite convex optimization, called Cont...
This book presents state-of-the-art results and methodologies in modern global optimization, and has...
In the first part, we focus on gradient dynamical systems governed by non-smooth and also non-convex f...
It is known that for a strictly concave-convex function, the gradient method introduced by Arrow and...
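The truncated abstract appears to refer to the classical Arrow-Hurwicz gradient method for saddle points. As a hedged sketch (the objective below is an illustrative choice, not taken from the cited work), the dynamics descend in the minimizing variable and ascend in the maximizing one:

```python
def saddle_flow(x0, y0, dt=1e-2, steps=2000):
    # L(x, y) = x^2 + x*y - y^2: strictly convex in x, strictly concave in y,
    # with unique saddle point at the origin.
    x, y = x0, y0
    for _ in range(steps):
        gx = 2 * x + y      # dL/dx
        gy = x - 2 * y      # dL/dy
        x -= dt * gx        # gradient descent in x
        y += dt * gy        # gradient ascent in y
    return x, y

print(saddle_flow(3.0, -2.0))   # approaches the saddle point (0.0, 0.0)
```

On a strictly concave-convex function such as this one, the coupled descent/ascent flow spirals into the unique saddle point.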
The problem of learning from data is prevalent in the modern scientific age, and optimization provid...
As machine learning has more closely interacted with optimization, the concept of convexity has loom...
Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those met...
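As an illustration of this fixed-point viewpoint (a standard fact, not a quote from the cited paper): gradient descent is the iteration x_{k+1} = T(x_k) with T(x) = x - a*grad f(x), and for an m-strongly convex, L-smooth f, T is a contraction with factor q = max(|1 - a*m|, |1 - a*L|) < 1 whenever 0 < a < 2/L.

```python
import numpy as np

A = np.diag([1.0, 10.0])          # f(x) = 0.5 x^T A x, so m = 1, L = 10
m, L = 1.0, 10.0
a = 2.0 / (m + L)                 # step size minimizing the contraction factor
q = max(abs(1 - a * m), abs(1 - a * L))

def T(x):
    # The fixed-point map of gradient descent; its unique fixed point is x* = 0.
    return x - a * (A @ x)

x = np.array([5.0, -3.0])
for k in range(5):
    x_next = T(x)
    # Distance to the fixed point shrinks by at least the factor q each step.
    print(f"k={k}: |x| = {np.linalg.norm(x):.4f} -> {np.linalg.norm(x_next):.4f}"
          f"  (bound q*|x| = {q * np.linalg.norm(x):.4f})")
    x = x_next
```

With a = 2/(m+L) the contraction factor is q = (L-m)/(L+m), which is exactly the classical linear convergence rate of gradient descent on this problem class.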
We suggest simple implementable modif...