This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence no longer holds for extremely flat functions. This fact underlines the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of ...
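To make the two notions concrete, here is a minimal sketch in standard LaTeX of the conditions the abstract refers to; the desingularizing function \varphi, the constants c, \gamma, r and the exponent conventions are illustrative and are not taken from the paper itself.

% Sketch under standard conventions (not the paper's exact statements).
% Let f be convex with minimizer set S, minimal value f^*, and consider
% points x with 0 < f(x) - f^* < r.
%
% KL inequality with a concave desingularizing function \varphi on [0, r),
% \varphi(0) = 0, \varphi' > 0 on (0, r):
\[
  \varphi'\bigl(f(x) - f^{*}\bigr)\,\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; 1 .
\]
% Hölderian growth, i.e. an error bound with exponent p \ge 1 and \gamma > 0:
\[
  f(x) - f^{*} \;\ge\; \gamma \,\operatorname{dist}(x, S)^{p} .
\]
% For the power-type choice \varphi(s) = c\, s^{1/p} (a moderately flat
% profile), the KL inequality reads
\[
  \operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; \tfrac{p}{c}\,\bigl(f(x) - f^{*}\bigr)^{1 - 1/p},
\]
% and for convex f it is known to imply the growth condition above
% (with \gamma = c^{-p}), which is the correspondence the abstract alludes to.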
This thesis focuses on three themes related to the mathematical theory of first-order methods for co...
Code available at https://github.com/AdrienTaylor/GreedyMethods. We describe a n...
In this paper we prove that the broad class of direct-search methods of directional type, based on i...
This thesis deals with first-order descent methods for minimization problems. It ...
This is a short tutorial on complexity studies for differentiable convex optimization. A complexity ...
Motivated by recent work of Renegar, we present new computational methods and associated computation...
The study of first-order optimization is sensitive to the assumptions made on the objective function...
In this paper, we study the Kurdyka–Łojasiewicz (KL) exponent, an important quantity for analyzing t...
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. C...
Convex optimization, the study of minimizing convex functions over convex sets, is host to a multit...
We analyze worst-case convergence guarantees of first-order optimization methods over a function cla...
In this talk, we present a new framework for establishing error bounds for a class of structured con...
In many machine learning problems such as the dual form of SVM, the objective function to be minimiz...
We describe a steepest-descent potential reduction method for linear and convex minimization over a ...