In this thesis, we focus on the convergence performance of first-order methods for computing an $\epsilon$-approximate solution, measured at the $N$-th iterate, when minimizing a smooth convex function $f$. To introduce this research question, we first present the gradient descent method with the constant step size $h = 1/L$. In terms of the squared gradient norm $\|\nabla f(x_N)\|^2$, the gradient descent method has an $\mathcal{O}(L^2\|x_0 - x^*\|^2/\epsilon)$ iteration complexity. We then present Nesterov's accelerated gradient method, which has an $\mathcal{O}(L\|x_0 - x^*\|\sqrt{1/\epsilon})$ iteration complexity in terms of $\|\nabla f(x_N)\|^2$. The convergence performance of Nesterov's accelerated gradient method is much better than that of the gradient descent method, but it is still not optimal.
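To make the comparison concrete, the following is a minimal Python sketch of the two baselines, assuming NumPy is available; the function names (gradient_descent, nesterov_agm) and the quadratic test problem are illustrative choices rather than part of the thesis, and the momentum schedule shown is one standard variant of Nesterov's method.

import numpy as np

def gradient_descent(grad, x0, L, N):
    # Plain gradient descent with the constant step size h = 1/L.
    x = np.asarray(x0, dtype=float)
    for _ in range(N):
        x = x - (1.0 / L) * grad(x)
    return x

def nesterov_agm(grad, x0, L, N):
    # Nesterov's accelerated gradient method (standard t_k momentum schedule).
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    t = 1.0
    for _ in range(N):
        x_next = y - (1.0 / L) * grad(y)                     # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # momentum parameter update
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)     # extrapolation
        x, t = x_next, t_next
    return x

# Illustrative usage on a convex quadratic f(x) = 0.5 * x^T A x, whose smoothness
# constant L is the largest eigenvalue of A.
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x0 = np.ones(3)
L = 100.0
print(np.linalg.norm(grad(gradient_descent(grad, x0, L, 50))))
print(np.linalg.norm(grad(nesterov_agm(grad, x0, L, 50))))

The complexities quoted above follow from the standard inequality $\|\nabla f(x_N)\|^2 \le 2L\,(f(x_N) - f(x^*))$ for $L$-smooth functions: the $\mathcal{O}(1/N)$ and $\mathcal{O}(1/N^2)$ decay of the objective gap for gradient descent and Nesterov's method translate into the $\mathcal{O}(L^2\|x_0 - x^*\|^2/\epsilon)$ and $\mathcal{O}(L\|x_0 - x^*\|\sqrt{1/\epsilon})$ iteration counts for driving $\|\nabla f(x_N)\|^2$ below $\epsilon$.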