The total complexity (measured as the total number of gradient computations) of a stochastic first-order optimization algorithm that finds a first-order stationary point of a finite-sum smooth nonconvex objective function $F(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w)$ has been proven to be at least $\Omega(\sqrt{n}/\epsilon)$ for $n \leq \mathcal{O}(\epsilon^{-2})$, where $\epsilon$ denotes the attained accuracy $\mathbb{E}[\|\nabla F(\tilde{w})\|^2] \leq \epsilon$ for the output approximation $\tilde{w}$ (Fang et al., 2018). In this paper, we provide a convergence analysis for a slightly modified version of the SARAH algorithm (Nguyen et al., 2017a;b) and achieve a total complexity that matches the lower-bound worst-case complexity in (Fang et al., 2018) up to a constant factor when $n \leq \mathcal{O}(\epsilon^{-2})$ for nonconvex problems. For convex optimization, we propose SARAH++ with subl...
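To make the SARAH recursion referenced above concrete, here is a minimal sketch of its recursive gradient estimator. It assumes a per-sample gradient oracle grad_fi(w, i); the step size, loop lengths, and restart rule are illustrative choices, not the modified variant analyzed in the paper.

```python
import numpy as np

def sarah(w0, grad_fi, n, eta=0.05, inner_steps=100, outer_loops=10, rng=None):
    """Sketch of the SARAH recursive gradient estimator.

    grad_fi(w, i) is assumed to return the gradient of the i-th component
    f_i at w; eta, inner_steps, and outer_loops are illustrative values.
    """
    rng = rng or np.random.default_rng(0)
    w_prev = np.array(w0, dtype=float)
    for _ in range(outer_loops):
        # Full gradient at the start of each outer loop (the anchor step).
        v = np.mean([grad_fi(w_prev, i) for i in range(n)], axis=0)
        w = w_prev - eta * v
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Recursive update: the new estimator is built from the previous
            # estimator rather than the stored full gradient, which is what
            # distinguishes SARAH from SVRG-style corrections.
            v = grad_fi(w, i) - grad_fi(w_prev, i) + v
            w_prev, w = w, w - eta * v
    return w
```

The inner loop costs two component-gradient evaluations per step, so the total gradient count is the outer-loop full passes plus twice the inner iterations, which is the quantity the complexity bounds above refer to.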
Stochastic compositional optimization arises in many important machine learning applications. The ob...
In this thesis we investigate the design and complexity analysis of the algorithms to solve convex p...
We consider the fundamental problem in nonconvex optimization of efficiently reaching a stationary p...
Revision from January 2015 submission. Major changes: updated literature follow and discussion of su...
Finite-sum optimization plays an important role in the area of machine learning, and hence has trigg...
The notable changes over the current version: - worked example of convergence rates showing SAG can ...
In this paper, we develop stochastic variance reduced algorithms for solving a class of finite-sum m...
We study the complexity of producing $(\delta,\epsilon)$-stationary points of Lipschitz objectives w...
We study the problem of finding a near-stationary point for smooth minimax optimization. The recent ...
We develop a line-search second-order algorithmic framework for minimizing finite sums. We do not ma...
Non-smooth finite-sum minimization is a fundamental problem in machine learning. This paper deve...
While variance reduction methods have shown great success in solving large scale optimization proble...
The Expectation Maximization (EM) algorithm is a key reference for inference i...
This thesis aims at developing efficient algorithms for solving complex and constrained convex optim...
We study the complexity of finding the global solution to stochastic nonconvex optimization when the...