Stochastic variance reduced gradient (SVRG) is a popular variance reduction technique for accelerating stochastic gradient descent (SGD). We provide a first analysis of the method for solving a class of linear inverse problems through the lens of classical regularization theory. We prove that, for a suitable constant step size schedule, the method can achieve an optimal convergence rate in terms of the noise level (under a suitable regularity condition), and that the variance of the SVRG iterate error is smaller than that of SGD. These theoretical findings are corroborated by a set of numerical experiments.
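Since the abstract only names the method, a minimal sketch of SVRG applied to a discrete linear inverse problem Ax = y may help fix ideas. Everything below (the function name svrg_linear, the epoch length m, the zero initialization, and the default loop counts) is an illustrative assumption, not the paper's setup; only the constant step size eta mirrors the schedule discussed above:

```python
import numpy as np

def svrg_linear(A, y, eta, n_outer=50, m=None, rng=None):
    """Minimal SVRG sketch for min_x (1/2n) ||A x - y||^2 (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    m = n if m is None else m            # inner-loop length: one pass per epoch by default
    x_snap = np.zeros(d)                 # snapshot iterate
    for _ in range(n_outer):
        full_grad = A.T @ (A @ x_snap - y) / n       # full gradient at the snapshot
        x = x_snap.copy()
        for _ in range(m):
            i = rng.integers(n)                      # sample one equation uniformly
            g_x = A[i] * (A[i] @ x - y[i])           # stochastic gradient at current iterate
            g_snap = A[i] * (A[i] @ x_snap - y[i])   # same component at the snapshot
            x -= eta * (g_x - g_snap + full_grad)    # variance-reduced update
        x_snap = x                                   # refresh snapshot with last inner iterate
    return x_snap
```

The correction g_x - g_snap + full_grad is an unbiased estimate of the full gradient whose variance vanishes as x approaches the snapshot, which is the mechanism behind the smaller iterate-error variance claimed above; for noisy data y, early stopping of the outer loop plays the role of regularization.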
Inverse problems are paramount in Science and Engineering. In this paper, we consider the setup of S...
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its ...
Our goal is to improve variance reducing stochasti...
Stochastic gradient descent is popular for large scale optimization but has slow convergence asympto...
In this work we investigate the practicability of stochastic gradient descent and recently introduce...
Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems due...
Stochastic gradient descent (SGD) and its variants are among the most successful approaches for solv...
In this work we investigate the practicality of stochastic gradient descent and its variants with va...
Amongst the very first variance reduced stochastic methods for solving the emp...
This paper provides a framework to analyze stochastic gradient algorithms in a mean squared error (M...
We develop the mathematical foundations of the stochastic modified equations (SME) framework for ana...
In this paper, we propose a simple variant of the original SVRG, called variance r...
Stochastic mirror descent (SMD) algorithms have recently garnered a great deal of attention in optim...