We show that the basic stochastic gradient method applied to a strongly-convex differentiable function with a constant step-size achieves a linear convergence rate (in function value and iterates) up to a constant proportional to the step-size (under standard assumptions on the gradient).
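As a quick numerical illustration of this kind of result, the following minimal sketch (assumptions: a simple quadratic objective, Gaussian gradient noise, and step-sizes chosen only for illustration; it is not the analysis referred to above) runs constant step-size SGD and shows the squared distance to the optimum plateauing at a level roughly proportional to the step-size.

# Minimal sketch: constant step-size SGD on the strongly convex quadratic
# f(x) = 0.5 * ||x||^2 (true gradient is x, optimum is x* = 0), with additive
# Gaussian noise on each gradient. The error decays linearly, then stalls at
# a noise floor that shrinks roughly in proportion to alpha.
import numpy as np

rng = np.random.default_rng(0)

def run_constant_step_sgd(alpha, iters=4000, dim=10, noise_std=1.0):
    x = np.ones(dim)                                   # start away from x* = 0
    squared_dists = []
    for _ in range(iters):
        noisy_grad = x + noise_std * rng.standard_normal(dim)
        x = x - alpha * noisy_grad                     # constant step-size update
        squared_dists.append(float(x @ x))             # ||x_k - x*||^2
    return squared_dists

for alpha in (0.1, 0.05, 0.025):
    plateau = np.mean(run_constant_step_sgd(alpha)[-1000:])
    print(f"alpha={alpha}: plateau of E||x_k - x*||^2 ~ {plateau:.4f}")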
Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization pr...
Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochasti...
We consider optimizing a smooth convex function $f$ that is the average of a set of differe...
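To make this finite-sum setting concrete, the sketch below (an illustrative assumption, not the method proposed in that work) minimizes an average of least-squares components $f_i$ by sampling one component gradient per iteration; variance-reduced methods replace this single-component gradient with a corrected estimate.

# Hedged sketch of the finite-sum setting: f(x) = (1/n) * sum_i f_i(x) with
# f_i(x) = 0.5 * (a_i^T x - b_i)^2, minimized by plain SGD sampling one
# component per step.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)              # consistent targets, so f* = 0

def f(x):
    return 0.5 * np.mean((A @ x - b) ** 2)  # average of the n components

x, alpha = np.zeros(d), 0.01
for k in range(5000):
    i = rng.integers(n)                     # sample one component f_i
    grad_i = (A[i] @ x - b[i]) * A[i]       # gradient of f_i at x
    x -= alpha * grad_i
print(f"f(x) after SGD: {f(x):.6f}")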
With a weighting scheme proportional to t, a traditional stochastic gradient descent (SGD) algorithm...
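The sketch below (a hedged illustration; the objective, step-size schedule, and noise model are assumptions rather than details from that abstract) shows how a weighting scheme proportional to t can be maintained online: iterate $x_k$ enters a running average with weight proportional to $k$.

# Hedged sketch of weighted iterate averaging: alongside the SGD iterates,
# keep a running average in which iterate k receives weight proportional to k.
# Only the averaging recursion is the point here.
import numpy as np

rng = np.random.default_rng(2)
d = 5
x = np.ones(d)
x_avg = np.zeros(d)
weight_sum = 0.0
for k in range(1, 10001):
    grad = x + rng.standard_normal(d)        # noisy gradient of 0.5 * ||x||^2
    x -= (1.0 / k) * grad                    # decreasing step-size ~ 1/(mu * k)
    w = float(k)                             # weight proportional to k
    weight_sum += w
    x_avg += (w / weight_sum) * (x - x_avg)  # online update of the weighted average
print("||x_avg - x*||^2 =", float(x_avg @ x_avg))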
This paper studies the asymptotic behavior of the constant step Stochastic Gra...
Constant step-size Stochastic Gradient Descent exhibits two phases: a transient phase during which i...
The strong growth condition (SGC) is known to be a sufficient condition for linear convergence of th...
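For reference (notation assumed here, since the abstract is truncated), in the finite-sum setting $f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ the strong growth condition is commonly stated as the existence of a constant $\rho \ge 1$ such that
\[
\frac{1}{n}\sum_{i=1}^{n} \lVert \nabla f_i(x) \rVert^2 \;\le\; \rho\, \lVert \nabla f(x) \rVert^2 \qquad \text{for all } x,
\]
which forces every stochastic gradient to vanish at any stationary point of $f$ (an interpolation-type property).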
A common problem in statistics consists in estimating the minimizer of a convex function. When we ha...
The vast majority of convergence rate analyses for stochastic gradient methods in the literature fo...
We prove the convergence to minima and estimates on the rate of convergence for the stochastic gradi...
Consider the problem of minimizing functions that are Lipschitz and strongly convex, but not necessa...
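A minimal sketch of this nonsmooth, strongly convex setting (the objective, noise model, and suffix-averaging choice below are assumptions for illustration, not details from the truncated abstract): stochastic subgradient descent with step-size $1/(\lambda k)$, reporting the average of the last half of the iterates.

# Hedged sketch: stochastic subgradient descent on the nonsmooth, strongly
# convex f(x) = 0.5*lam*||x||^2 + ||x||_1 (minimized at x* = 0), with noisy
# subgradients and step-size 1/(lam * k). The last half of the iterates is
# averaged ("suffix averaging") before reporting the objective value.
import numpy as np

rng = np.random.default_rng(3)
d, lam, T = 5, 1.0, 20000
x = np.ones(d)
iterates = []
for k in range(1, T + 1):
    subgrad = lam * x + np.sign(x) + 0.1 * rng.standard_normal(d)
    x -= (1.0 / (lam * k)) * subgrad
    iterates.append(x.copy())
x_suffix = np.mean(iterates[T // 2:], axis=0)   # average of the last half
f_val = 0.5 * lam * float(x_suffix @ x_suffix) + float(np.abs(x_suffix).sum())
print("objective at suffix average:", f_val)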
Stochastic gradient descent is an optimisation method that combines classical gradient des...
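For concreteness, the generic SGD update takes the form below (notation assumed), where a single randomly subsampled component gradient replaces the full gradient at each step:
\[
x_{k+1} \;=\; x_k \;-\; \gamma_k\, \nabla f_{i_k}(x_k), \qquad i_k \sim \mathrm{Uniform}\{1,\dots,n\},
\]
with the step-size $\gamma_k$ either held constant or decreased over time.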
We extend the previous analysis of Schmidt et al. [2011] to derive the linear convergence rate obtai...