The strong growth condition (SGC) is known to be a sufficient condition for linear convergence of the stochastic gradient method using a constant step-size γ (SGM-CS). In this paper, we provide a necessary condition, for the linear convergence of SGM-CS, that is weaker than SGC. Moreover, when this necessary condition is violated up to an additive perturbation σ, we show that both the projected stochastic gradient method using a constant step-size, under the restricted strong convexity assumption, and the proximal stochastic gradient method, under the strong convexity assumption, exhibit linear convergence to a noise-dominated region, whose distance to the optimal solution is proportional to γσ.
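The two-regime behavior described in this abstract (geometric contraction followed by a noise-dominated plateau of radius proportional to γσ) can be illustrated with a minimal sketch, not taken from the paper: constant step-size SGD on a one-dimensional strongly convex quadratic with additive gradient noise of scale σ.

```python
import random

def sgd_constant_step(gamma, sigma, steps, x0=10.0, seed=0):
    """Constant step-size SGD on f(x) = 0.5 * x**2 (illustrative sketch).

    The true gradient is x; we perturb it with Gaussian noise of scale
    sigma to mimic an unbiased stochastic gradient oracle.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        grad = x + sigma * rng.gauss(0.0, 1.0)  # unbiased stochastic gradient
        x -= gamma * grad                        # constant step-size update
    return x

# With no noise (sigma = 0), |x| contracts geometrically toward 0.
x_clean = sgd_constant_step(gamma=0.1, sigma=0.0, steps=200)

# With noise, the iterate stalls in a region whose size scales with
# gamma * sigma rather than converging to the exact minimizer.
x_noisy = sgd_constant_step(gamma=0.1, sigma=1.0, steps=200)
```

Shrinking `gamma` shrinks the plateau (at the cost of a slower initial contraction), which is the trade-off the constant step-size analyses in the listed abstracts revolve around.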
We consider the least-squares regression problem and provide a detailed asympt...
Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization pr...
Constant step-size Stochastic Gradient Descent exhibits two phases: a transient phase during which i...
We consider optimizing a smooth convex function $f$ that is the average of a set of differe...
We show that the basic stochastic gradient method applied to a strongly-convex differentiable functi...
We consider the minimization of an objective function given access to unbiased estimates of its grad...
We consider the optimization of a smooth and strongly convex objective using constant step-size stoc...
We study the convergence of accelerated stochastic gradient descent for strongly convex objectives u...
We design step-size schemes that make stochastic gradient descent (SGD) adaptive to (i) the noise σ ...
We aim to make stochastic gradient descent (SGD) adaptive to (i) the noise $\sigma^2$ in the stochas...
The notable changes over the current version: - worked example of convergence rates showing SAG can ...
In this paper, we study a stochastic strongly convex optimization problem and propose three classes ...
Recent studies have provided both empirical and theoretical evidence illustrat...
Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochasti...