Stochastic gradient descent (SGD) is commonly used to solve finite-sum optimization problems. The main parameter one must choose for this algorithm is the step-size. Adaptive step-sizes are particularly appealing because they generally do not rely on function parameters, such as the smoothness or strong convexity constant, that must be known in advance to guarantee the convergence of SGD with a constant step-size. Loizou et al. [1] and Horváth et al. [2] proposed novel adaptive step-sizes, SPS and StoPS, which can be seen as stochastic variants of the classical Polyak step-size (PS). In this thesis, we provide a new viewpoint and analyze SPS and StoPS via operator theory: we no longer analyze the step-size individually...
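For concreteness, SPS replaces a hand-tuned step-size with one computed from the sampled loss: with $f_i^*$ a lower bound on $\min_x f_i(x)$, the SPS$_{\max}$ variant of [1] uses $\gamma_t = \min\{ (f_i(x_t) - f_i^*) / (c \|\nabla f_i(x_t)\|^2),\ \gamma_b \}$. The following minimal Python sketch illustrates the update; the function names, default constants, and the interpolation-style choice $f_i^* = 0$ are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def sgd_sps(loss_i, grad_i, x0, n, c=0.5, gamma_b=1.0, f_i_star=0.0,
            iters=1000, seed=0):
    """SGD with the stochastic Polyak step-size (SPS_max variant).

    loss_i(x, i), grad_i(x, i): loss and gradient of the i-th component.
    f_i_star: lower bound on min_x f_i(x); 0 is common for non-negative
    losses under interpolation. c and gamma_b cap the step-size.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(x, i)
        sq_norm = float(np.dot(g, g))
        if sq_norm == 0.0:          # the sampled loss is already minimized
            continue
        gamma = min((loss_i(x, i) - f_i_star) / (c * sq_norm), gamma_b)
        x -= gamma * g
    return x
```

For a non-negative loss such as least squares on consistent data, taking $f_i^* = 0$ makes the rule parameter-free apart from the cap $\gamma_b$.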
We aim to make stochastic gradient descent (SGD) adaptive to (i) the noise $\sigma^2$ in the stochas...
We consider the minimization of an objective function given access to unbiased estimates of its grad...
This thesis is concerned with stochastic optimization methods. The pioneering work in the field is t...
Recently, Loizou et al. (2021) proposed and analyzed stochastic gradient descent (SGD) with stochas...
Tuning the step size of stochastic gradient descent is tedious and error-prone. This has motivated t...
The recently proposed stochastic Polyak stepsize (SPS) and stochastic linesearch (SLS) for SGD have ...
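The SLS referenced here is, in essence, a backtracking Armijo search run on the freshly sampled loss. Below is a hedged sketch of a single step; the function signature, defaults, and halving factor are illustrative assumptions rather than the exact procedure of the paper.

```python
import numpy as np

def armijo_sls_step(x, i, loss_i, grad_i, c=0.1, gamma_init=1.0,
                    beta=0.5, max_backtracks=50):
    """One SGD step with a stochastic (Armijo) backtracking line search:
    shrink the trial step until the sampled loss decreases sufficiently."""
    g = grad_i(x, i)
    f_x = loss_i(x, i)
    sq_norm = float(np.dot(g, g))
    gamma = gamma_init
    for _ in range(max_backtracks):
        if loss_i(x - gamma * g, i) <= f_x - c * gamma * sq_norm:
            break                   # sufficient decrease on the sampled loss
        gamma *= beta               # otherwise shrink the trial step
    return x - gamma * g
```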
We consider the optimization of a smooth and strongly convex objective using constant step-size stoc...
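To see the regime this line of work studies, where constant step-size SGD converges not to the minimizer but to a stationary distribution around it, consider the toy 1-D simulation below; the quadratic objective, noise level, and step-size are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma, iters = 0.1, 1.0, 10_000

# f(x) = x^2 / 2, so the exact gradient is x; we only observe x + noise.
x, tail = 5.0, []
for t in range(iters):
    g = x + sigma * rng.standard_normal()   # unbiased gradient estimate
    x -= gamma * g
    if t >= iters // 2:                     # record the second half
        tail.append(x)

# The iterates stop making progress and fluctuate around 0 with variance
# gamma * sigma^2 / (2 - gamma) ~= 0.053: the chain's stationary spread.
print(np.mean(tail), np.var(tail))
```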
This paper revisits the Polyak step size schedule for convex optimization problems, proving that a s...
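For reference, the classical Polyak step-size that this schedule revisits sets, given the optimal value $f^*$,
\[
\gamma_t = \frac{f(x_t) - f^*}{\|\nabla f(x_t)\|^2}, \qquad x_{t+1} = x_t - \gamma_t \nabla f(x_t),
\]
and for a convex, $G$-Lipschitz $f$ it yields the standard guarantee $\min_{t \le T} f(x_t) - f^* \le G \|x_0 - x^*\| / \sqrt{T+1}$.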
where the function $f$ or its gradient $\nabla f$ is not directly accessible except through Monte Carlo estim...
We design step-size schemes that make stochastic gradient descent (SGD) adaptive to (i) the noise σ ...
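A classical route to the noise adaptivity described here, though not necessarily the scheme this work designs, is the decaying step-size $\gamma_t = \gamma_0 / \sqrt{t+1}$: it tolerates gradient noise $\sigma^2 > 0$ and still converges, more slowly, when the noise vanishes. A minimal sketch with hypothetical names:

```python
import numpy as np

def sgd_sqrt_decay(grad_est, x0, gamma0=1.0, iters=1000):
    """Plain SGD with the noise-robust O(1/sqrt(t)) step-size decay.

    grad_est(x): an unbiased, possibly noisy, gradient estimate."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(iters):
        x -= gamma0 / np.sqrt(t + 1) * grad_est(x)
    return x

# Example: noisy gradients of f(x) = ||x||^2 (illustrative setup).
rng = np.random.default_rng(1)
x_hat = sgd_sqrt_decay(lambda x: 2 * x + rng.standard_normal(x.shape),
                       x0=np.ones(3))
```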
Stochastic Gradient Descent (SGD) is a popular tool in training large-scale machine learning models....
In this dissertation, a theoretical framework based on concentration inequalities for empirical proc...
A stochastic approximation (SA) algorithm with new adaptive step sizes for solving unconstrained mi...
Steplength selection is a crucial issue for the effectiveness of stochastic gradient methods...
This dissertation investigates the use of sampling methods for solving stochastic optimization probl...