We propose a perturbed gradient algorithm with stochastic noises to solve a general class of optimization problems. We provide a convergence proof for this algorithm, under classical assumptions on the descent direction, and new assumptions on the stochastic noises. Instead of requiring the stochastic noises to correspond to martingale increments, we only require these noises to be asymptotically so. Furthermore, the variance of these noises is allowed to grow infinitely under the control of a decreasing sequence linked with the gradient stepsizes. We then compare this new approach and assumptions with classical ones in the stochastic approximation literature. As an application of this general setting, we show how the algorithm to solve infinit...
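The perturbed gradient iteration described in this abstract can be sketched as follows; the specific schedules below (stepsizes gamma_k = 1/(k+1) and a noise standard deviation sigma_k = (k+1)^(1/4) growing under the control of the stepsizes) are illustrative assumptions, not the paper's exact conditions.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, steps=1000, seed=0):
    """Sketch of the perturbed iteration x_{k+1} = x_k - gamma_k * (grad(x_k) + w_k).

    gamma_k is a decreasing stepsize sequence; the noise w_k has a variance that
    is allowed to grow, at a rate controlled by the stepsizes (illustrative choice).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        gamma = 1.0 / (k + 1)           # decreasing stepsizes
        sigma = (k + 1) ** 0.25         # noise std grows, but gamma * sigma is square-summable
        w = sigma * rng.standard_normal(x.shape)
        x = x - gamma * (grad(x) + w)
    return x

# Hypothetical usage: minimize f(x) = ||x||^2 / 2, whose gradient is x.
x_star = perturbed_gradient_descent(lambda x: x, np.array([5.0, -3.0]))
```

Because gamma_k * sigma_k = (k+1)^(-3/4) is square-summable, the accumulated noise stays controlled and the iterate concentrates near the minimizer.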
We consider a stochastic version of the proximal point algorithm for convex optimization problems po...
Convergence of a projected stochastic gradient algorithm is demonstrated for convex objective functi...
We focus on solving closed-loop stochastic problems, and propose a perturbed gradient algorithm to a...
We develop the mathematical foundations of the stochastic modified equations (SME) framework for ana...
We consider in this work small random perturbations (of multiplicative noise type) of the gradient f...
We design step-size schemes that make stochastic gradient descent (SGD) adaptive to (i) the noise σ ...
Stochastic Approximation (SA) is a classical algorithm that has had since the early days a huge impa...
Stochastic gradient descent is an optimisation method that combines classical gradient des...
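As a concrete illustration of SGD combining classical gradient descent with random subsampling, here is a minimal least-squares sketch; the problem instance, batch size, and step-size decay are assumptions for illustration, not the cited paper's setup.

```python
import numpy as np

def sgd_least_squares(A, b, batch=8, steps=500, lr=0.1, seed=0):
    """Plain SGD with random subsampling for f(x) = ||Ax - b||^2 / (2n).

    Illustrative sketch only: each step estimates the gradient from a random
    mini-batch of rows and uses a slowly decaying step size.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(steps):
        idx = rng.integers(0, n, size=batch)            # random subsample of rows
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch    # stochastic gradient estimate
        x -= lr / (1 + 0.01 * k) * g                    # decaying step size
    return x

# Noiseless synthetic problem with a known minimizer (hypothetical data).
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
x_hat = sgd_least_squares(A, A @ x_true)
```

In this noiseless setting the stochastic gradient vanishes at the minimizer for every mini-batch, so the iterates converge to `x_true`.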
Consider a probability measure on a Hilbert space defined via its density with respect to a Gaussian...
In view of solving convex optimization problems with noisy gradient input, we analyze the asymptotic...
We propose a stochastic gradient descent algorithm for learning the gradient of a regression...
We introduce a general framework for nonlinear stochastic gradient descent (SGD) for the scenarios w...
In this paper, a general stochastic optimization procedure is studied, unifyin...