Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most prior analyses are made under restrictive assumptions, such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, covering ...
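A minimal sketch of the kind of recursion such a scheme describes, theta_{n+1} = theta_n - gamma_{n+1} H(theta_n, X_{n+1}) with {X_n} a state-dependent Markov chain; the two-state chain, the drift H, and the step sizes below are illustrative assumptions, not the paper's actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    def next_state(x, theta):
        # Hypothetical state-dependent Markov kernel on {0, 1}: the
        # transition probability itself depends on the current iterate.
        p = 1.0 / (1.0 + np.exp(-theta * (1.0 if x == 0 else -1.0)))
        return 1 - x if rng.random() < p else x

    def H(theta, x):
        # Placeholder drift term; in general it need not be the gradient
        # of any objective (the mean field is not of gradient type).
        return theta - (1.0 if x == 1 else -1.0)

    theta, x = 0.5, 0
    for n in range(1, 10001):
        x = next_state(x, theta)      # Markovian, hence biased, sample
        gamma = 0.1 / np.sqrt(n)      # diminishing step size
        theta -= gamma * H(theta, x)  # SA update
    print(theta)

Because the samples come from a Markov chain rather than i.i.d. draws, each H(theta_n, X_{n+1}) is a biased estimate of the mean field, which is exactly what the usual unbiasedness assumption would rule out.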
Gradient-based optimization algorithms, in particular their stochastic counterparts, have become by ...
We prove the convergence to minima and estimates on the rate of convergence for the stochastic gradi...
This paper analyzes the convergence for a large class of Riemannian stochastic ap...
Stochastic approximation (SA) is a classical algorithm that has had since the ...
In this paper, we consider the minimization of a convex objective function defined on a Hilbert spac...
Asynchronous stochastic approximations (SAs) are an important class of model-free algorithms, tools,...
We consider a discrete time, finite state Markov reward process that depends on a set of pa...
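A minimal sketch of such a parameterized Markov reward process, assuming a two-state chain with a logistic parameterization of one transition probability; the chain, rewards, and parameterization are illustrative assumptions, with the long-run average reward computed from the stationary distribution.

    import numpy as np

    def avg_reward(theta):
        # Hypothetical 2-state Markov reward process whose transitions
        # and rewards depend on a scalar parameter theta.
        p = 1.0 / (1.0 + np.exp(-theta))         # parameterized transition prob.
        P = np.array([[1.0 - p, p],
                      [0.3, 0.7]])               # transition matrix
        r = np.array([0.0, 1.0 - 0.5 * theta])   # state-dependent rewards
        # Stationary distribution: left eigenvector of P for eigenvalue 1.
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        pi /= pi.sum()
        return pi @ r                            # long-run average reward

    print(avg_reward(0.3))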
Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochasti...
We consider the stochastic approximation problem in a streaming framework where an objective is mini...
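One way such a streaming scheme might look, as a minimal sketch assuming data arrive in mini-batches, a least-squares objective, and Polyak-Ruppert averaging of the iterates; the batch size, learning-rate schedule, and regression model are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    d, theta_star = 5, np.ones(5)

    def stream_batch(size):
        # Hypothetical data stream: noisy linear-regression samples.
        X = rng.normal(size=(size, d))
        y = X @ theta_star + 0.1 * rng.normal(size=size)
        return X, y

    theta, theta_bar = np.zeros(d), np.zeros(d)
    for t in range(1, 501):
        X, y = stream_batch(size=8)            # one streaming mini-batch
        grad = X.T @ (X @ theta - y) / len(y)  # mini-batch gradient estimate
        theta -= (0.5 / t**0.75) * grad        # diminishing step size
        theta_bar += (theta - theta_bar) / t   # Polyak-Ruppert average
    print(np.linalg.norm(theta_bar - theta_star))

Averaging the iterates rather than keeping only the last one is the standard device for recovering optimal asymptotic variance under slowly decaying step sizes.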
Stochastic-approximation gradient methods are attractive for large-scale convex optimizati...
This paper presents an overview of gradient-based methods for minimization of noisy functions. It i...