In this paper, we extend the framework for the convergence of stochastic approximations. Such a procedure is used in many methods, such as parameter estimation inside a Metropolis-Hastings algorithm, stochastic gradient descent, or the stochastic Expectation Maximization algorithm. It is given by
\[
\theta_{n+1} = \theta_n + \Delta_{n+1}\, H_{\theta_n}(X_{n+1}),
\]
where $(X_n)_{n\in\mathbb{N}}$ is a sequence of random variables following a parametric distribution which depends on $(\theta_n)_{n\in\mathbb{N}}$, and $(\Delta_n)_{n\in\mathbb{N}}$ is a step-size sequence. The convergence of such a stochastic approximation has already been proved under an assumption of geometric ergodicity of the Markov dynamics. However, in many practical situations this hypothesis is not satisfied, for instance for any heavy-tailed target distribution...
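As a simple illustration of the recursion above, stochastic gradient descent for the minimization of an expected loss $\theta \mapsto \mathbb{E}\big[\ell(\theta, X)\big]$, where $\ell$ denotes a generic per-sample loss (our notation, not fixed by the text), is recovered by taking $H_{\theta}(x) = -\nabla_\theta \ell(\theta, x)$ together with a decreasing step sequence such as $\Delta_n = 1/n$:
\[
\theta_{n+1} = \theta_n - \Delta_{n+1}\, \nabla_\theta \ell(\theta_n, X_{n+1}).
\]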