State-of-the-art methods for solving smooth optimization problems are the nonlinear conjugate gradient, low-memory BFGS, and majorize-minimize (MM) subspace algorithms. The MM subspace algorithm, which was introduced more recently, has shown good practical performance compared with other methods on various optimization problems arising in signal and image processing. However, to the best of our knowledge, no general result exists concerning the theoretical convergence rate of the MM subspace algorithm. This paper aims at deriving such convergence rates for both batch and online versions of the MM subspace algorithm and, in particular, discusses the influence of the choice of the subspace.
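For orientation, a minimal sketch of a generic MM subspace iteration is given below, under common assumptions from the memory-gradient literature; the exact subspace construction and majorant analyzed in the paper may differ.

\[
x_{k+1} = x_k + D_k u_k,
\qquad
u_k \in \arg\min_{u \in \mathbb{R}^{m}} Q(x_k + D_k u \mid x_k),
\]

where \(Q(\cdot \mid x_k)\) is a surrogate majorizing the objective \(f\) at \(x_k\) (i.e., \(Q(x \mid x_k) \ge f(x)\) for all \(x\), with equality at \(x = x_k\)), and a typical memory-gradient choice of the subspace matrix is \(D_k = [-\nabla f(x_k), \; x_k - x_{k-1}]\), so that each iteration only requires minimizing the majorant over a low-dimensional subspace.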