Motivated by penalized likelihood maximization in complex models, we study optimization problems where neither the function to optimize nor its gradient has an explicit expression, but the gradient can be approximated by a Monte Carlo technique. We propose a new algorithm based on a stochastic approximation of the Proximal-Gradient (PG) algorithm. This new algorithm, named Stochastic Approximation PG (SAPG), combines a stochastic gradient descent step, which, roughly speaking, computes a smoothed approximation of the past gradients along the iterations, with a proximal step. The choice of the step size and of the Monte Carlo batch size for the stochastic gradient descent step in SAPG is discussed. Our convergence...
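As a rough illustration of the iteration described above, the following Python sketch shows one plausible SAPG loop with an l1 penalty, so the proximal step reduces to soft-thresholding. The function name sapg, the schedules step, smooth and batch, and the helper mc_grad are illustrative assumptions, not the paper's exact specification.

import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sapg(theta0, mc_grad, lam, n_iter=500,
         step=lambda n: 1.0 / (n + 1) ** 0.6,     # step-size schedule (assumed)
         smooth=lambda n: 1.0 / (n + 1) ** 0.5,   # smoothing-weight schedule (assumed)
         batch=lambda n: 10 + n):                 # Monte Carlo batch-size schedule (assumed)
    # Sketch of a Stochastic Approximation Proximal-Gradient loop.
    # mc_grad(theta, m) is assumed to return a Monte Carlo estimate of the
    # gradient of the smooth part at theta, computed from m draws; lam is
    # the l1 penalty weight.
    theta = np.asarray(theta0, dtype=float)
    g_smooth = np.zeros_like(theta)               # running smoothed gradient
    for n in range(n_iter):
        gamma, delta, m = step(n), smooth(n), batch(n)
        h = mc_grad(theta, m)                                 # Monte Carlo gradient estimate
        g_smooth = (1.0 - delta) * g_smooth + delta * h       # stochastic gradient smoothing step
        theta = soft_threshold(theta - gamma * g_smooth, gamma * lam)  # proximal step
    return theta

# Toy usage (hypothetical): noisy gradient of 0.5 * ||theta - mu||^2 with an l1 penalty.
rng = np.random.default_rng(0)
mu = np.array([1.0, 0.0, -2.0])
noisy_grad = lambda theta, m: theta - (mu + rng.normal(0.0, 1.0, (m, mu.size)).mean(axis=0))
print(sapg(np.zeros(3), noisy_grad, lam=0.1))

The decaying smoothing weight is what makes the gradient estimate an average over past iterations rather than a single fresh Monte Carlo draw; the precise coupling of the step-size, smoothing and batch-size sequences is exactly what the abstract says the paper analyzes.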