Abstract The proximal gradient algorithm is an appealing approach to finding solutions of non-smooth composite optimization problems, but it may exhibit only weak convergence in the infinite-dimensional setting. In this paper, we introduce a modified proximal gradient algorithm with outer perturbations in Hilbert space and prove that the algorithm converges strongly to a solution of the composite optimization problem. We also discuss the bounded perturbation resilience of the basic algorithm of this iterative scheme and illustrate it with an application.
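The basic (unperturbed) proximal gradient step the abstract builds on is x_{k+1} = prox_{λg}(x_k − λ∇f(x_k)) for a composite objective f + g with f smooth and g non-smooth. A minimal finite-dimensional sketch, assuming f(x) = ½‖Ax − b‖² and g(x) = τ‖x‖₁ (the lasso instance, where the proximal map is soft-thresholding; the problem data and parameter names below are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, tau, step, iters=500):
    # Minimize f(x) + g(x) with f(x) = 0.5 * ||Ax - b||^2 (smooth)
    # and g(x) = tau * ||x||_1 (non-smooth), via the iteration
    #   x_{k+1} = prox_{step * g}(x_k - step * grad f(x_k)).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * tau)
    return x

# Illustrative sparse recovery problem (synthetic data).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of grad f
x_hat = proximal_gradient(A, b, tau=0.1, step=1.0 / L)
```

With a step size below 1/L the iterates converge to a minimizer; the modified algorithm of the paper would additionally inject (bounded) outer perturbations into each step, which this sketch omits.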
Motivated by applications in statistical inference, we propose two versions of...
We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of ...
Composite optimization models consist of the minimization of the sum of a smooth (not necessarily co...
We study the extension of the proximal gradient algorithm where only a stochastic gradient estimate ...
We address composite optimization problems, which consist in minimizing the sum of a smooth and a mer...
In machine learning research, the proximal gradient methods are popular for solving various optimiza...
We investigate projected scaled gradient (PSG) methods for convex minimization problems. These metho...
In the present paper, we investigate a linearized proximal algorithm (LPA) for solving a convex com...
In a Hilbert space $H$, based on inertial dynamics with dry friction damping, we introduce a new cla...
Decentralized optimization is a powerful paradigm that finds applications in engineering and learnin...