We consider the minimization of a differentiable function F defined on R^N, with a Lipschitz-continuous gradient but not necessarily convex. We propose an accelerated gradient descent approach which combines three strategies, namely (i) a variable metric derived from the majorization-minimization principle; (ii) a subspace strategy incorporating information from past iterates; (iii) a block alternating update. Under the assumption that F satisfies the Kurdyka-Łojasiewicz property, we give conditions under which the sequence generated by the resulting block majorize-minimize subspace algorithm converges to a critical point of the objective function, and we exhibit convergence rates for its iterates.
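To make the three ingredients concrete, here is a minimal sketch (not the authors' exact algorithm) of one majorize-minimize subspace step for a single block, written in NumPy. The helpers `grad_f` and `majorant_metric` are assumptions introduced for illustration: `majorant_metric(x)` is supposed to return a symmetric positive-definite matrix A(x) such that the quadratic model built at x majorizes F, as in the majorization-minimization principle, and the search subspace is the classical memory-gradient one spanned by the negative gradient and the previous displacement.

```python
import numpy as np

def mm_subspace_step(grad_f, majorant_metric, x, x_prev):
    """One majorize-minimize subspace step (illustrative sketch).

    Hypothetical helpers, not from the paper:
      grad_f(x)          -> gradient of F at x
      majorant_metric(x) -> SPD matrix A(x) such that
                            F(x) + g @ d + 0.5 * d @ A(x) @ d
                            majorizes F(x + d) around x.
    """
    g = grad_f(x)
    A = majorant_metric(x)
    # Memory-gradient subspace: -grad F(x_k) and the previous
    # displacement x_k - x_{k-1} (dropped at the first iteration).
    dirs = [-g]
    if x_prev is not None:
        dirs.append(x - x_prev)
    D = np.column_stack(dirs)
    # Minimize the quadratic majorant over the subspace, i.e. over u
    # with d = D @ u: the optimal u solves (D^T A D) u = -D^T g.
    H = D.T @ A @ D
    u = np.linalg.solve(H + 1e-12 * np.eye(H.shape[0]), -D.T @ g)
    return x + D @ u
```

In a block alternating variant, one would cycle over blocks of coordinates and apply such a step to each block in turn, keeping the other blocks fixed; the convergence guarantees stated above rely on the Kurdyka-Łojasiewicz property of F rather than on convexity.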