In a learning context, data distributions are usually unknown, and observation models are sometimes complex. In an inverse problem setup, these facts often lead to the minimization of a loss function whose analytic expression is uncertain, so its gradient cannot be evaluated exactly. These issues have promoted the development of so-called stochastic optimization methods, which are able to cope with stochastic errors in the gradient term. A natural strategy is to start from a deterministic optimization approach as a baseline and to incorporate a stabilization procedure (e.g., a decreasing stepsize, averaging) that yields improved robustness to stochastic errors. In the context of large-scale, differ...
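As a concrete illustration of the stabilization procedures mentioned above, here is a minimal NumPy sketch combining a decreasing stepsize with Polyak-Ruppert iterate averaging. The function name averaged_sgd, the 1/sqrt(k) schedule, and the toy noisy oracle are illustrative assumptions, not the specific scheme studied in this paper.

```python
import numpy as np

def averaged_sgd(grad_oracle, theta0, n_iter=1000, c=1.0):
    """SGD with decreasing stepsize and Polyak-Ruppert averaging.
    A sketch: grad_oracle(theta) is assumed to return an unbiased,
    noisy estimate of the gradient of the (unknown) loss."""
    theta = theta0.copy()
    theta_avg = theta0.copy()
    for k in range(1, n_iter + 1):
        gamma = c / np.sqrt(k)                 # decreasing stepsize ~ 1/sqrt(k)
        theta = theta - gamma * grad_oracle(theta)
        theta_avg += (theta - theta_avg) / k   # running average of the iterates
    return theta_avg

# Toy usage: noisy gradient of f(theta) = 0.5 * ||theta||^2 (minimizer: 0)
rng = np.random.default_rng(0)
oracle = lambda th: th + 0.1 * rng.standard_normal(th.shape)
print(averaged_sgd(oracle, np.ones(3)))
```

Returning the averaged iterate rather than the last one is what damps the residual gradient noise once the stepsize becomes small.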
Abstract. Majorization-minimization algorithms consist of successively minimizing a sequence of uppe...
Majorization-minimization algorithms consist of successively minimizing a sequ...
The majorize-minimize (MM) optimization technique has received considerable attention in signal and ...
A wide class of problems involves the minimization of a coercive and different...
Stochastic optimization plays an important role in solving many problems encou...
In this paper, we propose a version of the MM Subspace algorithm in a stochast...
State-of-the-art methods for solving smooth optimization problems are nonlinea...
Majorization-minimization algorithms consist of iteratively minimizing a major...
Abstract. In this paper, a stochastic gradient descent algorithm is proposed for the binary classifica...
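For intuition, a plain SGD recursion for binary classification with the logistic loss and labels in {-1, +1} could look as follows. This is a generic sketch, not the specific algorithm proposed in that paper; the loss, the stepsize rule, and the toy data are illustrative assumptions.

```python
import numpy as np

def sgd_logistic(X, y, n_epochs=10, gamma0=0.5):
    """Plain SGD on the logistic loss log(1 + exp(-y_i <x_i, w>)),
    labels y_i in {-1, +1}. An illustrative sketch only."""
    n, d = X.shape
    w = np.zeros(d)
    rng = np.random.default_rng(0)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):           # one pass over shuffled samples
            t += 1
            gamma = gamma0 / np.sqrt(t)        # decreasing stepsize
            margin = y[i] * (X[i] @ w)
            grad = -y[i] * X[i] / (1.0 + np.exp(margin))  # gradient of the loss
            w -= gamma * grad
    return w

# Toy usage: four linearly separable 2-D points
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(sgd_logistic(X, y))
```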
Majorization...
Complex-valued data are encountered in many application areas of signal and im...
A simple optimization principle [slide figure: objective f(θ) and a majorant surrogate g(θ)]. Objective: min_{θ ∈ Θ} f(θ). Principle called Majorization...
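To make the principle concrete: when f is differentiable with an L-Lipschitz gradient, the quadratic function g(θ | θ_k) = f(θ_k) + ⟨∇f(θ_k), θ − θ_k⟩ + (L/2)‖θ − θ_k‖² majorizes f and touches it at θ_k, and minimizing it in closed form gives the next iterate. Below is a minimal sketch under that assumption; the quadratic majorant is one standard choice among those used in the MM literature, not the only construction discussed in these works.

```python
import numpy as np

def mm_quadratic(f_grad, theta0, L, n_iter=100):
    """Majorization-Minimization with the Lipschitz quadratic majorant
    g(theta | theta_k) = f(theta_k) + <grad f(theta_k), theta - theta_k>
                         + (L/2) * ||theta - theta_k||^2.
    Its exact minimizer is theta_k - grad f(theta_k) / L, so each MM
    iteration decreases f, provided L upper-bounds the Lipschitz
    constant of the gradient (a sketch, not a tuned implementation)."""
    theta = theta0.copy()
    for _ in range(n_iter):
        theta = theta - f_grad(theta) / L   # closed-form minimizer of the majorant
    return theta

# Toy usage: f(theta) = 0.5 * theta' A theta, A symmetric positive definite
A = np.array([[3.0, 1.0], [1.0, 2.0]])
L = np.linalg.eigvalsh(A).max()             # Lipschitz constant of the gradient
print(mm_quadratic(lambda th: A @ th, np.array([1.0, -1.0]), L))
```

With this particular majorant, each MM step coincides with a gradient step of size 1/L, which is the standard way the tangency and majorization properties translate into a monotone decrease of f.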