We propose an inexact variable-metric proximal point algorithm to accelerate gradient-based optimization algorithms. The proposed scheme, called QNing, can notably be applied to incremental first-order methods such as the stochastic variance-reduced gradient descent algorithm (SVRG) and other randomized incremental optimization algorithms. QNing is also compatible with composite objectives, meaning that it can provide exactly sparse solutions when the objective involves a sparsity-inducing regularization. When combined with limited-memory BFGS rules, QNing is particularly effective for solving high-dimensional optimization problems, while enjoying a worst-case linear convergence rate for strongly convex problems...
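The abstract above describes QNing only at a high level, so here is a minimal sketch, in Python, of the generic technique it builds on: an L-BFGS outer loop applied to the Moreau-Yosida smoothing of the objective, with each smoothed gradient obtained from an approximately solved proximal subproblem. The function names, the prox_solver interface, and the absence of a line search or acceptance test are illustrative assumptions, not the authors' algorithm or code.

import numpy as np

def two_loop_recursion(g, s_list, y_list):
    # Standard L-BFGS two-loop recursion: returns an approximate
    # inverse-Hessian-vector product from the stored curvature pairs.
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if y_list:  # initial Hessian scaling from the most recent pair
        q *= (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def smoothed_lbfgs(prox_solver, x0, kappa=1.0, memory=10, max_iter=100, tol=1e-8):
    # Hedged sketch of an inexact proximal-point outer loop with L-BFGS, in the
    # spirit of the scheme described above (not the authors' implementation).
    # It runs L-BFGS on the Moreau-Yosida smoothing
    #     F(x) = min_w f(w) + (kappa/2) * ||w - x||^2,
    # whose gradient is kappa * (x - w*), where w* is computed only
    # approximately by the user-supplied inner solver prox_solver.
    x = np.asarray(x0, dtype=float).copy()
    s_list, y_list = [], []
    x_prev, g_prev = None, None
    for _ in range(max_iter):
        w = prox_solver(x, kappa)            # inexact proximal mapping of f at x
        g = kappa * (x - w)                  # (inexact) gradient of F at x
        if np.linalg.norm(g) <= tol:
            return w
        if g_prev is not None:
            s, y = x - x_prev, g - g_prev
            if s @ y > 1e-12:                # keep only positive-curvature pairs
                s_list.append(s)
                y_list.append(y)
                if len(s_list) > memory:
                    s_list.pop(0)
                    y_list.pop(0)
        # Quasi-Newton direction; while no pairs are stored, fall back to a
        # gradient step on F, whose gradient is kappa-Lipschitz.
        d = two_loop_recursion(g, s_list, y_list) if s_list else g / kappa
        x_prev, g_prev = x, g
        x = x - d   # a practical variant adds a line search or falls back to w
    return prox_solver(x, kappa)

Here prox_solver(x, kappa) stands for any routine returning an approximate minimizer of f(w) + (kappa/2)||w - x||^2, for instance a few passes of SVRG or of proximal gradient descent on that subproblem; the accuracy of this inner solve is the sense in which the proximal point evaluations are inexact.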
We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, com...
In this paper, we investigate the attractive properties of the proximal gradie...
Regularized risk minimization often involves non-smooth optimization, either because of the loss fu...
Proximal methods are known to identify the underlying substructure of nonsmooth optimization problem...
We introduce a generic scheme for ac...
Many applications in machine learning or signal processing involve nonsmooth o...
In the first chapter of this thesis, we analyze the global convergence rate of a proximal quasi-Newt...
Recently, several methods were proposed for sparse optimization which make careful use of second-orde...
http://jmlr.org/papers/volume18/17-748/17-748.pdf
We introduce a generic scheme...
We introduce a framework for quasi-Newton forward-backward splitting algorith...
This thesis aims at developing efficient algorithms for solving some fundamental engineering problem...
Optimization problems arise naturally in supervised machine learning. A typical example...
Gradient boosting is a prediction method that iteratively combines weak learners to produce a comple...
Optimization is an important discipline of applied mathematics with far-reaching applications. Optim...