We propose an inexact variable-metric proximal point algorithm to accelerate gradient-based optimization algorithms. The proposed scheme, called QNing, can notably be applied to incremental first-order methods such as the stochastic variance-reduced gradient descent algorithm (SVRG) and other randomized incremental optimization algorithms. QNing is also compatible with composite objectives, meaning that it has the ability to provide exactly sparse solutions when the objective involves a sparsity-inducing regularization. When combined with limited-memory BFGS rules, QNing is particularly effective for solving high-dimensional optimization problems, while enjoying a worst-case linear convergence rate for strongly convex problems.
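As a rough illustration of the idea the abstract describes (not the authors' QNing implementation), the sketch below shows an inexact proximal-point outer loop in Python: each outer iteration approximately solves the proximal subproblem with an inner first-order method, and an ℓ1 proximal operator yields exactly sparse iterates, as in the composite setting mentioned above. The variable-metric (L-BFGS) acceleration of the outer step is omitted for brevity; names such as `inexact_prox_point`, `kappa`, and `inner_steps` are illustrative assumptions.

```python
import numpy as np

def prox_l1(x, lam):
    """Soft-thresholding: proximal operator of lam * ||x||_1.
    Small coordinates are set exactly to zero, which is how composite
    schemes return exactly sparse solutions."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def inexact_prox_point(grad_f, x0, kappa, lam, step,
                       inner_steps=50, outer_steps=100):
    """Inexact proximal-point loop (illustrative sketch).

    grad_f : gradient of the smooth part f
    kappa  : proximal regularization strength
    lam    : ell_1 weight (0 disables the composite term)
    step   : inner step size, roughly 1 / (L + kappa)
    """
    x = x0.copy()
    for _ in range(outer_steps):
        # Inner loop: proximal-gradient steps on the subproblem
        #   f(w) + lam * ||w||_1 + (kappa / 2) * ||w - x||^2,
        # warm-started at the current outer iterate x.
        w = x.copy()
        for _ in range(inner_steps):
            g = grad_f(w) + kappa * (w - x)
            w = prox_l1(w - step * g, step * lam)
        x = w  # accept the (approximate) proximal point
    return x

# Usage on a toy lasso-type problem: f(w) = 0.5 * ||A w - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3))
grad_f = lambda w: A.T @ (A @ w - b)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad_f
w_hat = inexact_prox_point(grad_f, np.zeros(20),
                           kappa=1.0, lam=5.0, step=1.0 / (L + 1.0))
print("nonzeros:", np.count_nonzero(w_hat))
```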
A new family of numerically efficient variable metric or quasi-Newton methods for unconstrained opti...
An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities...
We consider projected Newton-type methods for solving large-scale optimization problems arising in m...
Proximal methods are known to identify the underlying substructure of nonsmoot...
Gradient boosting is a prediction method that iteratively combines weak learners to produce a comple...
This thesis aims at developing efficient algorithms for solving some fundamental engineering problem...
Recently several methods were proposed for sparse optimization which make careful use of second-orde...
In machine learning research, proximal gradient methods are popular for solving various optimiza...
Optimization is an important discipline of applied mathematics with far-reaching applications. Optim...
We present the first accelerated randomized algorithm for solving linear syste...
We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full ...
Regularized risk minimization often involves non-smooth optimization, either because of the loss fun...
We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorith...
Acceleration in optimization is a term that is generally applied to optimization algorithms presenti...