In this work we introduce a new optimisation method called SAGA, in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and supports composite objectives, where a proximal operator is applied to the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
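The abstract does not spell out the update rule, so below is a minimal sketch of a SAGA-style iteration for objectives of the form (1/n) sum_i f_i(x) + h(x): a table of the most recently evaluated per-example gradients is kept, each step corrects the freshly computed gradient using the stored one and the running average, and the non-smooth term h is handled through its proximal operator. This is an illustrative sketch, not code from the paper; the names grad_i, prox, and the toy lasso setup are assumptions made here for the example.

```python
import numpy as np

def saga(grad_i, prox, n, dim, step_size, n_iters, x0=None, seed=0):
    """Sketch of a SAGA-style solver for (1/n) * sum_i f_i(x) + h(x).

    grad_i(i, x) -> gradient of the i-th smooth term f_i at x   (assumed interface)
    prox(v, s)   -> proximal operator of s * h evaluated at v   (assumed interface)
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(dim) if x0 is None else np.asarray(x0, dtype=float).copy()

    # Table of the last gradient seen for each term, plus their running
    # average; this memory is what distinguishes SAGA/SAG from plain SGD.
    grad_table = np.stack([grad_i(i, x) for i in range(n)])
    grad_avg = grad_table.mean(axis=0)

    for _ in range(n_iters):
        j = int(rng.integers(n))
        g_new = grad_i(j, x)
        # Unbiased variance-reduced gradient estimate (the SAGA correction).
        v = g_new - grad_table[j] + grad_avg
        # Proximal step handles the (possibly non-smooth) regulariser h.
        x = prox(x - step_size * v, step_size)
        # Refresh the table entry and the running average in O(dim) time.
        grad_avg += (g_new - grad_table[j]) / n
        grad_table[j] = g_new
    return x


# Toy usage (hypothetical): lasso, f_i(x) = 0.5 * (a_i @ x - b_i)^2, h(x) = lam * ||x||_1.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b, lam = rng.standard_normal((200, 50)), rng.standard_normal(200), 0.1
    grad_i = lambda i, x: A[i] * (A[i] @ x - b[i])
    soft_threshold = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)
    L = np.max(np.sum(A ** 2, axis=1))          # smoothness constant of the f_i
    x_hat = saga(grad_i, soft_threshold, n=200, dim=50,
                 step_size=1.0 / (3.0 * L), n_iters=20000)
    print("non-zeros:", np.count_nonzero(np.abs(x_hat) > 1e-8))
```

Per iteration only one row of the table and the running average change, so the cost is one gradient evaluation plus O(dim) bookkeeping; for linear models each stored gradient can be reduced to a single scalar, keeping the memory at O(n) rather than O(n * dim).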
Related records:
- The notable changes over the current version: a worked example of convergence rates showing SAG can ...
- This work considers optimization methods for large-scale machine learning (ML). Optimization in ML ...
- Regularized risk minimization often involves non-smooth optimization, either because of the loss function ...
- Appears in Advances in Neural Information Processing Systems 30 (NIPS 2017), 28 pages. ...
- There have been a number of recent advances in accelerated gradient and proximal schemes for optimiz...
- We introduce a generic scheme ... (http://jmlr.org/papers/volume18/17-748/17-748.pdf)
- We introduce a generic scheme for accelerating ... (main paper, 9 pages, plus appendix, 21 pages)
- We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms ...
- We describe Asaga, an asynchronous parallel version of the incremental gradient algorithm Saga that ...
- Nesterov's accelerated gradient (AG) is a popular technique to optimize objective functions comprising ...
- Majorization-minimization algorithms consist of successively minimizing a sequence ...
- Revision from the January 2015 submission. Major changes: updated literature follow-up and discussion of su...