In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
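To make the update concrete, here is a minimal sketch (not the authors' code) of a SAGA-style iteration for a composite objective (1/n) Σ_i f_i(x) + h(x): keep a table of the last gradient computed for each f_i, form the variance-reduced estimate f_j'(x) − table[j] + mean(table), and apply the proximal operator of the regulariser. The callables grad_fi and prox_h and the toy least-squares/L1 example are illustrative assumptions, not an API from the paper; the 1/(3L) step size matches the paper's analysis in the non-strongly-convex setting.

```python
import numpy as np


def saga(grad_fi, prox_h, x0, n, step, n_epochs=20, seed=0):
    """Minimal SAGA sketch for min_x (1/n) sum_i f_i(x) + h(x).

    grad_fi(i, x): gradient of the i-th smooth term f_i at x.
    prox_h(v, step): proximal operator of step * h applied to v.
    Both callables are illustrative stand-ins, not from the paper.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    # Table of the most recently evaluated gradient for each f_i, plus its mean.
    table = np.stack([grad_fi(i, x) for i in range(n)])
    avg = table.mean(axis=0)
    for _ in range(n_epochs * n):
        j = int(rng.integers(n))
        g_new = grad_fi(j, x)
        # Variance-reduced, unbiased gradient estimate, then a proximal step.
        x = prox_h(x - step * (g_new - table[j] + avg), step)
        # Update the stored gradient and the running mean in O(d) time.
        avg = avg + (g_new - table[j]) / n
        table[j] = g_new
    return x


# Toy usage (hypothetical example): least-squares terms f_i(x) = 0.5*(a_i@x - b_i)**2
# with an L1 regulariser h(x) = lam*||x||_1 (soft-thresholding prox). The step
# 1/(3L) uses L = max_i ||a_i||^2, the Lipschitz constant of the individual gradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10)
lam = 0.1
grad_fi = lambda i, x: (A[i] @ x - b[i]) * A[i]
prox_l1 = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)
L = np.max(np.sum(A ** 2, axis=1))
x_hat = saga(grad_fi, prox_l1, np.zeros(10), n=200, step=1.0 / (3.0 * L))
```

Because the running mean of the stored gradients is maintained incrementally, each iteration touches only one data point and costs O(d), which is what gives the method its incremental-gradient character.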