We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
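For concreteness, here is a minimal sketch of a plain (non-accelerated, non-proximal) SDCA pass for the ridge-regression special case of regularized loss minimization, using the closed-form coordinate update for the squared loss. It only illustrates the dual coordinate-update pattern that the abstract builds on; it is not the paper's accelerated Prox-SDCA procedure, and all function and variable names (sdca_ridge, X, y, lam, epochs) are assumptions made for this example.

```python
# Illustrative sketch only: plain SDCA with closed-form coordinate updates for
# ridge regression, i.e. the smooth special case
#   min_w  (1/n) * sum_i 0.5*(w @ x_i - y_i)**2 + (lam/2) * ||w||^2.
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=20, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)              # dual variables, one per example
    w = np.zeros(d)                  # primal iterate, kept equal to X.T @ alpha / (lam * n)
    sq_norms = (X ** 2).sum(axis=1)  # precomputed ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form maximizer of the dual objective in coordinate i
            # for the squared loss 0.5 * (z - y_i)^2.
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)
    return w

# Tiny usage example on synthetic data (hypothetical).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    print(sdca_ridge(X, y, lam=0.01))
```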
We develop an accelerated randomized proximal coordinate gradient (APCG) method, for solving a broad...
We consider convex-concave saddle point problems with a separable structure and non-strongly convex ...
We develop new stochastic optimization methods that are applicable to a wide range of structured reg...
We introduce a proximal version of the stochastic dual coordinate ascent method and show how to acc...
We introduce a proximal version of the dual coordinate ascent method. We demonstrate how the derived alg...
Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss mini...
Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine lear...
Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization...
We present a sublinear version of the dual coordinate ascent method for solving a group...
The full version of this paper can be found at https://arxiv.org/abs/1502.02268. This journal vol. ent...
We propose a new stochastic dual coordinate ascent technique that can be applied to a wide ...
We present and study a distributed optimization algorithm by employing a stochastic dual coordinate...
This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for...
We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of...
We consider the problem of minimizing the sum of two convex functions: one is smooth and given by a ...
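As a point of reference for the composite setting mentioned above (a smooth term plus a simple nonsmooth term handled through its proximal operator), the following is a minimal, non-accelerated proximal gradient (ISTA) sketch for a Lasso instance, the L1-regularized problem named in the first abstract. The specific problem instance and all names (soft_threshold, ista, A, b, lam, iters) are assumptions made for this illustration, not the accelerated or coordinate-wise methods proposed in these papers.

```python
# Illustrative sketch only: proximal gradient (ISTA) for the composite problem
#   min_x  f(x) + g(x),  with smooth f(x) = 0.5 * ||A x - b||^2 and g(x) = lam * ||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, iters=500):
    n, d = A.shape
    x = np.zeros(d)
    # A safe constant step size 1 / L, where L (the Lipschitz constant of
    # grad f) is the largest eigenvalue of A.T @ A.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # prox step on the nonsmooth part
    return x
```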