We propose two novel conditional gradient-based methods for solving structured stochastic convex optimization problems with a large number of linear constraints. Instances of this template arise naturally from SDP relaxations of combinatorial problems, which involve a number of constraints that is polynomial in the problem dimension. The key feature of our framework is that only a subset of the constraints is processed at each iteration, yielding a computational advantage over prior works that require a full pass over the constraints at every iteration. Our algorithms rely on variance reduction and smoothing, used in conjunction with conditional gradient steps, and come with rigorous convergence guarantees. Preliminary numerical experiments are provided for...
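To make the mechanics concrete, below is a minimal Python sketch of the kind of update the abstract describes: a stochastic conditional gradient (Frank-Wolfe) loop that handles linear constraints Ax <= b through quadratic-penalty smoothing, samples only a mini-batch of constraints per iteration, and damps gradient noise with a momentum-averaged estimator (a simple variance-reduction device in the style of stochastic Frank-Wolfe). Everything here is an illustrative assumption rather than the paper's actual method: the l1-ball domain and its LMO, the penalty smoothing with parameter beta, the step-size schedules, and the names stochastic_cgm, smoothed_penalty_grad, and lmo_l1_ball are hypothetical stand-ins; in the SDP setting the LMO would instead be an extreme-eigenvector computation over a spectrahedron.

import numpy as np

def smoothed_penalty_grad(x, A, b, idx, beta):
    # Gradient of the smoothed penalty (1/(2*beta)) * sum_i max(0, a_i^T x - b_i)^2,
    # estimated from the sampled constraint rows idx only (hypothetical smoothing choice).
    Ai, bi = A[idx], b[idx]
    viol = np.maximum(Ai @ x - bi, 0.0)
    # Rescale by m/|idx| so the mini-batch gradient is unbiased for the full sum.
    return (len(b) / len(idx)) * (Ai.T @ viol) / beta

def lmo_l1_ball(g, radius=1.0):
    # Linear minimization oracle over the l1 ball: argmin_{||s||_1 <= radius} <g, s>.
    s = np.zeros_like(g)
    j = np.argmax(np.abs(g))
    s[j] = -radius * np.sign(g[j])
    return s

def stochastic_cgm(grad_f, A, b, x0, T=2000, batch=32, beta=0.1, seed=0):
    # Sketch: conditional gradient steps on f(x) plus the smoothed penalty, touching
    # only `batch` of the m constraints per iteration; d is a momentum-averaged
    # gradient estimator that reduces the variance of the subsampled gradients.
    rng = np.random.default_rng(seed)
    x, d, m = x0.copy(), np.zeros_like(x0), len(b)
    for t in range(1, T + 1):
        gamma = 2.0 / (t + 1)                # standard Frank-Wolfe step size
        rho = 4.0 / (t + 8) ** (2.0 / 3.0)   # averaging weight (generic schedule)
        idx = rng.choice(m, size=min(batch, m), replace=False)
        g = grad_f(x) + smoothed_penalty_grad(x, A, b, idx, beta)
        d = (1.0 - rho) * d + rho * g        # averaged gradient estimate
        x = x + gamma * (lmo_l1_ball(d) - x) # conditional gradient step
    return x

# Hypothetical usage: a least-squares objective with 200 sampled linear constraints.
rng = np.random.default_rng(1)
M, y = rng.standard_normal((50, 20)), rng.standard_normal(50)
A, b = rng.standard_normal((200, 20)), np.ones(200)
x_hat = stochastic_cgm(lambda x: M.T @ (M @ x - y), A, b, np.zeros(20))

The m/batch rescaling keeps the subsampled penalty gradient unbiased for the full-constraint gradient, which is what lets the per-iteration cost scale with the batch size rather than with the total number of constraints.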