Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an ε-approximate solution is proportional to 1/ε². Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs that runs in 1/ε iterations. The latter algorithm requires solving a convex quadratic program at every iteration, an optimization subroutine that dominates its theoretical running time. We give an algorithm for convex programs with strictly convex constraints that runs in time proportional to 1/ε. The algorithm does not require solving any quadratic program; it uses only gradient steps and elementary operations. ...
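As a concrete illustration of the kind of method this abstract describes, the sketch below combines multiplicative weights over the constraints with plain gradient steps to tackle a convex feasibility problem f_i(x) ≤ 0 for all i. This is a minimal sketch of the generic Lagrangian-relaxation template that this literature builds on, not the paper's actual algorithm; the function name, step sizes, and the toy two-ball instance are illustrative assumptions.

```python
import numpy as np

def mw_gradient_feasibility(fs, grads, x0, T=2000, eta_x=0.05, eta_w=0.1):
    """Generic Lagrangian-relaxation sketch for the feasibility problem
    f_i(x) <= 0 for all i: gradient steps on the weighted constraint
    sum_i w_i f_i, with multiplicative weight updates over constraints.
    fs:    list of callables f_i(x) -> float
    grads: list of callables returning the gradient of f_i at x
    """
    x = x0.astype(float).copy()
    w = np.ones(len(fs)) / len(fs)       # distribution over constraints
    avg = np.zeros_like(x)
    for _ in range(T):
        # gradient step on the current weighted (relaxed) constraint
        g = sum(wi * gi(x) for wi, gi in zip(w, grads))
        x = x - eta_x * g
        # reweight: violated constraints gain weight, satisfied ones lose it
        viol = np.array([f(x) for f in fs])
        w = w * np.exp(eta_w * viol)
        w = w / w.sum()
        avg += x
    return avg / T                       # averaged iterate

# toy usage: find a point in the intersection of two balls of radius sqrt(1.5)
f1 = lambda x: np.dot(x - 1.0, x - 1.0) - 1.5   # ball around (1, 1)
f2 = lambda x: np.dot(x, x) - 1.5               # ball around the origin
g1 = lambda x: 2.0 * (x - 1.0)
g2 = lambda x: 2.0 * x
x = mw_gradient_feasibility([f1, f2], [g1, g2], np.zeros(2))
print(x, f1(x), f2(x))   # both constraint values should be near or below 0
```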
Thesis (Ph.D.)--University of Washington, 2017. Convex optimization is more popular than ever, with ex...
In this paper we shed new light on convex programs and distributed algorithms for Fisher markets wit...
The rapid growth in data availability has led to modern large scale convex optimization problems ...
This paper describes a general framework for converting online game playing algorithms into constrai...
The growing prevalence of networked systems with local sensing and computational capability will res...
This thesis deals with a class of Lagrangian relaxation based algorithms developed in the computer s...
We discuss the possibility of accelerating the solution of extremely large-scale well st...
Linear optimization is many times algorithmically simpler than non-linear convex optimizat...
We provide several applications of Optimistic Mirror Descent, an online learning algorithm based on ...
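For readers unfamiliar with Optimistic Mirror Descent, the sketch below shows its simplest Euclidean instance (optimistic gradient descent), where the previous gradient serves as the prediction of the next one. The function name, step size, and toy objective are illustrative assumptions; the paper's setting, with general mirror maps and predictable gradient sequences, is broader.

```python
import numpy as np

def optimistic_gd(grad, x0, T=500, eta=0.1):
    """Euclidean Optimistic Mirror Descent (optimistic gradient descent),
    a minimal unconstrained sketch under stated assumptions."""
    y = x0.astype(float).copy()   # underlying "lazy" iterate
    m = np.zeros_like(y)          # prediction of the next gradient
    for _ in range(T):
        x = y - eta * m           # optimistic step using the predicted gradient
        g = grad(x)               # observe the true gradient at the played point
        y = y - eta * g           # standard mirror-descent update
        m = g                     # simplest prediction: reuse the last gradient
    return x

# toy usage: minimize the smooth quadratic (x - 3)^2
print(optimistic_gd(lambda x: 2.0 * (x - 3.0), np.zeros(1)))  # -> approx [3.]
```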
2018-02-07. In this thesis, we develop new Lagrangian methods with fast convergence for constrained co...
Linear programming (LP) and semidefinite programming (SDP) are among the most important tools in Ope...
Online learning and convex optimization algorithms have become essential tools for solving problems ...
In this dissertation we consider quantum algorithms for convex optimization. We start by considering...