We describe a primal-dual framework for the design and analysis of online strongly convex optimization algorithms. Our framework yields the tightest known logarithmic regret bounds for Follow-The-Leader and for the gradient descent algorithm proposed in Hazan et al. [2006]. We then show that one can interpolate between these two extreme cases. In particular, we derive a new algorithm that shares the computational simplicity of gradient descent but achieves lower regret in many practical situations. Finally, we further extend our framework to generalized strongly convex functions.
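For context, a minimal sketch of the kind of gradient-descent baseline the abstract refers to: online gradient descent with the decreasing step size eta_t = 1/(lambda * t), which attains O(log T) regret for lambda-strongly convex losses in the spirit of Hazan et al. [2006]. This is not the paper's new interpolated algorithm; the function names, arguments, and toy quadratic losses below are illustrative assumptions, not taken from the source.

```python
import numpy as np

def online_gradient_descent(grads, lam, x0, proj=lambda x: x):
    """Illustrative online gradient descent for lam-strongly convex losses.

    grads: iterable of gradient callables g_t(x), revealed one per round.
    lam:   strong-convexity parameter of the losses.
    x0:    initial point.
    proj:  Euclidean projection onto the feasible set (identity by default).
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t, grad_t in enumerate(grads, start=1):
        eta_t = 1.0 / (lam * t)          # step size shrinks with the round index
        x = proj(x - eta_t * grad_t(x))  # gradient step, then projection
        iterates.append(x.copy())
    return iterates

# Toy usage with quadratic losses f_t(x) = 0.5 * lam * ||x - z_t||^2 (hypothetical example)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lam = 1.0
    targets = rng.normal(size=(100, 3))
    grads = [lambda x, z=z: lam * (x - z) for z in targets]
    xs = online_gradient_descent(grads, lam, x0=np.zeros(3))
    print(xs[-1])  # for these losses the final iterate equals the running mean of the z_t
```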
We consider algorithms for 'smoothed online convex optimization' (SOCO) problems, which are a hybrid...
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
We study a class of online convex optimization problems with long-term budget ...
In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., choos...
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation ...
Online convex optimization (OCO) is a powerful algorithmic framework that has extensive applications...
Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In...
We aim to design universal algorithms for online convex optimization, which can handle multiple comm...
This paper treats the task of designing optimization algorithms as an optimal control problem. Using...