We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
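The interpolation the abstract describes can be illustrated with plain projected online gradient descent. The sketch below is not the paper's exact Adaptive Online Gradient Descent; it simply contrasts the two step-size regimes on a toy stream: η_t = 1/(H·t) exploits H-strong convexity and yields O(log T) regret, while the generic η_t ∝ 1/√t schedule would only give the O(√T) guarantee. All function and variable names are illustrative.

```python
import numpy as np

def ogd(grad, eta, T, dim, radius=1.0):
    """Projected online gradient descent on the Euclidean ball of the given radius.

    grad(t, x) returns the (sub)gradient of the loss f_t at x, revealed
    only after x is played; eta(t) is the step-size schedule.
    """
    x = np.zeros(dim)
    xs = []
    for t in range(1, T + 1):
        xs.append(x.copy())
        x = x - eta(t) * grad(t, x)
        n = np.linalg.norm(x)
        if n > radius:                 # Euclidean projection back onto the ball
            x *= radius / n
    return np.array(xs)

# Toy stream of 1-strongly-convex losses f_t(x) = 0.5 * ||x - z_t||^2.
rng = np.random.default_rng(0)
zs = rng.uniform(-0.5, 0.5, size=(1000, 2))

# eta_t = 1/(H t) with H = 1 exploits strong convexity (O(log T) regret);
# eta_t = D/(G sqrt(t)) is the generic O(sqrt(T)) schedule for convex losses.
xs = ogd(lambda t, x: x - zs[t - 1], lambda t: 1.0 / t, T=1000, dim=2)

# Regret against the best fixed point in hindsight (the mean of the z_t).
losses = 0.5 * np.sum((xs - zs) ** 2, axis=1)
best = 0.5 * np.sum((zs.mean(axis=0) - zs) ** 2)
regret = losses.sum() - best
```

With the 1/t schedule on these quadratics the iterate is the running mean of the observed z_t and the cumulative regret stays logarithmic in T; switching to a 1/√t schedule degrades this to the √T rate, which is the gap the adaptive algorithm bridges.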
We present a unified, black-box-style method for developing and analyzing online convex optimization...
Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In...
We study Online Convex Optimization in the unbounded setting where neither predictions nor gradient ...
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation ...
In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., choos...
We aim to design universal algorithms for online convex optimization, which can handle multiple comm...
We describe a primal-dual framework for the design and analysis of online strongly convex optimizati...
We introduce an online convex optimization algorithm which utilizes projected subgradient descent wi...