We present a unified, black-box-style method for developing and analyzing online convex optimization (OCO) algorithms for full-information online learning in delayed-feedback environments. Our new, simplified analysis enables us to substantially improve upon previous work and to solve a number of open problems from the literature. Specifically, we develop and analyze asynchronous AdaGrad-style algorithms from the Follow-the-Regularized-Leader (FTRL) and Mirror-Descent families that, unlike previous works, can handle projections and adapt to both the gradients and the delays, without relying on either strong convexity or smoothness of the objective function or on data sparsity. Our unified framework builds on a natural reduction from delayed-feedback...
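The abstract's central construction, a black-box reduction from delayed-feedback to standard (non-delayed) online learning, is simple enough to sketch. Below is a minimal, hypothetical Python illustration: a wrapper that plays a base learner's current point every round and feeds it gradients only as they arrive, here wrapped around a generic diagonal-AdaGrad base with projection. The class names, the choice of base learner, and the fixed-delay simulation are all assumptions made for illustration; the paper's actual algorithms and analysis are more refined (e.g., adapting the step sizes to the delays themselves).

```python
import numpy as np


class AdaGradLearner:
    """Diagonal AdaGrad-style learner with projection onto an L2 ball.

    An illustrative stand-in for the (non-delayed) base algorithms the
    abstract refers to; not the paper's exact method."""

    def __init__(self, dim, radius=1.0, eta=1.0, eps=1e-8):
        self.x = np.zeros(dim)     # current iterate
        self.g_sq = np.zeros(dim)  # running sum of squared gradients
        self.radius = radius       # feasible set: L2 ball of this radius
        self.eta = eta
        self.eps = eps

    def predict(self):
        return self.x.copy()

    def update(self, grad):
        # Per-coordinate adaptive step, then project back onto the ball.
        self.g_sq += grad ** 2
        self.x -= self.eta * grad / (np.sqrt(self.g_sq) + self.eps)
        norm = np.linalg.norm(self.x)
        if norm > self.radius:
            self.x *= self.radius / norm


class DelayedFeedbackWrapper:
    """Black-box reduction: play the base learner's current point each round
    and forward gradients to the base learner only once they arrive."""

    def __init__(self, base):
        self.base = base

    def play(self):
        return self.base.predict()

    def receive(self, grads):
        # `grads` is whatever feedback arrived this round: possibly empty,
        # possibly several earlier rounds' gradients at once.
        for g in grads:
            self.base.update(g)


# Toy run: quadratic losses f_t(x) = 0.5 * ||x - z_t||^2, feedback delayed
# by a fixed d rounds (real delays may be arbitrary and unknown in advance).
rng = np.random.default_rng(0)
d, T, dim = 3, 100, 5
learner = DelayedFeedbackWrapper(AdaGradLearner(dim))
pending = []  # (arrival_round, gradient) pairs still in flight
for t in range(T):
    x = learner.play()
    z = rng.normal(size=dim)
    pending.append((t + d, x - z))  # grad of 0.5*||x - z||^2 at x is x - z
    arrived = [g for (s, g) in pending if s <= t]
    pending = [(s, g) for (s, g) in pending if s > t]
    learner.receive(arrived)
```

The appeal of such a reduction is that a regret bound for the wrapper can be obtained from the base learner's non-delayed regret bound plus a delay-dependent penalty, which is the kind of unified, black-box analysis the abstract describes.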
Online convex optimization (OCO) is a powerful algorithmic framework that has extensive applications...
We study Online Convex Optimization in the unbounded setting where neither predictions nor gradient ...
We analyze new online gradient descent algorithms for distributed systems with large delays between ...
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In...
Tracking time-varying sparse signals is a recent problem with widespread applications. Techniques de...
We present new efficient projection-free algorithms for online convex optimization (OCO), w...
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation ...
The framework of online learning with memory naturally captures learning problems with temporal effe...