We consider the online convex optimization problem. In the setting of arbitrary sequences and a finite set of parameters, we establish a new fast-rate quantile regret bound. We then investigate optimization over the L1-ball by discretizing the parameter space. Our algorithm is projection-free, and we propose an efficient solution by restarting the algorithm on adaptive discretization grids. In the adversarial setting, we develop an algorithm that achieves several rates of convergence with different dependencies on the sparsity of the objective. In the i.i.d. setting, we establish new risk bounds that are adaptive to the sparsity of the problem and to the regularity of the risk (ranging from a rate 1/√T for general convex risk to 1/T for strongly convex risk).
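As a concrete illustration of the discretization idea in the entry above, the following minimal Python sketch runs an exponentially weighted average forecaster over a finite grid of the L1-ball; any convex combination of grid points stays inside the ball, so no projection step is needed. The grid (just the ball's vertices plus the origin), the learning rate eta, and the quadratic losses are illustrative assumptions; the entry's actual algorithm and its adaptive restart scheme are not reproduced here.

```python
import numpy as np

def l1_ball_grid(d, radius=1.0):
    """Toy discretization of the L1-ball: the origin and the vertices
    {+/- radius * e_i}. (Assumption -- the actual adaptive grids are finer.)"""
    pts = [np.zeros(d)]
    for i in range(d):
        for s in (+1.0, -1.0):
            p = np.zeros(d)
            p[i] = s * radius
            pts.append(p)
    return np.array(pts)

def exp_weights_oco(losses, grid, eta=0.5):
    """Exponentially weighted average forecaster over the grid points.

    losses: iterable of convex loss functions f_t: R^d -> R.
    Plays convex combinations of grid points, hence always inside the
    L1-ball -- no projection step is needed.
    """
    cum = np.zeros(len(grid))        # cumulative loss of each grid point
    plays = []
    for f in losses:
        w = np.exp(-eta * (cum - cum.min()))
        w /= w.sum()
        x_t = w @ grid               # play the weighted average of the grid
        plays.append(x_t)
        cum += np.array([f(p) for p in grid])
    return plays

# Usage: quadratic losses with drifting minimizers, d = 3 (hypothetical data).
rng = np.random.default_rng(0)
targets = [0.1 * rng.normal(size=3) for _ in range(50)]
losses = [lambda x, c=c: np.sum((x - c) ** 2) for c in targets]
plays = exp_weights_oco(losses, l1_ball_grid(3))
```

By Jensen's inequality, the convex loss of the weighted average is at most the weighted average of the grid-point losses, so the regret of these plays against the best grid point is controlled by the standard exponential-weights analysis.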
Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In...
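For reference, the regret that such sublinear bounds control is the standard static notion: the gap between the learner's cumulative loss and that of the best fixed point in hindsight over the feasible set \mathcal{K},

```latex
R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x).
```

A bound is sublinear when R_T = o(T), so the average regret R_T / T vanishes as T grows.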
We present new efficient \textit{projection-free} algorithms for online convex optimization (OCO), w...
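The entry above is truncated before it describes its new algorithms, so as background only, here is a sketch of the classical projection-free template for OCO (an online Frank-Wolfe step in the style of Hazan and Kale): each round makes a single linear-optimization-oracle call over the feasible set instead of a projection. The L1-ball feasible set, the surrogate objective, and the step-size schedule are illustrative assumptions, not the entry's method.

```python
import numpy as np

def l1_linear_oracle(g, radius=1.0):
    """argmin of <g, v> over the L1-ball: a signed, scaled basis vector.
    For many sets this linear oracle is far cheaper than a projection."""
    i = int(np.argmax(np.abs(g)))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def online_frank_wolfe(loss_grad, d, T, radius=1.0, eta=0.1):
    """Sketch of online Frank-Wolfe: one linear-oracle call per round and a
    convex-combination update, so iterates stay feasible with no projection.

    loss_grad(t, x): gradient of the round-t loss at x (assumed interface).
    """
    x = np.zeros(d)
    g_sum = np.zeros(d)                    # running sum of observed gradients
    plays = []
    for t in range(1, T + 1):
        plays.append(x.copy())
        g_sum += loss_grad(t, x)
        # Gradient of the smoothed surrogate F_t(x) = eta*<g_sum, x> + ||x||^2.
        v = l1_linear_oracle(eta * g_sum + 2.0 * x, radius)
        sigma = min(1.0, 2.0 / (t + 1))    # vanishing step size
        x = (1.0 - sigma) * x + sigma * v  # convex step keeps feasibility
    return plays

# Usage with linear losses f_t(x) = <c_t, x> on hypothetical data:
rng = np.random.default_rng(1)
cs = rng.normal(size=(100, 5))
plays = online_frank_wolfe(lambda t, x: cs[t - 1], d=5, T=100)
```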
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation ...
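The variation referred to in the entry above is typically the path length of the comparator sequence u_1, ..., u_T. With the dynamic regret and path length defined as

```latex
R_T^{\mathrm{dyn}} \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\qquad
P_T \;=\; \sum_{t=2}^{T} \|u_t - u_{t-1}\|,
```

typical bounds for general convex losses scale as O(\sqrt{T(1 + P_T)}).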
Stochastic and adversarial data are two widely studied settings in online learning. But many optimiz...
Tracking time-varying sparse signals is a recent problem with widespread applications. Techniques de...
We consider the problem of online linear regression on arbitrary deterministic sequences when the am...
Some of the most compelling applications of online convex optimization, including online prediction...
The framework of online learning with memory naturally captures learning problems with temporal effe...
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
We consider online convex optimization in the bandit setting. The decision maker does not know the ...
We aim to design universal algorithms for online convex optimization, which can handle multiple comm...