We study a family of first-order methods with momentum based on mirror descent for online convex optimization, which we dub online mirror descent with momentum (OMDM). Our algorithms include as special cases gradient descent and exponential weights update with momentum. We provide a new and simple analysis of momentum-based methods in a stochastic setting that yields a regret bound that decreases as momentum increases. This immediately establishes that momentum can help in the convergence of stochastic subgradient descent in convex nonsmooth optimization. We showcase the robustness of our algorithm by also providing an analysis in an adversarial setting that gives the first non-trivial regret bounds for OMDM. Our work aims to provide a bett...
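A minimal sketch of one natural instantiation of OMD with momentum, assuming the momentum term is an exponential average of past subgradients that is fed into the mirror step; the Euclidean mirror map then recovers gradient descent with momentum and the negative-entropy mirror map recovers an exponential-weights update with momentum. The exact OMDM update in the abstract above may differ; `grad_fn`, `eta`, and `beta` are illustrative names, not from the source.

```python
import numpy as np

def omd_momentum_euclidean(grad_fn, x0, eta=0.1, beta=0.9, T=100):
    """Gradient descent with momentum, viewed as Euclidean mirror descent.

    Hypothetical sketch: m_t is an exponential average of past subgradients,
    and the squared-Euclidean mirror step reduces to a plain gradient step on m_t.
    """
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for t in range(T):
        g = grad_fn(x, t)                 # subgradient of the round-t loss at x
        m = beta * m + (1.0 - beta) * g   # momentum / averaged subgradient
        x = x - eta * m                   # Euclidean mirror step
    return x

def omd_momentum_exp_weights(grad_fn, d, eta=0.1, beta=0.9, T=100):
    """Exponential-weights update with momentum on the probability simplex.

    With the negative-entropy mirror map, the mirror step becomes a
    multiplicative-weights update driven by the momentum term m_t.
    """
    p = np.full(d, 1.0 / d)               # start from the uniform distribution
    m = np.zeros(d)
    for t in range(T):
        g = grad_fn(p, t)
        m = beta * m + (1.0 - beta) * g
        p = p * np.exp(-eta * m)          # entropic mirror step
        p /= p.sum()                      # renormalise onto the simplex
    return p
```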
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
Online optimization has emerged as a powerful tool in large-scale optimization. In this paper, we int...
We provide several applications of Optimistic Mirror Descent, an online learning algorithm based on ...
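A minimal Euclidean sketch of the optimistic mirror descent update, assuming the common choice of hint M_t = g_{t-1} (the previously observed gradient); the paper above works with general hints and mirror maps, so this is only an illustrative special case, and `grad_fn` is a hypothetical gradient oracle.

```python
import numpy as np

def optimistic_gradient_descent(grad_fn, x0, eta=0.1, T=100):
    """Euclidean instance of Optimistic Mirror Descent (optimistic gradient descent)."""
    x_hat = np.asarray(x0, dtype=float)   # secondary ("lazy") iterate
    hint = np.zeros_like(x_hat)           # M_t: guess of the upcoming gradient
    for t in range(T):
        x = x_hat - eta * hint            # play the optimistic iterate
        g = grad_fn(x, t)                 # observe the true gradient at x
        x_hat = x_hat - eta * g           # update the secondary iterate
        hint = g                          # reuse g_t as next round's hint
    return x
```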
The article examines in some detail the convergence rate and mean-square-error performance of moment...
We present a simple unified analysis of adaptive Mirror Descent (MD) and Follow-the-Regularized-Leader...
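For concreteness, a sketch of one standard adaptive MD/FTRL instance, assuming an unconstrained domain and an AdaGrad-style per-coordinate quadratic regularizer; this is not necessarily the specific scheme analysed in the abstract above, and the names `grad_fn` and `eta` are illustrative.

```python
import numpy as np

def adagrad_ftrl(grad_fn, d, eta=1.0, T=100, eps=1e-8):
    """Unconstrained FTRL with an AdaGrad-style adaptive regularizer.

    x_{t+1} minimises the sum of linearised losses plus a per-coordinate
    quadratic whose curvature grows with the accumulated squared gradients.
    """
    grad_sum = np.zeros(d)        # sum of observed gradients
    sq_sum = np.full(d, eps)      # accumulated squared gradients
    x = np.zeros(d)
    for t in range(T):
        g = grad_fn(x, t)
        grad_sum += g
        sq_sum += g ** 2
        # closed-form minimiser of <grad_sum, x> + (1/(2*eta)) * sum_i sqrt(sq_sum_i) * x_i^2
        x = -eta * grad_sum / np.sqrt(sq_sum)
    return x
```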
First-order methods have gained substantial interest over the past two decades because of their superi...
Recently, Stochastic Gradient Descent (SGD) and its variants have become the dominant methods in the...
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation ...
Stochastic and adversarial data are two widely studied settings in online learning. But many optimiz...
In Chapter I, we present the online linear optimization problem and study Mirror Descent strategies....
This dissertation presents several contributions at the interface of methods for convex optimization...
We develop a modified online mirror descent framework that is suitable for building adaptive and par...