Online optimization has emerged as a powerful tool in large-scale optimization. In this paper, we introduce efficient online optimization algorithms based on the alternating direction method (ADM), which can solve online convex optimization under linear constraints where the objective may be non-smooth. We introduce new proof techniques for ADM in the batch setting, which yield an O(1/T) convergence rate for ADM and form the basis for regret analysis in the online setting. We consider two scenarios in the online setting, depending on whether an additional Bregman divergence is needed. In both scenarios, we establish regret bounds for the objective function as well as for constraint violation, for general and strongly convex functions....
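To make the update structure concrete, below is a minimal sketch of an ADM-style online update of the kind this abstract describes. It is not the paper's algorithm verbatim: it assumes a quadratic per-round loss f_t(x) = 0.5*||x - d_t||^2, an l1 term on the splitting variable, the consensus constraint x - z = 0, and a Euclidean surrogate for the optional Bregman divergence, so every subproblem has a closed form. All parameter values and the synthetic data are illustrative.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
T, dim = 300, 5
data = rng.normal(size=(T, dim))   # d_t, revealed one round at a time (illustrative data)
lam, rho, eta = 0.1, 1.0, 1.0      # l1 weight, penalty, Bregman weight (assumed values)

x = np.zeros(dim)                  # decision variable, tied to z by the constraint x - z = 0
z = np.zeros(dim)                  # splitting variable carrying the l1 term
y = np.zeros(dim)                  # dual variable for the linear constraint

for t, d in enumerate(data, start=1):
    x_prev = x
    # x-step: argmin_x 0.5*||x - d||^2 + <y, x - z> + (rho/2)*||x - z||^2
    #                  + (eta/2)*||x - x_prev||^2, closed-form since the loss is quadratic.
    x = (d - y + rho * z + eta * x_prev) / (1.0 + rho + eta)
    # z-step: argmin_z lam*||z||_1 + <y, x - z> + (rho/2)*||x - z||^2  ->  soft-thresholding.
    z = soft_threshold(x + y / rho, lam / rho)
    # Dual ascent on the constraint residual x - z = 0.
    y = y + rho * (x - z)

Setting eta to zero corresponds to the scenario without the extra Bregman term; the general setting in the abstract replaces x - z = 0 with an arbitrary linear constraint, at the cost of losing these closed-form steps.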
We aim to design universal algorithms for online convex optimization, which can handle multiple comm...
Being one of the most effective methods, Alternating Direction Method (ADM) has been extensively stu...
In this research we study some online learning algorithms in the online convex optimization framewor...
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
Stochastic and adversarial data are two widely studied settings in online learning. But many optimiz...
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation ...
We describe a primal-dual framework for the design and analysis of online strongly convex optimizati...
We develop new stochastic optimization methods that are applicable to a wide range of structured reg...
The growing prevalence of networked systems with local sensing and computational capability will res...
We present a unified, black-box-style method for developing and analyzing online convex optimization...
We study online convex optimization with constraints consisting of multiple functional constraints a...
Recently, online optimization has attracted much attention in big data science, since it is an eff...
In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., choos...
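As a companion to these snippets, here is a minimal sketch of the online convex optimization protocol they all refer to, using online gradient descent on a toy quadratic loss; the setup, step size, and data are illustrative and not taken from any of the papers above. The learner commits to x_t, the loss f_t is then revealed, and regret is measured against the best fixed decision in hindsight.

import numpy as np

# Toy adversary: per-round loss f_t(x) = 0.5*||x - d_t||^2 with d_t revealed after the decision.
rng = np.random.default_rng(0)
T, dim = 200, 3
targets = rng.normal(size=(T, dim))

x = np.zeros(dim)
decisions = []
for t, d in enumerate(targets, start=1):
    decisions.append(x.copy())        # commit to x_t before seeing d_t
    grad = x - d                      # gradient of 0.5*||x - d||^2 at x_t
    x = x - grad / np.sqrt(t)         # online gradient descent with step size 1/sqrt(t)

losses = 0.5 * np.sum((np.array(decisions) - targets) ** 2, axis=1)
best_fixed = targets.mean(axis=0)     # best fixed decision in hindsight for quadratic losses
best_losses = 0.5 * np.sum((best_fixed - targets) ** 2, axis=1)
regret = losses.sum() - best_losses.sum()
print(f"cumulative regret over {T} rounds: {regret:.2f}")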