We consider Online Convex Optimization (OCO) in the setting where the costs are m-strongly convex and the online learner pays a switching cost for changing decisions between rounds. We show that the recently proposed Online Balanced Descent (OBD) algorithm is constant competitive in this setting, with competitive ratio 3+O(1/m), irrespective of the ambient dimension. Additionally, we show that when the sequence of cost functions is ϵ-smooth, OBD has near-optimal dynamic regret and maintains strong per-round accuracy. We demonstrate the generality of our approach by showing that the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge ...
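The setting described above can be illustrated with a minimal 1-D sketch: each round the learner pays a strongly convex hitting cost plus a quadratic switching cost for moving its decision. The update below is a simple regularized (ROBD-style) step chosen for illustration, not the exact OBD projection from the paper; the function names and the parameter `lam` are our own assumptions.

```python
# Minimal 1-D sketch of the OCO-with-switching-cost setting: hitting cost
# f_t(x) = (m/2)(x - theta_t)^2 plus switching cost (1/2)(x - x_prev)^2.
# The regularized step below is an illustration only, not the paper's
# OBD level-set projection; `lam` is a hypothetical trade-off parameter.

def robd_step(theta, x_prev, m=2.0, lam=1.0):
    """Closed-form minimizer of (m/2)(x - theta)^2 + (lam/2)(x - x_prev)^2."""
    return (m * theta + lam * x_prev) / (m + lam)

def total_cost(thetas, m=2.0, lam=1.0):
    """Hitting + switching cost accumulated by the regularized update."""
    x, cost = 0.0, 0.0
    for theta in thetas:
        x_new = robd_step(theta, x, m, lam)
        cost += 0.5 * m * (x_new - theta) ** 2 + 0.5 * (x_new - x) ** 2
        x = x_new
    return cost
```

On a sequence of identical minimizers, this smoothed update pays less total cost than greedily jumping to each round's minimizer (which incurs the full switching cost up front), which is the basic tension the switching-cost model captures.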
We consider algorithms for 'smoothed online convex optimization' (SOCO) problems, which are a hybrid...
Online Convex Optimization (OCO) is a field in the intersection of game theory, optimization, and ma...
We study Online Convex Optimization in the unbounded setting where neither predictions nor gradient ...
We study Smoothed Online Convex Optimization, a version of online convex optimization where the lear...
We consider algorithms for "smoothed online convex optimization (SOCO)" problems. SOCO is a variant ...
This paper presents competitive algorithms for a novel class of online optimization problems with me...
We study the performance of an online learner under a framework in which it receives partial informa...
We consider algorithms for “smoothed online convex optimization” (SOCO) problems, which are a hybri...
We study online optimization in a setting where an online learner seeks to optimize a per-round hitt...
We consider a natural online optimization problem set on the real line. The state of the online algo...
We examine the problem of smoothed online optimization, where a decision maker must sequentially cho...
We aim to design universal algorithms for online convex optimization, which can handle multiple comm...
Making use of predictions is a crucial, but under-explored, area of online algorithms. This paper st...