Abstract

We consider the complexity of learning classes of smooth functions formed by bounding different norms of a function's derivative. The learning model is the generalization of the mistake-bound model to continuous-valued functions. Suppose F_q is the set of all absolutely continuous functions f from [0,1] to R such that ||f′||_q ⩽ 1, and opt(F_q, m) is the best possible bound on the worst-case sum of absolute prediction errors over sequences of m trials. We show that for all q ⩾ 2, opt(F_q, m) = Θ(log m), and that opt(F_2, m) ⩽ (log_2 m)/2 + O(1), matching a known lower bound of (log_2 m)/2 − O(1) to within an additive constant.
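The learning model referred to above is the usual trial-by-trial protocol: on trial t the learner is shown a point x_t in [0,1], predicts a value, then observes f(x_t) and is charged the absolute prediction error; opt(F_q, m) is the smallest worst-case total charge achievable over m trials. As a minimal illustration of that protocol (not the algorithm analyzed in the paper), the Python sketch below runs the trial loop with a naive nearest-previously-seen-point predictor, a natural baseline when ||f′||_∞ ⩽ 1 (i.e. f is 1-Lipschitz). The function names, default prediction, and example target are our own assumptions for the sketch.

```python
# Sketch of the online protocol: on each trial the learner sees x_t, predicts,
# then observes f(x_t) and is charged |prediction - f(x_t)|.  The nearest-point
# rule is only an illustrative baseline for 1-Lipschitz targets; it is not the
# paper's algorithm.

def nearest_point_predictor(history, x):
    """Predict f(x) by the observed value at the closest previously seen point."""
    if not history:
        return 0.5  # arbitrary default before any feedback has been seen
    _, y_prev = min(history, key=lambda pair: abs(pair[0] - x))
    return y_prev

def run_trials(xs, f, predictor):
    """Run the protocol on the trial sequence xs against the target f.

    Returns the total absolute prediction error, the quantity whose
    worst-case optimum the abstract denotes opt(F_q, m).
    """
    history = []
    total_error = 0.0
    for x in xs:
        prediction = predictor(history, x)
        y = f(x)                        # feedback is revealed after predicting
        total_error += abs(prediction - y)
        history.append((x, y))
    return total_error

if __name__ == "__main__":
    f = lambda x: abs(x - 0.3)                      # a 1-Lipschitz target on [0,1]
    xs = [(k * 0.618) % 1.0 for k in range(100)]    # an arbitrary trial sequence
    print(run_trials(xs, f, nearest_point_predictor))
```

Under this protocol, the worst any trial can cost the nearest-point rule on a 1-Lipschitz target is the distance from x_t to the closest earlier point, which is how the logarithmic total-error bounds in the abstract arise.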