In most on-line learning research the total on-line loss of the algorithm is compared to the total loss of the best off-line predictor u from a comparison class of predictors. We call such bounds static bounds. The interesting feature of these bounds is that they hold for an arbitrary sequence of examples. Recently some work has been done where the predictor u_t at each trial t is allowed to change with time, and the total on-line loss of the algorithm is compared to the sum of the losses of u_t at each trial plus the total "cost" for shifting to successive predictors. This models situations in which the examples change over time, and different predictors from the comparison class are best for different segments of the sequence of exam...
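The shifting comparison described above can be made concrete with a small sketch. The function below (hypothetical names, not from any of the cited papers) computes the gap between an algorithm's cumulative loss and the loss of a shifting sequence of comparators u_t, charging a fixed per-shift cost whenever u_t differs from u_{t-1} — the quantity the shifting bounds above aim to control:

```python
def shifting_regret(alg_losses, comparator_losses, comparators, shift_cost=1.0):
    """Regret against a shifting comparator sequence.

    alg_losses:        per-trial losses of the on-line algorithm
    comparator_losses: per-trial losses of the comparators u_t
    comparators:       the comparator chosen at each trial (u_1, ..., u_T)
    shift_cost:        fixed cost charged for each change u_t != u_{t-1}
    """
    # Count the trials at which the comparator sequence shifts.
    shifts = sum(1 for a, b in zip(comparators, comparators[1:]) if a != b)
    # The shifting comparator's total budget: its own losses plus shift costs.
    comparator_total = sum(comparator_losses) + shift_cost * shifts
    return sum(alg_losses) - comparator_total
```

With a static comparator (no shifts) this reduces to the usual static bound's regret; each shift buys the comparator a chance to track a new segment at the price of `shift_cost`.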
Abstract. Concept drift means that the concept about which data is obtained may shift from time to tim...
Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In...
We consider the problem of sequential decision making under uncertainty in which the loss caused by ...
Abstract. Foster and Vovk proved relative loss bounds for linear regression where the total loss of ...
In this paper we show how to extract a hypothesis with small risk from the ensemble of hypotheses ge...
Abstract. We study online learning algorithms that predict by combining the predictions of several ...
We generalize the recent worst-case loss bounds for on-line algorithms where the additional loss of ...
Abstract. We study on-line learning in the linear regression framework. Most of the performance bounds...
In this paper, we examine on-line learning problems in which the target concept is allowed to change...
Plan for today: Last time we looked at the Winnow algorithm, which has a very nice mistake-bound for...
We consider the problem of online prediction in changing environments. In this framework the perform...
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal i...
Shifting bounds for on-line classification algorithms ensure good performance on any sequence of exa...
A burgeoning paradigm in algorithm design is the field of algorithms with predictions, in which algo...
This thesis is devoted to on-line learning. An on-line learning algorithm receives elements of a seq...