Abstract. We present a new analysis of the problem of learning with drifting distributions in the batch setting using the notion of discrepancy. We prove learning bounds based on the Rademacher complexity of the hypothesis set and the discrepancy of distributions both for a drifting PAC scenario and a tracking scenario. Our bounds are always tighter and in some cases substantially improve upon previous ones based on the L1 distance. We also present a generalization of the standard on-line to batch conversion to the drifting scenario in terms of the discrepancy and arbitrary convex combinations of hypotheses. We introduce a new algorithm exploiting these learning guarantees, which we show can be formulated as a simple QP. Finally, we report the results of preliminary experiments demonstrating the benefits of this analysis and algorithm.
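For concreteness, the discrepancy referred to above is, in the standard definition used in this line of work (Mansour, Mohri, and Rostamizadeh), the maximal difference in expected loss between two distributions over pairs of hypotheses. The second display below is only an illustrative sketch of the shape such drifting bounds take, not the paper's exact statement:

```latex
\[
\mathrm{disc}(P, Q) \;=\; \max_{h,\, h' \in H}
\Bigl|\, \mathbb{E}_{x \sim P}\bigl[L\bigl(h(x), h'(x)\bigr)\bigr]
       - \mathbb{E}_{x \sim Q}\bigl[L\bigl(h(x), h'(x)\bigr)\bigr] \Bigr|.
\]
% Illustrative bound shape: with one sample x_t drawn from each
% distribution P_1, ..., P_T, a drifting guarantee for the loss under
% the final distribution P_T typically combines an empirical average,
% a Rademacher complexity term, and an average discrepancy term:
\[
R_{P_T}(h) \;\lesssim\;
\frac{1}{T}\sum_{t=1}^{T} L\bigl(h(x_t), y_t\bigr)
\;+\; 2\,\mathfrak{R}_T(H)
\;+\; \frac{1}{T}\sum_{t=1}^{T} \mathrm{disc}(P_t, P_T)
\;+\; O\!\Bigl(\sqrt{\tfrac{\log(1/\delta)}{T}}\Bigr).
\]
```

The abstract's claim that the algorithm can be formulated as a simple QP can be illustrated with a minimal weighted regularized least-squares sketch. Here the nonnegative weights q_t are assumed given; in the drifting setting they would be derived from discrepancy estimates, and the exponential decay used in the usage example is a hypothetical stand-in, not the paper's weighting scheme. With fixed weights the objective is a QP in w with a closed-form solution:

```python
import numpy as np

def weighted_ridge(X, y, q, lam=1e-2):
    """Discrepancy-weighted regularized least squares (a QP in w).

    X   : (T, d) design matrix, one row per time step.
    y   : (T,) targets.
    q   : (T,) nonnegative weights summing to 1; in the drifting setting
          these would downweight samples whose distributions have large
          estimated discrepancy to the target distribution (hypothetical
          weighting, not the paper's exact algorithm).
    lam : ridge regularization strength.
    """
    Q = np.diag(q)
    d = X.shape[1]
    # Normal equations of the weighted QP:
    #   min_w  sum_t q_t * (w . x_t - y_t)^2 + lam * ||w||^2
    A = X.T @ Q @ X + lam * np.eye(d)
    b = X.T @ Q @ y
    return np.linalg.solve(A, b)

# Toy usage: recent samples (presumed closer to the target distribution)
# get larger weight via exponential decay -- an illustrative choice only.
rng = np.random.default_rng(0)
T, d = 200, 5
X = rng.normal(size=(T, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=T)
q = np.exp(np.linspace(-3.0, 0.0, T))
q /= q.sum()
w_hat = weighted_ridge(X, y, q)
```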