One of the fundamental assumptions behind many supervised machine learning algorithms is that training and test data follow the same probability distribution. However, this important assumption is often violated in practice, for example, because of an unavoidable sample selection bias or non-stationarity of the environment. When the assumption is violated, standard machine learning methods suffer from a significant estimation bias. In this article, we consider two scenarios of such distribution change: covariate shift, where input distributions differ, and class-balance change, where class-prior probabilities vary in classification. We review semi-supervised adaptation techniques based on importance weighting.
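As a rough illustration of importance weighting under covariate shift (a sketch of my own, not code from the article), the snippet below fits a classifier with each training example weighted by the density ratio p_test(x) / p_train(x); the toy distributions are chosen so the ratio is known in closed form, and all variable names are hypothetical.

# Minimal sketch: importance-weighted training under covariate shift.
# Assumes the density ratio w(x) = p_test(x) / p_train(x) is available;
# here it is computed exactly because the toy Gaussian densities are known.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training inputs follow N(0, 1); test inputs are assumed to follow
# N(1, 0.5^2) (covariate shift), while p(y | x) stays the same.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 1))
y_train = (X_train[:, 0] + 0.3 * rng.normal(size=500) > 0.5).astype(int)

# Exact importance weights for these Gaussian input densities.
w = norm.pdf(X_train[:, 0], loc=1.0, scale=0.5) / norm.pdf(X_train[:, 0], loc=0.0, scale=1.0)

# Each training example contributes to the loss in proportion to how
# likely its input is under the test distribution.
clf = LogisticRegression().fit(X_train, y_train, sample_weight=w)

Standard empirical risk minimization would weight every example equally; passing the density-ratio values as per-example weights implements the importance-weighted loss that corrects the estimation bias described above.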
The aim of domain adaptation algorithms is to establish a learner, trained on labeled data from a so...
Assume we are given sets of observations of training and test data, where (unlike in the classical s...
This paper promotes a new task for supervised machine learning research: quantification—the pursuit ...
In standard supervised learning algorithms, training and test data are assumed to follow the same pr...
Supervised learning in machine learning concerns inferring an underlying relation between covariate ...
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Comp...
The goal of binary classification is to identify whether an input sample belongs to positive or nega...
In the theory of supervised learning, the identical-distribution assumption, i.e., that the training and test samples a...
Covariate shift is a specific class of selection bias that arises when the mar...
In supervised machine learning, model performance can decrease significantly when the distribution g...
Importance weighting is a class of domain adaptation techniques for machine learning, which aims to ...
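The truncated abstract above names importance weighting but not how the weights are obtained; as a purely illustrative sketch (one common approach, not necessarily the one used in that work), the snippet below estimates w(x) ≈ p_test(x) / p_train(x) with a probabilistic classifier that discriminates training inputs from test inputs. All names are hypothetical.

# Sketch: discriminative density-ratio estimation for importance weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(X_train, X_test):
    # Label training inputs 0 and test inputs 1, then model p(test | x).
    X = np.vstack([X_train, X_test])
    d = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)

    # By Bayes' rule: p_test(x) / p_train(x)
    #   = [p(test | x) / p(train | x)] * (n_train / n_test).
    p_test = clf.predict_proba(X_train)[:, 1]
    odds = p_test / np.clip(1.0 - p_test, 1e-12, None)
    return odds * (len(X_train) / len(X_test))

The returned weights can then be passed as per-example weights when fitting the downstream model, as in the earlier sketch.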
A common use case of machine learning in real-world settings is to learn a model from historical dat...
We consider the problem of function estimation in the case where the data distribution may shift bet...
Covariate shift is a situation in supervised learning where training and test inputs follow differen...
Most machine learning methods assume that the input data distribution is the same in the training an...