In the theory of supervised learning, the identical-distribution assumption, i.e. that the training and test samples are drawn from the same probability distribution, plays a crucial role. Unfortunately, this essential assumption is often violated in the presence of selection bias. Under such conditions, standard supervised learning frameworks may suffer from significant bias. In this thesis, we address the problem of selection bias in supervised learning using the importance weighting method. We first introduce the supervised learning frameworks and discuss the importance of the identical-distribution assumption. We then study the importance weighting framework for generative and discriminative learning under a general selection scheme and investigate the potential of ...
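As a minimal sketch of the importance weighting idea the thesis builds on (not its actual implementation), the example below reweights each training example by an assumed density ratio w(x) = p_test(x)/p_train(x) when fitting a classifier; the data, distributions, and model are hypothetical placeholders.

    import numpy as np
    from scipy.stats import norm
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical biased training sample: inputs drawn from N(-1, 1),
    # while the test inputs are assumed to follow N(0, 1).
    X_train = rng.normal(loc=-1.0, scale=1.0, size=(500, 1))
    y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    # Importance weights w(x) = p_test(x) / p_train(x); known here by
    # construction, but they must be estimated in practice.
    w = norm.pdf(X_train[:, 0], loc=0.0) / norm.pdf(X_train[:, 0], loc=-1.0)

    # Weighted empirical risk minimization: each example contributes to
    # the loss in proportion to its weight, correcting the selection bias.
    clf = LogisticRegression().fit(X_train, y_train, sample_weight=w)

With exact weights, the weighted empirical risk is an unbiased estimate of the test risk, which is the consistency argument that importance weighting rests on.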
The problem of imbalanced datasets in supervised learning has emerged relatively recently, since the...
Covariate shift correction allows one to perform supervised learning even when the distribution of t...
We present a practical bias correction method for classifier and regression mo...
Covariate shift is a specific class of selection bias that arises when the mar...
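For context across these abstracts, covariate shift is usually formalized as the setting where the input marginals differ, p_train(x) ≠ p_test(x), while the labeling mechanism is shared, p_train(y|x) = p_test(y|x); the standard correction then reweights each training loss by the density ratio w(x) = p_test(x)/p_train(x), since E_test[loss] = E_train[w(x) · loss].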
One of the fundamental assumptions behind many supervised machine learning algorithms is that train...
This is the second episode of the Bayesian saga started with the tutorial on the Bayesian probabilit...
Covariate-shift generalization, a typical case in out-of-distribution (OOD) generalization, requires...
In Machine Learning (ML), the x-covariate and its y-label may have different joint probability distr...
In standard supervised learning algorithms, training and test data are assumed to follow the same pr...
In supervised machine learning, model performance can decrease significantly when the distribution g...
The goal of binary classification is to identify whether an input sample belongs to positive or nega...
A common assumption in supervised learning is that the training and test input points follow the sam...
Importance weighting is a class of domain adaptation techniques for machine learning, which aims to ...
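One common way to obtain the weights that importance weighting requires, sketched below under purely illustrative data and model choices, is to train a probabilistic classifier that discriminates test inputs from training inputs; by Bayes' rule, the density ratio equals the classifier's odds rescaled by the sample-size ratio.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Hypothetical unlabeled inputs from the two domains.
    X_train = rng.normal(loc=-1.0, scale=1.0, size=(500, 1))
    X_test = rng.normal(loc=0.0, scale=1.0, size=(300, 1))

    # Domain classifier: label 0 = training domain, 1 = test domain.
    X_all = np.vstack([X_train, X_test])
    d_all = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    domain_clf = LogisticRegression().fit(X_all, d_all)

    # Bayes' rule: p_test(x)/p_train(x)
    #   = [P(d=1|x) / P(d=0|x)] * (n_train / n_test).
    proba = domain_clf.predict_proba(X_train)
    w = (proba[:, 1] / proba[:, 0]) * (len(X_train) / len(X_test))

More specialized estimators (e.g. kernel mean matching or KLIEP) target the same ratio directly, but the classifier-based route is the simplest to sketch.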