Importance weighting is a class of domain adaptation techniques for machine learning that aim to correct the discrepancy in distribution between the training and test datasets, often caused by sample selection bias. In doing so, it frequently uses unlabeled data from the test set. However, this approach has certain drawbacks: it requires retraining for each new test set and fails when the number of test samples is very small. Therefore, we seek to study the performance of importance weighting techniques when the unlabeled data comes from an underlying domain, instead of one specific test set. We propose an evaluation framework inspired by scenarios traditionally known for posing difficulties to importance weighting and apply it to two pop...
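The abstract above describes correcting the train/test distribution mismatch by reweighting training samples with the density ratio between the test and training distributions. A minimal sketch of this idea, assuming (purely for illustration) that both densities are known one-dimensional Gaussians — in practice the ratio must be estimated from unlabeled test-domain data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training samples drawn from a biased (shifted) Gaussian; the
# test domain is assumed to be a standard Gaussian.
x_train = rng.normal(loc=1.0, scale=1.0, size=1000)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights: ratio of test density to training density.
# Here both densities are known by assumption; real methods
# estimate this ratio from unlabeled test-domain samples.
w = gaussian_pdf(x_train, 0.0, 1.0) / gaussian_pdf(x_train, 1.0, 1.0)

# Example: estimate E_test[x^2] (which equals 1) from biased training data.
naive = np.mean(x_train ** 2)                    # biased: targets E_train[x^2] = 2
weighted = np.sum(w * x_train ** 2) / np.sum(w)  # self-normalized importance-weighted estimate
```

The same weights `w` can multiply per-sample losses during training, which is how importance weighting plugs into a supervised learner.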
Despite the prominent use of complex survey data and the growing popularity of machine learning meth...
Machine Learning is a branch of artificial intelligence focused on building applications that learn ...
In the theory of supervised learning, the identical assumption, i.e. the training and test samples a...
Importance sampling is often used in machine learning when training and testing data come from diffe...
Unsupervised Domain Adaptation (UDA) has attracted a lot of attention the past...
This paper presents a theoretical analysis of sample selection bias correction. The sampl...
We present a practical bias correction method for classifier and regression mo...
Importance weighting is a generalization of various statistical bias correction techniques. While o...
Covariate shift is a specific class of selection bias that arises when the mar...
Domain Adaptation (DA) methods are usually carried out by means of simply reducing the marginal dist...
One of the fundamental assumptions behind many supervised machine learning algorithms is that train...
Cross-validation under sample selection bias can, in principle, be done by importance-weighting the ...
Importance-weighting is a popular and well-researched technique for dealing with sample sel...
Importance weighted active learning (IWAL) introduces a weighting scheme to measure the importance ...
This paper reviews the appropriateness for application to large data sets of standard machine learni...