When the test distribution differs from the training distribution, machine learning models can perform poorly and wrongly overestimate their performance. In this work, we aim to better estimate the model’s performance under distribution shift, without supervision. To do so, we use a set of domain-invariant predictors as a proxy for the unknown, true target labels, where the error of this estimation is bounded by the target risk of the proxy model. Therefore, we study the generalization of domain-invariant representations and show that the complexity of the latent representation has a significant influence on the target risk. Empirically, our estimation approach can self-tune to find the optimal model complexity and the resulting models achi...
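To make the idea above concrete, the following is a minimal sketch (not the paper's implementation) of estimating a model's error on unlabeled target data by using the predictions of a set of proxy models as pseudo-labels. The function and variable names, the scikit-learn-style `predict` interface, and the data arrays in the usage comment are all assumptions for illustration; the quality of the estimate hinges on the proxies' own (unknown) target risk, as stated above.

```python
# Minimal sketch: unsupervised target-error estimation via proxy pseudo-labels.
# All names (model, proxy_models, X_target, ...) are hypothetical placeholders.
import numpy as np

def estimate_target_error(model, proxy_models, X_target):
    """Estimate the target-domain error of `model` without target labels.

    Each proxy model is assumed to be (approximately) domain-invariant, and its
    predictions on the unlabeled target data are treated as a stand-in for the
    unknown true labels. The gap between this estimate and the true target error
    is controlled by the proxies' own target risk.
    """
    predictions = model.predict(X_target)  # predictions being evaluated
    # Disagreement rate between `model` and each proxy's pseudo-labels,
    # averaged over the proxy set.
    disagreements = [
        np.mean(predictions != proxy.predict(X_target)) for proxy in proxy_models
    ]
    return float(np.mean(disagreements))

# Hypothetical usage: fit all models on labeled source data, then estimate the
# error of `model` on the target set from disagreement alone.
#   model.fit(X_source, y_source)
#   for proxy in proxy_models:
#       proxy.fit(X_source, y_source)   # e.g. trained with an invariance penalty
#   estimated_error = estimate_target_error(model, proxy_models, X_target)
```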
Generalizing models for new unknown datasets is a common problem in machine learning. Algorithms tha...
The phenomenon of data distribution evolving over time has been observed in a range of applications,...
Domain shifts in the training data are common in practical applications of machine learning; they oc...
Many domain adaptation methods are based on learning a projection or a transformation of the source ...
Domain adaptation (DA) arises as an important problem in statistical machine learning when the sourc...
Discriminative learning methods for classification perform well when training and test data are draw...
This paper studies model transferability when human decision subjects respond to a deployed machine ...
Machine learning algorithms typically assume that training and test examples are drawn from the same...
Adversarial attacks cause machine learning models to produce wrong predictions by minimally perturbi...
The supervised learning paradigm assumes in general that both training and test data are s...
Machine learning models are typically deployed in a test setting that differs from the training sett...
Domain adaptation addresses learning tasks where training is performed on data from one domain where...
The Domain Adaptation problem in machine learning occurs when the distribution generating the test d...
Learning models that are robust to distribution shifts is a key concern in the context of their real...
Unsupervised Domain Adaptation (UDA) has attracted a lot of attention the past...