In the problem of domain generalization (DG), there are labeled training data sets from several related prediction problems, and the goal is to make accurate predictions on future unlabeled data sets that are not known to the learner. This problem arises in several applications where data distributions fluctuate because of environmental, technical, or other sources of variation. We introduce a formal framework for DG, and argue that it can be viewed as a kind of supervised learning problem by augmenting the original feature space with the marginal distribution of feature vectors. While our framework has several connections to conventional analysis of supervised learning algorithms, several unique aspects of DG require new methods of analysis...
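The augmentation idea above — treating each domain's marginal distribution P(X) as an extra input — can be made concrete by summarizing each domain with an empirical kernel mean embedding and appending that summary to every feature vector. The sketch below is illustrative only, not the paper's actual estimator; the RBF centers, bandwidth, and concatenation scheme are assumptions chosen for simplicity.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    # Map each point x to (k(x, c_1), ..., k(x, c_m)) with an RBF kernel
    # k(x, c) = exp(-gamma * ||x - c||^2).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def augment_with_domain_embedding(X, centers, gamma=1.0):
    # Empirical kernel mean embedding of the domain's marginal P(X):
    # average the RBF feature map over all points observed in this domain.
    mu = rbf_features(X, centers, gamma).mean(axis=0)
    # Append the same domain-level embedding to every feature vector,
    # yielding the augmented inputs (x, P_hat(X)) described in the abstract.
    return np.hstack([X, np.tile(mu, (X.shape[0], 1))])
```

A standard supervised learner trained on these augmented vectors can then condition its predictions on which domain a point came from, without ever seeing the test domains: at test time the embedding is recomputed from the unlabeled test set itself.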
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary ...
Discriminative learning methods for classification perform well when training and test data are draw...
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) dat...
Machine learning systems generally assume that the training and testing distributions are the same. ...
Transfer learning, or domain adaptation, is concerned with machine learning problems in which traini...
Domain generalization aims to apply knowledge gained from multiple labeled source domains to unseen ...
The phenomenon of data distribution evolving over time has been observed in a range of applications,...
Model generalization under distributional changes remains a significant challenge for machine learni...
Despite impressive success in many tasks, deep learning models are shown to rely on spurious feature...
In this work, we construct generalization bounds to understand existing learning algorithms and prop...
Domain adaptation is the supervised learning setting in which the training and test data ar...
Existing domain generalization aims to learn a generalizable model to perform well even on unseen do...
Many domain adaptation methods are based on learning a projection or a transformation of the source ...
Abstract. The supervised learning paradigm assumes in general that both training and test data are s...
Despite being very powerful in standard learning settings, deep learning models can be extremely bri...