We propose a distributionally robust learning (DRL) method for unsupervised domain adaptation (UDA) that scales to modern computer vision benchmarks. DRL can be naturally formulated as a competitive two-player game between a predictor and an adversary that is allowed to corrupt the labels, subject to certain constraints; under the standard log loss, the game reduces to incorporating a density ratio between the source and target domains. This formulation motivates the use of two jointly trained neural networks: a discriminative network between the source and target domains for density-ratio estimation, in addition to the standard classification network. The use of a density ratio in DRL prevents the model from being overconfident on target data.
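A minimal sketch of the two-network setup described above, assuming the density ratio enters as a per-sample weight on the source-domain log loss (one common instantiation; the exact robust objective may incorporate the ratio differently). The network names, layer sizes, PyTorch framework, and the batch-level weight normalization are illustrative assumptions, not the authors' implementation.

# Sketch: jointly training a domain discriminator (for density-ratio estimation)
# and a classifier whose source log loss is reweighted by the estimated ratio.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    """Standard classification network trained on labeled source data."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_classes))
    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):
    """Discriminates source vs. target inputs; its output yields a density ratio."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)  # logit that x comes from the target domain

def density_ratio(disc_logit):
    # If d(x) = sigmoid(logit) estimates P(target | x), then under equal domain
    # priors p_T(x) / p_S(x) = d(x) / (1 - d(x)) = exp(logit).
    return torch.exp(disc_logit).clamp(max=10.0)  # clamp for numerical stability

def train_step(clf, disc, opt_clf, opt_disc, xs, ys, xt):
    # 1) Update the domain discriminator: source labeled 0, target labeled 1.
    opt_disc.zero_grad()
    logit_s, logit_t = disc(xs), disc(xt)
    d_loss = (F.binary_cross_entropy_with_logits(logit_s, torch.zeros_like(logit_s))
              + F.binary_cross_entropy_with_logits(logit_t, torch.ones_like(logit_t)))
    d_loss.backward()
    opt_disc.step()

    # 2) Update the classifier with ratio-weighted log loss on source labels.
    opt_clf.zero_grad()
    with torch.no_grad():
        w = density_ratio(disc(xs))  # estimated p_T(x) / p_S(x) per source sample
        w = w / w.mean()             # batch-level normalization (illustrative choice)
    ce = F.cross_entropy(clf(xs), ys, reduction="none")
    c_loss = (w * ce).mean()
    c_loss.backward()
    opt_clf.step()
    return d_loss.item(), c_loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    clf, disc = Classifier(in_dim=16, num_classes=3), DomainDiscriminator(in_dim=16)
    opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
    opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
    xs, ys = torch.randn(32, 16), torch.randint(0, 3, (32,))
    xt = torch.randn(32, 16) + 0.5  # shifted "target" batch with no labels
    for _ in range(5):
        print(train_step(clf, disc, opt_clf, opt_disc, xs, ys, xt))

Because the weight grows where the discriminator judges a source sample to be target-like (and shrinks where it does not), the classifier's confidence is tempered on regions poorly covered by the source distribution, which is the intuition behind the density ratio preventing overconfidence on target data.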