In this paper we address multi-target domain adaptation (MTDA), where, given one labeled source dataset and multiple unlabeled target datasets that differ in data distributions, the task is to learn a robust predictor for all the target domains. We identify two key aspects that can help alleviate multiple domain shifts in MTDA: feature aggregation and curriculum learning. To this end, we propose Curriculum Graph Co-Teaching (CGCT), which uses a dual classifier head, one of them a graph convolutional network (GCN) that aggregates features from similar samples across the domains. To prevent the classifiers from over-fitting on their own noisy pseudo-labels, we develop a co-teaching strategy with the dual classifier head that is a...
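The core of the GCN head described above is feature aggregation: each sample's representation is averaged with those of similar samples drawn from different domains. A minimal NumPy sketch of that one aggregation step is shown below; the cosine-similarity threshold and all function names are illustrative assumptions, not the paper's actual learned edge network or implementation.

```python
import numpy as np

def gcn_aggregate(features, threshold=0.5):
    """One simplified graph-convolution step: average each sample's
    features with those of similar samples (a hypothetical stand-in
    for the learned similarity graph used by the GCN head in CGCT)."""
    # Cosine similarity between all pairs of samples.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.clip(norms, 1e-8, None)
    similarity = normalized @ normalized.T
    # Adjacency: link sufficiently similar samples; keep self-loops.
    adjacency = (similarity > threshold).astype(float)
    np.fill_diagonal(adjacency, 1.0)
    # Row-normalize so each output row is a mean over its neighbors.
    degree = adjacency.sum(axis=1, keepdims=True)
    return (adjacency / degree) @ features

# Two tight clusters (e.g. same class, different domains):
# aggregation pulls each sample toward its cluster mean.
x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
z = gcn_aggregate(x)
```

After aggregation, the two samples in each cluster collapse onto their shared neighborhood mean, which is the effect the GCN head exploits to propagate information between source and target samples of the same class.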
Unsupervised domain adaptation has attracted considerable attention as it facilitates the unlabeled targe...
Deep learning architectures can achieve state-of-the-art results in several computer vision tasks. H...
Most machine learning algorithms assume that training and test data are sampled from the same distri...
For unsupervised domain adaptation, the process of learning domain-invariant representations could b...
Graph convolutional networks (GCNs) have achieved impressive success in many graph related analytics...
Traditional machine learning algorithms assume that the training and test data have the same distrib...
One of the main limitations of artificial intelligence today is its inability to adapt to unforeseen...
This paper presents a new perspective to formulate unsupervised domain adaptation as a multi-task le...
A common assumption in semi-supervised learning is that the class label function has a slow variatio...
The advent of deep convolutional networks has powered a new wave of progress in visual recognition. ...
Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples ar...
In many domain adaptation formulations, one is assumed to have a large amount of unlabeled data from the ...
Curriculum Learning (CL) mimics the cognitive process of humans and favors a learning algorithm to fo...
We propose a multiple source domain adaptation method, referred to as Domain Adaptation Machine (DAM...
Multi-Source Domain Adaptation (MSDA) focuses on transferring the knowledge from multiple source dom...