In real-world applications, deep learning models often run in non-stationary environments where the target data distribution continually shifts over time. Numerous domain adaptation (DA) methods, in both online and offline modes, have been proposed to improve cross-domain adaptation ability. However, these DA methods typically achieve good performance only after a long period of adaptation, and perform poorly on new domains before and during adaptation, in what we call the "Unfamiliar Period", especially when domain shifts happen suddenly and significantly. On the other hand, domain generalization (DG) methods have been proposed to improve model generalization ability on unadapted domains. However, existing DG works are ineffective for cont...
Despite being very powerful in standard learning settings, deep learning models can be extremely bri...
Domain adaptation (DA) strives to mitigate the domain gap between the source domain where a model is...
The distribution shifts between training and test data typically undermine the performance of deep l...
Deep neural networks suffer from significant performance deterioration when there exists distributio...
Machine learning models that can generalize to unseen domains are essential when applied in real-wor...
In this work, we investigate the unexplored intersection of domain generalization and data-free lear...
Limited transferability hinders the performance of deep learning models when applied to new applicat...
Machine learning systems generally assume that the training and testing distributions are the same. ...
Domain generalization aims to learn a model that can generalize well on the unseen test dataset, i.e...
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to...
Deep learning has achieved great success in the past few years. However, the performance of deep lea...
Due to domain shift, deep neural networks (DNNs) usually fail to generalize well on unknown test dat...
Domain adaptation allows machine learning models to perform well in a domain that is different from ...
In the context of supervised statistical learning, it is typically assumed that the training set com...