Models should be able to adapt to unseen data at test time to avoid the performance drop caused by the inevitable distribution shifts of real-world deployment. In this work, we tackle the practical yet challenging test-time adaptation (TTA) problem, in which a model adapts to the target domain without accessing the source data. We propose a simple recipe called data-efficient prompt tuning (DePT) with two key ingredients. First, DePT plugs visual prompts into the vision Transformer and tunes only these source-initialized prompts during adaptation. We find that such parameter-efficient fine-tuning can adapt the model representation to the target domain without overfitting to the noise in the learning objective. Second, DePT...
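The first ingredient described above, prepending learnable visual prompts to the token sequence while keeping the backbone frozen, can be sketched roughly as follows. This is a minimal illustration with toy dimensions; the token layout and the `insert_prompts` helper are assumptions for exposition, not DePT's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 patch tokens, 2 prompt tokens, embedding dim 8.
patches = rng.standard_normal((4, 8))   # activations from the frozen backbone
prompts = rng.standard_normal((2, 8))   # the only parameters tuned at test time

def insert_prompts(prompt_tokens, patch_tokens):
    # Prepend the learnable prompts to the patch-token sequence
    # before it enters the Transformer blocks.
    return np.concatenate([prompt_tokens, patch_tokens], axis=0)

tokens = insert_prompts(prompts, patches)
print(tokens.shape)  # (6, 8): 2 prompts followed by 4 patch tokens
```

During adaptation only `prompts` would receive gradient updates; every backbone weight stays frozen, which is what makes the tuning parameter-efficient.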
Test-time adaptation (TTA) is a technique aimed at enhancing the generalization performance of model...
Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural ...
Most existing methods for multi-source unsupervised domain adaptation (UDA) rely on a common feature...
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabel...
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and testin...
Experiencing domain shifts during test-time is nearly inevitable in practice and likely results in a...
Test-Time Adaptation aims to adapt the source-domain model to testing data at the inference stage with succe...
The current modus operandi in adapting pre-trained models involves updating all the backbone paramet...
Adapting trained classifiers using only online test data is important since it is difficult to acces...
Despite recent advancements in deep learning, deep neural networks continue to suffer from performan...
Deep Learning models have shown remarkable performance in a broad range of vision tasks. However, th...
The performance of a machine learning model degrades when it is applied to data from a similar but d...
Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual ...
Fully Test-Time Adaptation (TTA), which aims at adapting models to data drifts, has recently attract...
Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data...