Prompt tuning has become a new paradigm for model tuning, and it has demonstrated success in natural language pretraining and even vision pretraining. In this work, we explore the transfer of prompt tuning to multimodal pretraining, with a focus on generative multimodal pretrained models rather than contrastive ones. Specifically, we implement prompt tuning on the unified sequence-to-sequence pretrained model, which is adaptive to both understanding and generation tasks. Experimental results demonstrate that lightweight prompt tuning can achieve performance comparable to finetuning and surpass other lightweight tuning methods. Besides, in comparison with finetuned models, the prompt-tuned models demonstrate improved robustness against adversarial...
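The idea above can be sketched in a few lines: a small set of soft prompt vectors is prepended to the input embeddings, and only those prompt parameters are trained while the pretrained backbone stays frozen. The toy below is a hypothetical illustration, not the paper's implementation: the "frozen model" is a fixed linear head over mean-pooled embeddings, and the prompt is trained by finite-difference gradient descent on a single regression example; all names and dimensions are made up for the sketch.

```python
HIDDEN = 4        # embedding width (toy value)
PROMPT_LEN = 2    # number of soft-prompt vectors to learn

# Frozen "pretrained model": a fixed linear scoring of the mean-pooled sequence.
FROZEN_W = [0.5, -0.2, 0.1, 0.3]

def frozen_model(embeddings):
    # Mean-pool the sequence, then apply the frozen linear head.
    pooled = [sum(col) / len(embeddings) for col in zip(*embeddings)]
    return sum(w, )if False else sum(w * x for w, x in zip(FROZEN_W, pooled))

def loss(prompt, inputs, target):
    # Prepend the soft prompt to the (fixed) input embeddings.
    score = frozen_model(prompt + inputs)
    return (score - target) ** 2

# One example: fixed input embeddings and a target score.
inputs = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
target = 2.0

# Only the prompt is trained; the backbone (FROZEN_W) is never updated.
prompt = [[0.0] * HIDDEN for _ in range(PROMPT_LEN)]
lr, eps = 0.5, 1e-4
for _ in range(200):
    for i in range(PROMPT_LEN):
        for j in range(HIDDEN):
            # Central finite difference in place of backprop, for brevity.
            prompt[i][j] += eps
            hi = loss(prompt, inputs, target)
            prompt[i][j] -= 2 * eps
            lo = loss(prompt, inputs, target)
            prompt[i][j] += eps
            prompt[i][j] -= lr * (hi - lo) / (2 * eps)

print(round(loss(prompt, inputs, target), 6))
```

The point of the sketch is the parameter budget: only PROMPT_LEN × HIDDEN values are updated, which is why prompt tuning is described as lightweight relative to finetuning the full backbone.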
This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning, i.e., the better the t...
We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstre...
We propose structured prompt tuning, a simple and effective method to improve prompt tuning. Instead...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, espec...
In recent years, prompt tuning has sparked a research surge in adapting pre-trained models. Unlike t...
In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models t...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
The current modus operandi in adapting pre-trained models involves updating all the backbone paramet...
Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in grounding natural ...
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for perform...
Pretrained large language models (LLMs) are strong in-context learners that are able to perform few-...
Prompt tuning (PT) is an effective approach to adapting pre-trained language models to downstream ta...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
The advent of hyper-scale and general-purpose pre-trained models is shifting the paradigm of buildin...