Prompt tuning (PT) is an effective approach for adapting pre-trained language models to downstream tasks. However, without a good initialization, prompt tuning performs poorly under few-shot settings, so pre-trained prompt tuning (PPT) was proposed to initialize prompts by leveraging pre-training data. We propose MetaPT (Meta-learned Prompt Tuning) to further improve PPT's initialization by considering the latent structure within the pre-training data. Specifically, we introduce this structure by first clustering the pre-training data into different auxiliary tasks with unsupervised methods, and then using these tasks to pre-train prompts with a meta-learning algorithm. Such a process helps prompts learn a better initialization by discovering commonalities among these auxiliary tasks.
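The abstract above outlines a two-stage recipe: cluster the pre-training data into auxiliary tasks, then meta-learn a prompt initialization across them. As a rough illustration only, the sketch below shows one way such a recipe could look; it is not MetaPT's implementation. K-means stands in for the unspecified unsupervised clustering, first-order MAML for the unspecified meta-learning algorithm, and plm_loss is a hypothetical toy placeholder for the frozen PLM's loss with the soft prompt prepended.

```python
import torch
from sklearn.cluster import KMeans

def plm_loss(prompt, batch):
    # Hypothetical stand-in: in practice this would run the frozen PLM with
    # `prompt` prepended to the input embeddings; here it is a toy quadratic.
    x = torch.stack(batch)                                      # (B, d)
    return ((x.mean(dim=0) - prompt.mean(dim=0)) ** 2).mean()

def cluster_tasks(examples, n_tasks=4):
    # Stage 1: group pre-training examples into auxiliary tasks (k-means here,
    # standing in for "unsupervised methods").
    labels = KMeans(n_clusters=n_tasks, n_init=10).fit_predict(
        torch.stack(examples).numpy())
    return [[x for x, l in zip(examples, labels) if l == t] for t in range(n_tasks)]

def meta_train_prompt(tasks, prompt_len=20, dim=32,
                      inner_lr=1e-2, meta_lr=1e-3, steps=200):
    # Stage 2: meta-learn a soft-prompt initialization across auxiliary tasks
    # (first-order MAML, standing in for "a meta-learning algorithm").
    prompt = torch.randn(prompt_len, dim, requires_grad=True)
    meta_opt = torch.optim.Adam([prompt], lr=meta_lr)
    for _ in range(steps):
        meta_opt.zero_grad()
        for task in tasks:
            support, query = task[: len(task) // 2], task[len(task) // 2:]
            # Inner loop: one adaptation step on the task's support set.
            grad = torch.autograd.grad(plm_loss(prompt, support), prompt)[0]
            adapted = prompt - inner_lr * grad                  # first-order update
            # Outer loop: the adapted prompt's query loss updates `prompt`.
            plm_loss(adapted, query).backward()
        meta_opt.step()
    return prompt.detach()          # initialization for downstream prompt tuning

examples = [torch.randn(32) for _ in range(256)]                # toy "pre-training data"
init_prompt = meta_train_prompt(cluster_tasks(examples))
```

In a real setting the returned prompt would seed few-shot prompt tuning on each downstream task, the idea being that meta-learning across the discovered auxiliary tasks yields an initialization that adapts faster than one pre-trained on the undifferentiated corpus.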
Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tu...
This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning, i.e., the better the t...
Few-shot learning is a challenging problem where the goal is to achieve generalization f...
We propose structured prompt tuning, a simple and effective method to improve prompt tuning. Instead...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Prompt Tuning (PT) has been largely successful as a parameter-efficient way of conditioning large-sc...
Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural ...
Recent studies show that prompt tuning can better leverage the power of large language models than f...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
Building models of natural language processing (NLP) is challenging in low-resource scenarios where ...
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-...
Intelligent agents should have the ability to leverage knowledge from previously learned tasks in or...
Prompt-Tuning is a new paradigm for finetuning pre-trained language models in a parameter-efficient ...
A recent family of techniques, dubbed as lightweight fine-tuning methods, facilitates parameter-effi...
When experience is scarce, models may have insufficient information to adapt to a new task. In this ...