Text classification is one of the most fundamental tasks in natural language processing (NLP). Recent advances with pre-trained language models (PLMs) have shown remarkable success on this task. However, the strong results obtained by PLMs depend heavily on large amounts of task-specific labeled data, which may not be available in many application scenarios due to data access and privacy constraints. The recently proposed prompt-based fine-tuning paradigm improves the performance of PLMs for few-shot text classification with task-specific templates. Yet, it is unclear how prompting knowledge can be transferred across tasks for mutual reinforcement. We propose TransPrompt v2, a novel transferable prompting framework...
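To make the prompt-based fine-tuning idea above concrete, here is a minimal sketch of template-based classification with a masked language model. The template "It was [MASK]." and the verbalizer words "great"/"terrible" are illustrative assumptions, not the templates used by TransPrompt v2.

```python
# Minimal sketch of prompt-based classification with a masked LM.
# Template and verbalizer are hypothetical choices for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

def classify(text: str) -> str:
    # Wrap the input in a task-specific template with a single mask slot.
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the mask position and score only the verbalizer tokens there.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    verbalizer = {"positive": " great", "negative": " terrible"}
    scores = {}
    for label, word in verbalizer.items():
        token_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word)[0])
        scores[label] = logits[0, mask_pos, token_id].item()
    return max(scores, key=scores.get)

print(classify("A thoroughly enjoyable film."))  # expected: "positive"
```

Because the label decision reuses the pre-trained masked-LM head rather than a randomly initialized classifier, this setup needs far fewer labeled examples than standard fine-tuning, which is what makes it attractive in the few-shot regime the abstract describes.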
Structure prediction (SP) tasks are important in natural language understanding in the sense that th...
Providing pretrained language models with simple task descriptions in natural language enables them ...
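As a small illustration of solving a task from a natural-language description alone, the sketch below uses the Hugging Face zero-shot classification pipeline, which frames label descriptions as NLI hypotheses; the model choice and labels are illustrative assumptions.

```python
# Minimal sketch of zero-shot classification driven by natural-language
# label descriptions; no task-specific training data is used.
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = clf(
    "The new phone's battery lasts two full days.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0])  # expected: "technology"
```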
Multi-Task Learning (MTL) models have shown their robustness, effectiveness, and efficiency for tran...
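A common instantiation of MTL is hard parameter sharing: one encoder shared across tasks with a lightweight head per task. The sketch below is a minimal, illustrative version of that architecture; the encoder, sizes, and task names are assumptions, not a specific published model.

```python
# Minimal sketch of hard parameter sharing for multi-task learning:
# one shared encoder, one task-specific head per task.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size: int, hidden: int, task_classes: dict):
        super().__init__()
        # Parameters shared across all tasks.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        # One classification head per task.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_classes.items()}
        )

    def forward(self, token_ids: torch.Tensor, task: str) -> torch.Tensor:
        _, h = self.encoder(self.embed(token_ids))  # h: (1, batch, hidden)
        return self.heads[task](h[-1])

model = MultiTaskModel(vocab_size=30522, hidden=128,
                       task_classes={"sentiment": 2, "topic": 4})
logits = model(torch.randint(0, 30522, (8, 16)), task="sentiment")
print(logits.shape)  # torch.Size([8, 2])
```

The shared encoder is updated by gradients from every task, which is the source of the transfer benefits the abstract refers to.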
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tu...
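For reference, generating text from a pre-trained causal LM looks like the sketch below; in the fine-tuning setting the abstract describes, the same model would first be trained further on the target generation task. Model name and prompt are illustrative.

```python
# Minimal sketch of text generation with a pre-trained causal LM.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The review said the film was", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```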
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on...
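The pre-train-then-fine-tune recipe mentioned here reduces, in code, to loading a pre-trained body with a fresh task head and continuing training on the small downstream dataset. The sketch below assumes a BERT-style checkpoint and a toy two-example batch purely for illustration.

```python
# Minimal sketch of the pre-train-then-fine-tune recipe: a pre-trained
# encoder body plus a newly initialized classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # new head, pre-trained body
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(["a fine movie", "a dull movie"],
                  return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])

model.train()
loss = model(**batch, labels=labels).loss  # downstream task loss
loss.backward()
optimizer.step()
```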
Transfer learning from large language models (LLMs) has emerged as a powerful technique to enable kn...
Pre-trained multilingual language models show significant performance gains for zero-shot cross-ling...
It has been shown for English that discrete and soft prompting perform strongly in few-shot learning ...
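Unlike discrete prompts, which are sequences of real tokens, soft prompts are learnable continuous vectors prepended to the input embeddings while the PLM stays frozen. Below is a minimal sketch of that mechanism; the prompt length and model name are illustrative assumptions.

```python
# Minimal sketch of soft prompting (prompt tuning): learnable vectors
# are prepended to the token embeddings; the PLM itself is frozen.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
for p in model.parameters():
    p.requires_grad = False  # only the soft prompt is trained

n_prompt, hidden = 20, model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)

def forward_with_prompt(text: str) -> torch.Tensor:
    ids = tokenizer(text, return_tensors="pt").input_ids
    tok_emb = model.get_input_embeddings()(ids)           # (1, L, H)
    inputs_embeds = torch.cat(
        [soft_prompt.unsqueeze(0), tok_emb], dim=1)       # (1, n+L, H)
    mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
    return model(inputs_embeds=inputs_embeds, attention_mask=mask).logits

logits = forward_with_prompt(f"Great film. It was {tokenizer.mask_token}.")
print(logits.shape)  # (1, n_prompt + sequence_length, vocab_size)
```

Because only the small soft-prompt matrix receives gradients, the approach is parameter-efficient and well suited to the few-shot setting the abstract discusses.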
Supervised deep learning-based approaches have been applied to task-oriented dialog and have proven ...