Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural language understanding (NLU) tasks with task-specific fine-tuning. However, directly fine-tuning PLMs relies heavily on large amounts of labeled instances, which are expensive and time-consuming to obtain. Prompt-based tuning of PLMs has proven valuable for few-shot tasks. Existing work on prompt-based tuning for few-shot NLU mainly focuses on deriving proper label words with a verbalizer or on generating prompt templates to elicit semantics from PLMs. In addition, conventional data augmentation methods have also been verified as useful for few-shot tasks. However, there are currently few data augmentation methods designed for the prompt-based tunin...
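To make the template-plus-verbalizer idea above concrete, here is a minimal sketch of prompt-based classification: a cloze template wraps the input, a masked language model (not invoked here) would score vocabulary tokens at the [MASK] position, and a verbalizer maps those label-word scores to class scores. The template, label words, and logits below are illustrative assumptions, not any specific paper's setup.

```python
import math

# Hypothetical verbalizer: each class is represented by a few label words.
VERBALIZER = {"positive": ["great", "good"], "negative": ["terrible", "bad"]}

def build_prompt(text: str) -> str:
    """Wrap the input in a cloze-style template for the PLM to fill in."""
    return f"{text} It was [MASK]."

def classify(mask_logits: dict[str, float]) -> str:
    """Sum exponentiated label-word logits per class and return the argmax.

    mask_logits stands in for a real PLM's output distribution at [MASK];
    words missing from it contribute zero probability mass.
    """
    scores = {
        label: sum(math.exp(mask_logits.get(w, float("-inf"))) for w in words)
        for label, words in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

# Stand-in logits a masked LM might assign at the [MASK] position.
logits = {"great": 2.1, "good": 1.3, "terrible": -0.5, "bad": 0.2}
print(build_prompt("The movie was a delight."))
print(classify(logits))  # "positive"
```

In a real pipeline the `mask_logits` dictionary would come from a masked LM's softmax over its vocabulary at the [MASK] position; only the verbalizer's aggregation step is shown here.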
Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks. H...
Prompt-tuning has shown appealing performance in few-shot classification by virtue of its capability...
In recent years, there has been significant progress in developing pre-trained language models for N...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Large-scale pre-trained language models have contributed significantly to natural language processin...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
In recent years, the community of natural language processing (NLP) has seen amazing progress in the...
Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostl...
Pretraining deep neural networks to perform language modeling - that is, to reconstruct missing word...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
We present LiST (short for Lite Prompted Self-Training), a new method for parameter-efficient fine-t...
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leverage...
Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cl...
Prompt-based learning has shown its effectiveness in few-shot text classification. One important fac...
Prompt-based learning has shown significant success in few-shot classification. The mainstream appro...