Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining the original input with a predetermined template. This approach has proven especially effective in few-shot learning scenarios, where the model is trained on only a small amount of data. Despite these successes, the limited templates and text available in few-shot prompt-based learning leave significant room for performance improvement. Moreover, existing methods sometimes resort to model ensembles, which, while effective, can hamper efficiency due to increased computational demands. To address these issues, we introduce MixPro, an augmentation method designed to augment both the vanilla input te...
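As a minimal sketch of the cloze reformulation described above, the snippet below combines an original input with a predetermined template containing a mask slot and lets a masked language model score label words at that slot. The model name, template, and verbalizer are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: prompt-based cloze classification with a masked language model.
# Template and verbalizer are hypothetical placeholders, not from the paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

template = "{text} It was <mask>."                           # predetermined template (assumed)
verbalizer = {"great": "positive", "terrible": "negative"}   # label words -> class labels (assumed)

def classify(text: str) -> str:
    # Combine the original input with the template to form a cloze query.
    prompt = template.format(text=text)
    # Score only the verbalizer's label words at the masked position.
    preds = fill_mask(prompt, targets=list(verbalizer))
    best = max(preds, key=lambda p: p["score"])
    return verbalizer[best["token_str"].strip()]

print(classify("The movie was a delight from start to finish."))
```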
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We fo...
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language mode...
In recent years, there has been significant progress in developing pre-trained language models for N...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural languag...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
The impressive performance of GPT-3 using natural language prompts and in-context learning has inspi...
Few-shot abstractive summarization has become a challenging task in natural language generation. To ...
Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostl...
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) ...
Prompt learning has recently become an effective linguistic tool for drawing out PLMs' knowledge on few-...
Large-scale pre-trained language models have contributed significantly to natural language processin...
Prompt-based models have gathered a lot of attention from researchers due to their remarkable advanc...
Large Language Models (LLMs) possess the capability to engage in In-context Learning (ICL) by leveragin...
Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks. H...