Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts. Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits their few-shot learning performance on downstream tasks. It would be desirable if the models could acquire some prompting knowledge before adapting to specific NLP tasks. We present the Unified Prompt Tuning (UPT) framework, which achieves better few-shot text classification for BERT-style models by explicitly capturing prompting semantics from non-target NLP datasets. In UPT, a novel Prompt-Options-Verbalizer paradigm is proposed for joint prompt learning across different NLP tasks, forcing PLMs to cap...
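A minimal sketch of how a Prompt-Options-Verbalizer example might be serialized for a BERT-style masked language model; the template wording, option words, and verbalizer mapping below are illustrative assumptions, not the exact ones used in the UPT paper:

```python
def build_pov_input(text, prompt, options, mask_token="[MASK]"):
    """Serialize one example into Prompt-Options-Verbalizer form:
    the input text, a task prompt, the enumerated candidate label
    words (options), and a masked slot for the PLM to fill."""
    options_str = ", ".join(options)
    return f"{text} {prompt} Options: {options_str}. Answer: {mask_token}."

# Verbalizer: maps the word predicted at the mask position back to a class label.
verbalizer = {"great": "positive", "terrible": "negative"}

example = build_pov_input(
    text="The movie was a delight from start to finish.",
    prompt="What is the sentiment of this review?",
    options=list(verbalizer.keys()),
)
print(example)
# The movie was a delight from start to finish. What is the sentiment of
# this review? Options: great, terrible. Answer: [MASK].
```

Because every task, target or not, is rendered in this same text-plus-options-plus-mask format, the PLM can be jointly tuned on heterogeneous non-target datasets, which is what lets it pick up task-invariant prompting knowledge before adaptation.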
Why can pre-trained language models (PLMs) learn universal representations and effectively adapt to ...
In recent years, there has been significant progress in developing pre-trained language models for N...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural languag...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
Large-scale pre-trained language models have contributed significantly to natural language processin...
Text classification is one of the most fundamental tasks in natural language processing (NLP). Recent...
In recent years, the natural language processing (NLP) community has seen remarkable progress in the...
We present a new method, LiST (short for Lite Prompted Self-Training), for parameter-efficient fine-t...
Recent few-shot learning methods, such as parameter-efficient fine-tuning (PEFT) and pattern exploit...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leverage...
Large language models have recently been shown to attain reasonable zero-shot ...
Prompt learning has recently become an effective linguistic tool for eliciting PLMs' knowledge in few-...