Prompt-tuning has shown appealing performance in few-shot classification by virtue of its ability to effectively exploit pre-trained knowledge. This motivates us to examine the hypothesis that prompt-tuning is also a promising choice for long-tailed classification, since the tail classes are intuitively few-shot ones. We conduct empirical studies to test this hypothesis, and the results demonstrate that prompt-tuning makes pretrained language models at least good long-tailed learners. For intuition on why prompt-tuning achieves good performance in long-tailed classification, we carry out in-depth analyses by progressively bridging the gap between prompt-tuning and commonly used fine-tuning. The summary is that the ...
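As a concrete illustration of the cloze-style formulation these abstracts refer to, below is a minimal sketch of prompt-based classification with a masked language model. The template ("It was [MASK]."), the verbalizer words, and the bert-base-uncased checkpoint are illustrative assumptions, not the setup of any particular paper cited here; prompt-tuning would additionally train the model on this same cloze objective rather than score it zero-shot.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Minimal cloze-style prompt-classification sketch (illustrative only).
# A BERT-style masked LM scores candidate label words at a [MASK] slot
# inserted by a hand-written template.
model_name = "bert-base-uncased"  # assumption: any masked LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Verbalizer: map each class to a single label word (hypothetical choices).
verbalizer = {"positive": "great", "negative": "terrible"}
label_word_ids = {
    label: tokenizer.convert_tokens_to_ids(word)
    for label, word in verbalizer.items()
}

def classify(text: str) -> str:
    # Template wraps the input and appends a mask slot for the label word.
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Find the mask position and compare the label-word logits there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    scores = {label: logits[0, mask_pos, wid].item()
              for label, wid in label_word_ids.items()}
    return max(scores, key=scores.get)

print(classify("The movie was a delight from start to finish."))
```

Because classification is reduced to predicting words the model already saw during pretraining, the head requires no randomly initialized parameters, which is the property the few-shot (and, by hypothesis, tail-class) results above rely on.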
Pre-trained masked language models successfully perform few-shot learning by formulating downstream ...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
In recent years, there has been significant progress in developing pre-trained language models for N...
We present a new method, LiST (short for Lite Prompted Self-Training), for parameter-efficient fine-tuning ...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural language ...
Large-scale pre-trained language models have contributed significantly to natural language processin...
Pretrained large language models (LLMs) are strong in-context learners that are able to perform few-...
Recent prompt-based approaches allow pretrained language models to achieve strong performance on fe...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
Recent few-shot learning methods, such as parameter-efficient fine-tuning (PEFT) and pattern exploit...
Prompt tuning has become a new paradigm for model tuning, and it has demonstrated success in natural ...
Pretrained Language Models (LMs) have demonstrated the ability to perform numerical reasoning by extrapo...
Recent work has shown promising results from prompt tuning in stimulating pre-trained language model...