Prompt learning has recently become an effective linguistic tool for eliciting PLMs' knowledge in few-shot tasks. However, studies have shown that prompt learning still lacks robustness, since a suitable initialization of the continuous prompt and an expert-designed manual prompt are essential to the fine-tuning process. Moreover, humans also use their comparative ability to activate existing knowledge when distinguishing different examples. Motivated by this, we explore how contrastive samples can strengthen prompt learning. Specifically, we first propose ConsPrompt, a model combining a prompt encoding network, a contrastive sampling module, and a contrastive scoring module. Subsequently, two sampling strategies, similarity...
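Since the abstract is truncated, the exact objective is not recoverable from this text; the following is a minimal, hypothetical sketch of what a similarity-based sampling step and a contrastive scoring module of this kind could look like, assuming cosine similarity over prompt-encoded representations and an InfoNCE-style loss. The function names (`similarity_sample`, `contrastive_score`) and the temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def similarity_sample(anchor, candidates, k):
    # Hypothetical similarity-based sampler: keep the k candidate
    # embeddings whose cosine similarity to the anchor is highest.
    sims = F.cosine_similarity(anchor.unsqueeze(0), candidates, dim=-1)
    topk = sims.topk(min(k, candidates.size(0))).indices
    return candidates[topk]

def contrastive_score(anchor, positives, negatives, temperature=0.1):
    # InfoNCE-style scoring: pull prompt-encoded positives toward the
    # anchor and push the sampled negatives away.
    pos_sim = F.cosine_similarity(anchor.unsqueeze(0), positives, dim=-1) / temperature
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=-1) / temperature
    logits = torch.cat([pos_sim, neg_sim])
    log_prob = pos_sim - torch.logsumexp(logits, dim=0)
    return -log_prob.mean()

# Usage with random stand-in embeddings (768-dim, as in BERT-base):
anchor = torch.randn(768)
positives = torch.randn(4, 768)
candidates = torch.randn(32, 768)
negatives = similarity_sample(anchor, candidates, k=8)
loss = contrastive_score(anchor, positives, negatives)
```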
Large-scale pre-trained language models have contributed significantly to natural language processin...
Few-shot Learning (FSL) aims to make predictions based on a limited number of samples. Structure...
A primary trait of humans is the ability to learn rich representations and relationships between ent...
The impressive performance of GPT-3 using natural language prompts and in-context learning has inspi...
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, espec...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
In recent years, there has been significant progress in developing pre-trained language models for N...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural languag...
Few-shot learning (FSL) aims to recognize target classes by adapting the prior knowledge learned fro...
In this paper, we explore how to utilize pre-trained language models to perform few-shot text classif...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unsee...
Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cl...
We present a new method, LiST (short for Lite Prompted Self-Training), for parameter-efficient fine-t...
A two-stage training paradigm consisting of sequential pre-training and meta-training stages has bee...