Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios. Recent works have focused on automatically searching for discrete or continuous prompts or optimized verbalizers, yet studies of demonstrations remain limited. Concretely, demonstration examples are crucial to the final performance of prompt-tuning. In this paper, we propose a novel pluggable, extensible, and efficient approach named contrastive demonstration tuning, which is free of demonstration sampling. Furthermore, the proposed approach can be: (i) Plugged into any previous prompt-tuning approaches; (ii) Extended to widespread classification tasks with a large number of categories. Experiment...
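Below is a minimal sketch of the core idea, offered as an illustration rather than the paper's official implementation: it assumes the sampled demonstration is replaced by learnable per-class "virtual demonstration" embeddings appended to the prompted input, trained with an InfoNCE-style contrastive term between each example's [MASK] representation and its class's virtual demonstration. The names `VirtualDemo` and `demo_contrastive_loss` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VirtualDemo(nn.Module):
    """Learnable per-class demonstration spans (hypothetical sketch)."""

    def __init__(self, num_classes: int, demo_len: int, hidden_size: int):
        super().__init__()
        # One learnable demonstration span per class, so no real examples
        # need to be sampled and concatenated at training time.
        self.demos = nn.Parameter(
            torch.randn(num_classes, demo_len, hidden_size) * 0.02
        )

    def append_to(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: [batch, seq_len, hidden]; append every class's span.
        batch = input_embeds.size(0)
        flat = self.demos.reshape(1, -1, self.demos.size(-1)).expand(batch, -1, -1)
        return torch.cat([input_embeds, flat], dim=1)


def demo_contrastive_loss(mask_repr, labels, class_demo_repr, tau: float = 0.1):
    # Pull each example's [MASK] representation toward its own class's
    # virtual demonstration and away from the other classes' (InfoNCE form).
    mask_repr = F.normalize(mask_repr, dim=-1)               # [batch, hidden]
    class_demo_repr = F.normalize(class_demo_repr, dim=-1)   # [num_classes, hidden]
    logits = mask_repr @ class_demo_repr.t() / tau
    return F.cross_entropy(logits, labels)
```

Under these assumptions, the contrastive term would simply be added to the usual prompt-tuning (masked-LM) loss, which is what makes such a module pluggable into existing prompt-tuning methods.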
While GPTs with traditional fine-tuning fail to achieve strong results on natural language understan...
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for perform...
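For context, a generic sketch of soft prompt tuning (in the spirit of Lester et al.'s formulation, not this particular paper's code): a small set of learnable prompt vectors is prepended to the input embeddings while the PLM itself stays frozen.

```python
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    """Learnable prompt vectors prepended to the input embeddings."""

    def __init__(self, prompt_len: int, hidden_size: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: [batch, seq_len, hidden]
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)
```

Only the prompt parameters receive gradients; the frozen PLM consumes the concatenated sequence (e.g. via an `inputs_embeds`-style argument in common Transformer libraries).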
We explore the idea of compressing the prompts used to condition language models, and show that comp...
Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural ...
In recent years, there has been significant progress in developing pre-trained language models for N...
Prompt learning has recently become an effective linguistic tool for eliciting PLMs' knowledge on few-...
The impressive performance of GPT-3 using natural language prompts and in-context learning has inspi...
As the representation capability of Pre-trained Language Models (PLMs) improves, there is growing con...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
In the context of continual learning, prototypes, as representative class embeddings, offer advantages...
Pre-trained Language Models are widely used in many important real-world applications. However, rece...
Contrastive learning has become a new paradigm for unsupervised sentence embeddings. Previous studie...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
Likelihood, although useful as a training loss, is a poor search objective for guiding open-ended ge...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...