Recent research has shown that large language models pretrained with unsupervised objectives can achieve significant performance improvements on many downstream tasks. Typically, when adapting these language models to a downstream task such as classification or regression, we employ a fine-tuning paradigm in which the sentence representation from the language model is fed into a task-specific head and the model is fine-tuned end-to-end. However, with the emergence of models like GPT-3, prompt-based fine-tuning has proven to be a successful approach for few-shot tasks. Inspired by this work, we study discrete prompt techniques in practice. Two issues arise with the standard prompt approach. First, it can overfit on...
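To make the contrast between the two adaptation paradigms concrete, the sketch below shows how a discrete (cloze-style) prompt turns classification into masked-word prediction, so the pretrained MLM head is reused rather than a randomly initialized task head. This is a minimal illustration assuming a BERT-style model loaded via HuggingFace transformers; the template "It was [MASK]." and the great/terrible verbalizer are assumptions made for illustration, not choices taken from any of the papers collected here.

```python
# Minimal sketch of discrete-prompt classification with a masked LM,
# contrasted with the standard head-based setup described above.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Discrete (cloze-style) prompt: wrap the input in a hand-written template.
text = "a gripping, beautifully shot film"
prompt = f"{text} It was {tokenizer.mask_token}."

# Verbalizer: one label word per class (an assumed mapping, for illustration).
label_words = ["great", "terrible"]            # -> positive, negative
label_ids = tokenizer.convert_tokens_to_ids(label_words)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # [1, seq_len, vocab_size]

# Read the prediction off the [MASK] position, restricted to the label words.
# Prompt-based fine-tuning would backpropagate a cross-entropy loss over these
# scores instead of training a new classification head from scratch.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
label_logits = logits[0, mask_pos].squeeze(0)[label_ids]
print(label_words[label_logits.argmax().item()])   # e.g. "great"
```

Because no new parameters are introduced at the output, this readout is often a better fit for few-shot settings than a freshly initialized task head, which is one motivation for the prompt-based paradigm discussed above.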
We explore the idea of compressing the prompts used to condition language models, and show that comp...
We propose structured prompt tuning, a simple and effective method to improve prompt tuning. Instead...
Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tu...
This paper explores the effectiveness of prompt programming in the fine-tuning process of a Hungaria...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
Large-scale pre-trained language models have contributed significantly to natural language processin...
We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstre...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Prompt Tuning (PT) has been largely successful as a parameter-efficient way of conditioning large-sc...
Masked language models conventionally use a masking rate of 15% due to the belief that more masking ...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
Recent studies show that prompt tuning can better leverage the power of large language models than f...
We investigate the efficacy of visual prompting to adapt large-scale models in vision. Following the...
Several works have shown that fine-tuning is a viable approach for debiasing contextualized wor...
Large pre-trained vision-language models like CLIP have shown great potential in learning representa...