In recent years, there has been significant progress in developing pre-trained language models for NLP. However, these models often struggle when fine-tuned on small datasets. To address this issue, researchers have proposed various adaptation approaches, of which prompt-based tuning is arguably the most common, especially for larger models. Previous research shows that adding contrastive learning to prompt-based fine-tuning is effective: it helps the model generate embeddings that are more distinguishable between classes, and it can also be more sample-efficient because the model learns from positive and negative examples simultaneously. One of the most important components of contrastive learning is data augmentation, but unlike computer vision, ...
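As a concrete illustration of the objective described above, here is a minimal sketch of a supervised contrastive term that could be combined with the usual prompt-based fine-tuning loss. This is not any cited paper's implementation: the SupCon-style loss form, the use of [MASK]-position hidden states as example embeddings, the temperature, and the loss weighting are all assumptions for illustration.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """SupCon-style loss: pull same-class embeddings together and push
    different-class embeddings apart. `embeddings` has shape (batch, dim)."""
    z = F.normalize(embeddings, dim=1)            # unit-norm embeddings
    sim = z @ z.t() / temperature                 # pairwise similarities
    n = z.size(0)
    # Exclude self-similarity on the diagonal.
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other in-batch examples that share the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)  # avoid division by zero
    # Zero out non-positive entries before summing (also removes -inf diagonal).
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return loss.mean()

if __name__ == "__main__":
    embs = torch.randn(8, 128)     # e.g., [MASK]-position hidden states (assumed)
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
    # Hypothetical combination with the prompt-based MLM loss:
    # total_loss = mlm_loss + 0.5 * supervised_contrastive_loss(mask_embs, labels)
    print(supervised_contrastive_loss(embs, labels).item())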
Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform...
Few-shot abstractive summarization has become a challenging task in natural language generation. To ...
In this paper, we explore how to utilize pre-trained language models to perform few-shot text classif...
The impressive performance of GPT-3 using natural language prompts and in-context learning has inspi...
Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cl...
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, espec...
In recent years, language models (LMs) have made remarkable progress in advancing the field of natu...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Prompt learning has recently become an effective linguistic tool to motivate the PLMs' knowledge on few-...
We present a new method, LiST (short for Lite Prompted Self-Training), for parameter-efficient fine-t...
Through in-context learning (ICL), large-scale language models are effective few-shot learners witho...
Large language models (LLMs) have shown promising performance on various NLP tasks via task promptin...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...