Natural language modeling with limited training data is a challenging problem, and many algorithms make use of large-scale pretrained language models (PLMs) for it because of their strong generalization ability. Among these approaches, additive learning, which incorporates a task-specific adapter on top of a fixed large-scale PLM, has been widely used in the few-shot setting. However, the added adapter can still easily disregard the knowledge of the PLM, especially in few-shot natural language generation (NLG), since an entire sequence is usually generated by the newly trained adapter alone. Therefore, in this work, we develop a novel additive learning algorithm based on reinforcement learning (RL) that selectively outputs language tokens between the task-...
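The mechanism described in the truncated abstract above, an RL policy that at each decoding step chooses whether the next token comes from the frozen PLM or from the task-specific adapter, can be sketched as follows. This is a minimal, hypothetical PyTorch sketch: `SelectivePolicy`, `decode_step`, and the REINFORCE-style update are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of selective token generation between a frozen PLM
# and a task-specific adapter. All names, shapes, and the training signal
# are illustrative assumptions, not the paper's actual method.
import torch
import torch.nn as nn


class SelectivePolicy(nn.Module):
    """Binary policy: at each step, decide whether the next token is
    emitted by the frozen PLM head (0) or the adapted head (1)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, 2)  # logits over {PLM, adapter}

    def forward(self, hidden_state: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.gate(hidden_state))


def decode_step(plm_logits, adapter_logits, hidden_state, policy):
    """Sample a source (0 = PLM, 1 = adapter), then sample a token from the
    chosen distribution. The log-prob of the source choice is returned so a
    REINFORCE-style gradient can be applied with a sequence-level reward."""
    source_dist = policy(hidden_state)
    source = source_dist.sample()                       # (batch,)
    logits = torch.where(source.bool().unsqueeze(-1),   # (batch, 1) -> broadcast
                         adapter_logits, plm_logits)    # (batch, vocab)
    token = torch.distributions.Categorical(logits=logits).sample()
    return token, source_dist.log_prob(source)


if __name__ == "__main__":
    batch, hidden, vocab = 2, 8, 10
    policy = SelectivePolicy(hidden)
    h = torch.randn(batch, hidden)
    plm_logits = torch.randn(batch, vocab)
    adapter_logits = torch.randn(batch, vocab)
    token, logp = decode_step(plm_logits, adapter_logits, h, policy)
    # REINFORCE-style update sketch: loss = -(reward * logp.sum())
    print(token, logp)
```

Under this sketch, a sequence-level reward (e.g., a task metric on the generated text) would push the policy toward delegating generic tokens to the PLM and task-specific tokens to the adapter, which is one plausible reading of the abstract's goal.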
Natural language generation (NLG) is an important task with various applications like neural machine...
Why can pre-trained language models (PLMs) learn universal representations and effectively adapt to ...
Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models...
Controlling the generative model to adapt to a new domain with limited samples is a difficult challenge...
Recently, there has been an increasing interest in models that generate natural language explanation...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
When a natural language generation (NLG) component is implemented in a real-world task-oriented dial...
Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
In recent years, there has been significant progress in developing pre-trained language models for N...
Providing pretrained language models with simple task descriptions in natural language enables them ...
Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks. H...
We present a new method, LiST (short for Lite Prompted Self-Training), for parameter-efficient fine-t...
Few-shot Learning (FSL) aims to make predictions based on a limited number of samples. Structure...
Pre-trained masked language models successfully perform few-shot learning by formulating downstream ...