Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tuning. However, fine-tuning PLMs is challenging in data-scarce situations. It is therefore non-trivial to develop a general and lightweight model that can adapt to various text generation tasks based on PLMs. To fulfill this purpose, recent prompt-based learning offers a potential solution. In this paper, we improve this technique and propose a novel prompt-based method (PTG) for text generation in a transferable setting. First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks. To consider both task- and instance-level informatio...
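To make the transfer step above concrete, here is a minimal sketch of cross-task prompt transfer in the spirit described: a pool of learned source-task prompts is mixed, via attention over a per-instance representation, into a target prompt. All module and tensor names (`PromptTransfer`, `instance_repr`, the pooling choices) are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of prompt transfer: attention over a pool of
# source-task prompts produces a per-instance target prompt.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptTransfer(nn.Module):
    def __init__(self, num_source_tasks: int, prompt_len: int, dim: int):
        super().__init__()
        # One learned soft prompt per source generation task.
        self.source_prompts = nn.Parameter(
            torch.randn(num_source_tasks, prompt_len, dim) * 0.02
        )
        # Query projection for the (assumed) instance encoding.
        self.query = nn.Linear(dim, dim)

    def forward(self, instance_repr: torch.Tensor) -> torch.Tensor:
        # instance_repr: (batch, dim), e.g. a mean-pooled input encoding.
        q = self.query(instance_repr)                 # (batch, dim)
        keys = self.source_prompts.mean(dim=1)        # (tasks, dim)
        weights = F.softmax(q @ keys.T, dim=-1)       # (batch, tasks)
        # Weighted mixture of source prompts -> per-instance target prompt.
        return torch.einsum("bt,tld->bld", weights, self.source_prompts)

transfer = PromptTransfer(num_source_tasks=6, prompt_len=20, dim=768)
prompt = transfer(torch.randn(4, 768))
print(prompt.shape)  # torch.Size([4, 20, 768])
```

In such a setup, the mixed prompt would be prepended to the frozen PLM's input embeddings, and only the prompt-side parameters would be trained on the target task.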
Neural language models have drastically changed the landscape of natural language processing (NLP). ...
Large-scale pre-trained language models have contributed significantly to natural language processin...
We introduce the problems of data-to-text generation and the current state of the art, i.e. pretrain...
Text Generation aims to produce plausible and readable text in a human language from input data. The...
Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks. H...
Text classification is one of the most fundamental tasks in natural language processing (NLP). Recent...
Prefix-tuning is a powerful lightweight technique for adapting a large pre-trained language model to...
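Since prefix-tuning recurs in several of these abstracts, a brief sketch of the core mechanism may help: the PLM's weights stay frozen, and each attention layer additionally attends over a short block of trainable key/value vectors prepended to the real keys and values. The shapes and names below are illustrative stand-ins, not any specific library's API.

```python
# Framework-agnostic sketch of the prefix-tuning idea on a single
# attention layer; only the key/value prefixes are trainable.
import torch
import torch.nn as nn

class PrefixedAttention(nn.Module):
    def __init__(self, dim: int, prefix_len: int):
        super().__init__()
        # Stand-ins for the frozen pretrained projections.
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        for p in self.parameters():  # freeze the "pretrained" weights
            p.requires_grad = False
        # The only trainable parameters: per-layer key/value prefixes.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        b = x.size(0)
        # Prepend the trainable prefix to keys and values.
        k = torch.cat([self.prefix_k.expand(b, -1, -1), k], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), v], dim=1)
        attn = torch.softmax(q @ k.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
        return attn @ v

layer = PrefixedAttention(dim=64, prefix_len=8)
out = layer(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```

The key design point is that the prefix influences every layer's attention, unlike embedding-level soft prompts, while touching none of the pretrained weights.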
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
The task of data-to-text generation amounts to describing structured data in fluent natural language...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
Providing pretrained language models with simple task descriptions in natural language enables them ...
The research field of Natural Language Generation offers practitioners a wide range of techniques fo...
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) ...
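For contrast with prefix-tuning above, adapting a frozen PLM with prompts can also happen purely at the embedding level: a handful of trainable vectors are prepended to the token embeddings and receive the only gradients. The stand-in encoder and all names below are assumptions for illustration.

```python
# Minimal sketch of soft prompt tuning against a frozen stand-in PLM.
import torch
import torch.nn as nn

class SoftPromptedModel(nn.Module):
    def __init__(self, vocab_size: int, dim: int, prompt_len: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True
        )
        for p in self.parameters():  # freeze the "pretrained" model
            p.requires_grad = False
        # Only the soft prompt is trainable.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(input_ids)                      # (batch, seq, dim)
        prompt = self.soft_prompt.expand(input_ids.size(0), -1, -1)
        return self.encoder(torch.cat([prompt, x], dim=1))

model = SoftPromptedModel(vocab_size=1000, dim=64, prompt_len=10)
out = model(torch.randint(0, 1000, (2, 12)))
print(out.shape)  # torch.Size([2, 22, 64])
```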
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG...