Prefix-tuning is a powerful lightweight technique for adapting a large pre-trained language model to a downstream application. However, it learns a single dataset-level prompt that is applied unchanged to every example. We extend this idea and propose a dynamic method, Control Prefixes, which allows for the inclusion of input-dependent conditional information, combining the benefits of prompt tuning and controlled generation. The method incorporates attribute-level learnable representations into different layers of a pre-trained transformer, allowing the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation ...
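As a minimal sketch of the idea, assuming a PyTorch-style frozen decoder whose attention layers accept extra past key/value tensors; the module name ControlPrefix, all dimension defaults, and the attribute labels below are illustrative assumptions, not the paper's released code:

```python
import torch
import torch.nn as nn


class ControlPrefix(nn.Module):
    """Attribute-level prefixes: one learnable key/value prefix per
    attribute value, per transformer layer, prepended to attention."""

    def __init__(self, attribute_values, n_layers=12, n_heads=12,
                 head_dim=64, prefix_len=5):
        super().__init__()
        self.attr_to_idx = {a: i for i, a in enumerate(attribute_values)}
        # Shape: (num_attributes, n_layers, 2 [key/value], n_heads,
        #         prefix_len, head_dim)
        self.prefixes = nn.Parameter(
            0.02 * torch.randn(len(attribute_values), n_layers, 2,
                               n_heads, prefix_len, head_dim))

    def forward(self, attribute):
        """Return per-layer (key, value) tensors for one attribute label,
        in the usual past_key_values layout (batch, heads, seq, head_dim)."""
        p = self.prefixes[self.attr_to_idx[attribute]]
        return tuple((p[layer, 0].unsqueeze(0), p[layer, 1].unsqueeze(0))
                     for layer in range(p.shape[0]))


# Usage: the frozen LM attends over these extra key/value positions at
# every layer; only the prefix parameters receive gradients.
prefix_net = ControlPrefix(attribute_values=["formal", "informal"])
past_kv = prefix_net("formal")
print(past_kv[0][0].shape)  # torch.Size([1, 12, 5, 64])
```

In practice such an attribute prefix would typically be concatenated with a shared task-level prefix, so that general task knowledge and attribute-specific guidance are parameterized separately.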
Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural ...
Pretrained Transformer-based language models (LMs) display remarkable natural language generation ca...
The dominant approaches for controlling language models achieve prominence in controlling high-level...
Current efficient fine-tuning methods (e.g., adapters, prefix-tuning, etc.) have optimized condition...
Controllable text generation systems often leverage control codes to direct various properties of th...
Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tu...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
Providing pretrained language models with simple task descriptions in natural language enables them ...
Controlling neural network-based models for natural language generation (NLG) to realize desirable a...
Controllable text generation has taken a gigantic step forward in recent years. Yet existing methods are ...
Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks. H...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
To learn text understanding models with millions of parameters, one needs massive amounts of data. In...