Pretrained Transformer-based language models (LMs) display remarkable natural language generation capabilities. Given their immense potential, controlling the text generation of such LMs has attracted growing attention. While there are studies that seek to control high-level attributes (such as sentiment and topic) of generated text, there is still a lack of more precise control over its content at the word- and phrase-level. Here, we propose Content-Conditioner (CoCon) to control an LM's output text with a content input, at a fine-grained level. In our self-supervised approach, the CoCon block learns to help the LM complete a partially-observed text sequence by conditioning with content inputs that are withheld from the LM. Through experiments, we show th...
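A minimal sketch of the self-supervised objective this abstract describes: the LM observes only a prompt prefix, a conditioning block receives the withheld continuation as the content input, and the model is trained to reconstruct that continuation. This assumes a PyTorch setup; the modules (`embed`, `cocon_block`, `upper_lm`, `lm_head`) are illustrative placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 64

embed = nn.Embedding(VOCAB, DIM)                          # stand-in for lower LM layers
cocon_block = nn.MultiheadAttention(DIM, num_heads=4, batch_first=True)
upper_lm = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)  # stand-in for upper LM layers
lm_head = nn.Linear(DIM, VOCAB)

def cocon_self_supervised_loss(tokens, split_idx):
    """tokens: (batch, seq_len) token ids; split_idx: prompt/continuation boundary."""
    prompt, continuation = tokens[:, :split_idx], tokens[:, split_idx:]

    h_prompt = embed(prompt)            # LM representation of the observed prefix
    h_content = embed(continuation)     # content input withheld from the LM itself

    # The conditioning block attends from the prompt states to the content input.
    h_cond, _ = cocon_block(h_prompt, h_content, h_content)

    # Upper layers + head must reconstruct the withheld continuation.
    logits = lm_head(upper_lm(h_cond))  # (batch, prompt_len, vocab)

    n = min(logits.size(1), continuation.size(1))
    return F.cross_entropy(logits[:, :n].reshape(-1, VOCAB),
                           continuation[:, :n].reshape(-1))

tokens = torch.randint(0, VOCAB, (2, 32))
loss = cocon_self_supervised_loss(tokens, split_idx=16)
loss.backward()
```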
Controllable text generation (CTG) aims to generate text with desired attributes, and decoding-time-...
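For context on the decoding-time family this line alludes to, here is a minimal sketch of one generic scheme: the base LM's next-token distribution is reweighted at each step by an attribute scorer, so no LM weights are updated. The scorer and the mixing weight `alpha` are illustrative placeholders, not any specific published method.

```python
import torch
import torch.nn.functional as F

VOCAB = 1000

def steered_next_token(lm_logits, attribute_scores, alpha=2.0):
    """lm_logits, attribute_scores: (VOCAB,) scores; higher attribute score = more on-attribute."""
    log_p_lm = F.log_softmax(lm_logits, dim=-1)
    log_p_attr = F.log_softmax(attribute_scores, dim=-1)
    combined = log_p_lm + alpha * log_p_attr      # interpolate in log space
    return torch.multinomial(F.softmax(combined, dim=-1), 1).item()

# Toy usage with random scores standing in for real model outputs.
next_tok = steered_next_token(torch.randn(VOCAB), torch.randn(VOCAB))
```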
Existing text scaling methods often require a large corpus, struggle with short texts, or require la...
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) ...
Controllable text generation has advanced considerably in recent years. Yet existing methods are ...
Controllable text generation systems often leverage control codes to direct various properties of th...
We explore the idea of compressing the prompts used to condition language models, and show that comp...
High-quality instruction-tuning data is critical to improving LLM capabilities. Existing data collec...
Large language models (LMs) based on Transformers can generate plausible long texts. In this pap...
The dominant approaches for controlling language models achieve prominence in controlling high-level...
Prefix-tuning is a powerful lightweight technique for adapting a large pre-trained language model to...
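A minimal sketch of the core prefix-tuning idea named above: trainable prefix key/value vectors are prepended inside a frozen attention layer, and only those vectors receive gradients. Shapes, projection names, and the single-layer setup are illustrative assumptions, not tied to any specific implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, HEADS, PREFIX_LEN = 64, 4, 8
HEAD_DIM = DIM // HEADS

# Frozen pretrained projections (stand-ins for one attention layer of the LM).
q_proj, k_proj, v_proj = (nn.Linear(DIM, DIM) for _ in range(3))
for proj in (q_proj, k_proj, v_proj):
    for p in proj.parameters():
        p.requires_grad = False

# The only trainable parameters: prefix keys and values for this layer.
prefix_k = nn.Parameter(torch.randn(1, HEADS, PREFIX_LEN, HEAD_DIM) * 0.02)
prefix_v = nn.Parameter(torch.randn(1, HEADS, PREFIX_LEN, HEAD_DIM) * 0.02)

def split_heads(x):
    b, t, _ = x.shape
    return x.view(b, t, HEADS, HEAD_DIM).transpose(1, 2)   # (b, heads, t, head_dim)

def attention_with_prefix(hidden):
    """hidden: (batch, seq_len, DIM). Prefix K/V are prepended before attention."""
    b = hidden.size(0)
    q = split_heads(q_proj(hidden))
    k = torch.cat([prefix_k.expand(b, -1, -1, -1), split_heads(k_proj(hidden))], dim=2)
    v = torch.cat([prefix_v.expand(b, -1, -1, -1), split_heads(v_proj(hidden))], dim=2)
    out = F.scaled_dot_product_attention(q, k, v)           # (b, heads, seq, head_dim)
    return out.transpose(1, 2).reshape(b, -1, DIM)

x = torch.randn(2, 16, DIM)
attention_with_prefix(x).sum().backward()                   # gradients reach only prefix_k/v
```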
Current efficient fine-tuning methods (e.g., adapters, prefix-tuning, etc.) have optimized condition...
The recently introduced Controlled Text Reduction (CTR) task isolates the text generation step withi...
This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We fo...
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG...
Deep neural networks have recently achieved remarkable empirical success in text generation tasks. U...