Pretrained language models have demonstrated extraordinary capabilities in language generation. However, real-world tasks often require controlling the distribution of generated text in order to mitigate bias, promote fairness, and achieve personalization. Existing techniques for controlling the distribution of generated text only work with quantified distributions, which require pre-defined categories, proportions of the distribution, or an existing corpus following the desired distributions. However, many important distributions, such as personal preferences, are unquantified. In this work, we tackle the problem of generating text following arbitrary distributions (quantified and unquantified) by proposing Nano, a few-shot human-in-the-lo...
Recently, there has been an increasing interest in models that generate natural language explanation...
A few-shot generative model should be able to generate data from a novel distribution by only observ...
Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models...
Pretrained language models have demonstrated extraordinary capabilities in language generation. Howe...
The dominant approaches for controlling language models achieve prominence in controlling high-level...
While Reinforcement Learning from Human Feedback (RLHF) aligns Large Language Models (LLMs) with gen...
Business analytics and machine learning have become essential success factors for various industries...
Pretrained Transformer-based language models (LMs) display remarkable natural language generation ca...
Generative foundation models are susceptible to implicit biases that can arise from extensive unsupe...
Controllable text generation systems often leverage control codes to direct various properties of th...
Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning Large Language Models...
We test whether distributional models can do one-shot learning of definitional properties from text ...
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unsee...
Controllable text generation (CTG) aims to generate text with desired attributes, and decoding-time-...
One of the most robust findings of studies of human-human dialogue is that people adapt their uttera...