Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve. In practice, we observe a "wayward" behavior between the task solved by continuous prompts and their nearest-neighbor discrete projections: we can find continuous prompts that solve a task while projecting to an arbitrary text (e.g., the definition of a different or even a contradictory task), yet remain within a very small (2%) margin of the best continuous prompt of the same size for the task. We provide intuitions behind this odd and s...
The prompt-based learning paradigm has gained much research attention recently. It has achieved stat...
Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform...
With the spread of the use of Text2Img diffusion models such as DALL-E 2, Imagen, Midjourney and St...
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) ...
We explore the idea of compressing the prompts used to condition language models, and show that comp...
Prompt-based models have gathered a lot of attention from researchers due to their remarkable advanc...
Recent works have shown that attaching prompts to the input is effective at conditioning Language Mo...
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for perform...
Prompts have been the center of progress in advancing language models' zero-shot and few-shot perfor...
When primed with only a handful of training samples, very large, pretrained language models such as ...
We investigate the efficacy of visual prompting to adapt large-scale models in vision. Following the...
Large language models (LLMs) transfer well to new tasks out-of-the-box simply given a natural langua...
This paper presents AutoHint, a novel framework for automatic prompt engineering and optimization fo...
Since the emergence of large language models, prompt learning has become a popular method for optimi...
In the context of continual learning, prototypes-as representative class embeddings-offer advantages...