Since the emergence of large language models, prompt learning has become a popular method for optimizing and customizing these models. Special prompts, such as Chain-of-Thought, have even revealed previously unknown reasoning capabilities within these models. However, progress in discovering effective prompts has been slow, driving a desire for general prompt optimization methods. Unfortunately, few existing prompt learning methods satisfy the criteria of being truly "general", i.e., automatic, discrete, black-box, gradient-free, and interpretable all at once. In this paper, we introduce metaheuristics, a branch of discrete non-convex optimization with more than 100 known methods, as a promising approach to prompt learning. Within our para...
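To make the framing concrete, the sketch below runs a generic metaheuristic (here, simulated annealing) over discrete edits to a prompt, guided only by a black-box score. The candidate phrases, the score_prompt stand-in, and the annealing schedule are illustrative assumptions for this sketch, not the paper's specific algorithms; in practice the score would come from evaluating an LLM on a labelled development set.

```python
import math
import random

# Hypothetical pool of instruction phrases the search can combine (assumption for the sketch).
CANDIDATE_PHRASES = [
    "Let's think step by step.",
    "Answer concisely.",
    "Explain your reasoning before answering.",
    "You are an expert assistant.",
    "Double-check the final answer.",
]

def score_prompt(phrases):
    """Stand-in for dev-set accuracy of a prompt (black-box, gradient-free).
    A real implementation would query an LLM; this toy objective keeps the sketch runnable."""
    target = {"Let's think step by step.", "Double-check the final answer."}
    return len(target.intersection(phrases)) / len(target)

def neighbor(phrases):
    """Propose a small discrete edit: add, drop, or swap one phrase."""
    current = list(phrases)
    move = random.choice(["add", "drop", "swap"])
    if move == "add" or not current:
        candidate = random.choice(CANDIDATE_PHRASES)
        if candidate not in current:
            current.append(candidate)
    elif move == "drop":
        current.remove(random.choice(current))
    else:
        current[random.randrange(len(current))] = random.choice(CANDIDATE_PHRASES)
    return current

def simulated_annealing(steps=200, t0=1.0, cooling=0.98):
    """Generic simulated annealing over discrete prompt edits."""
    state = [random.choice(CANDIDATE_PHRASES)]
    current_score = score_prompt(state)
    best, best_score = state, current_score
    temp = t0
    for _ in range(steps):
        proposal = neighbor(state)
        proposal_score = score_prompt(proposal)
        # Always accept improvements; accept worse moves with temperature-scaled probability.
        if proposal_score >= current_score or random.random() < math.exp(
            (proposal_score - current_score) / max(temp, 1e-6)
        ):
            state, current_score = proposal, proposal_score
            if current_score > best_score:
                best, best_score = state, current_score
        temp *= cooling
    return best, best_score

if __name__ == "__main__":
    prompt, acc = simulated_annealing()
    print("Best prompt:", " ".join(prompt), "| score:", acc)
```

Because the search only needs a scalar score per candidate prompt, any population-based or local-search metaheuristic (genetic algorithms, tabu search, etc.) could be dropped into the same loop.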
Recent works have shown that attaching prompts to the input is effective at conditioning Language Mo...
Black-Box Tuning (BBT) is a derivative-free approach to optimize continuous prompt tokens prepended ... (a generic gradient-free sketch in this spirit appears after this list)
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Prompt-based learning has been an effective paradigm for large pretrained language models (LLM), ena...
This paper presents AutoHint, a novel framework for automatic prompt engineering and optimization fo...
Speech representations learned from Self-supervised learning (SSL) models can benefit various speech...
Prompt learning is a new paradigm in the Natural Language Processing (NLP) field which has shown imp...
Natural language prompts have been shown to facilitate cross-task generalization for large language ...
Large language models (LLMs) transfer well to new tasks out-of-the-box simply given a natural langua...
Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural ...
When primed with only a handful of training samples, very large, pretrained language models such as ...
Prompt tuning (PT) is an effective approach to adapting pre-trained language models to downstream ta...
Enhancing the zero-shot performance of instruction-following models requires heavy computation, eith...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
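Several of the abstracts above describe derivative-free optimization of a continuous soft prompt against a black-box model (e.g., BBT). The sketch below illustrates that general idea with a minimal (1+1) evolution strategy; the dimension, the toy loss, and the step-size rule are assumptions made so the example runs without model access, and this is not BBT's own algorithm.

```python
import numpy as np

# Hypothetical setup: a continuous "soft prompt" z of dimension D is prepended to the model
# input. The model is treated as a black box, so a toy quadratic objective stands in for the
# dev-set loss; real use would query the LLM through its API.
D = 16
rng = np.random.default_rng(0)
TARGET = rng.normal(size=D)  # unknown optimum of the toy objective (assumption)

def black_box_loss(z):
    """Stand-in for the non-differentiable, API-only dev-set loss of prompt z."""
    return float(np.sum((z - TARGET) ** 2))

def one_plus_one_es(steps=500, sigma=0.5):
    """Minimal (1+1) evolution strategy: gradient-free search over the prompt vector."""
    z = np.zeros(D)
    best = black_box_loss(z)
    for _ in range(steps):
        candidate = z + sigma * rng.normal(size=D)
        loss = black_box_loss(candidate)
        if loss < best:      # keep the mutation only if it lowers the loss
            z, best = candidate, loss
            sigma *= 1.1     # crude step-size adaptation: widen after success
        else:
            sigma *= 0.98    # shrink after failure
    return z, best

if __name__ == "__main__":
    z_star, loss = one_plus_one_es()
    print(f"final loss: {loss:.4f}")
```

Only the loss oracle touches the model, which is what makes this family of methods applicable when gradients and internal activations are unavailable.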