This paper presents AutoHint, a novel framework for automatic prompt engineering and optimization for Large Language Models (LLMs). While LLMs have demonstrated a remarkable ability to produce high-quality annotations across various tasks, the key to applying this ability to a specific task lies in developing high-quality prompts. We therefore propose a framework that inherits the merits of both in-context learning and zero-shot learning by incorporating enriched instructions, derived from input-output demonstrations, into the original prompt. We refer to this enrichment as the hint and propose a framework to generate it automatically from labeled data. More concretely, starting from an initial prompt, our method first instructs an LLM to deduce new ...
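Based on the description above, a minimal sketch of such a hint-enrichment loop might look like the following. The `ask_llm` helper, the meta-prompt wording, and the error-driven hint-deduction step are assumptions made for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of an AutoHint-style hint-generation loop (assumed, not from the paper).
# `ask_llm` stands in for any chat-completion call; plug in your own LLM client.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM of choice")

def annotate(prompt: str, example_input: str) -> str:
    """Apply the current task prompt to one input and return the LLM's label."""
    return ask_llm(f"{prompt}\n\nInput: {example_input}\nAnswer:").strip()

def deduce_hint(prompt: str, errors: list[tuple[str, str, str]]) -> str:
    """Ask the LLM to summarize what the prompt is missing, given
    (input, gold label, predicted label) triples it labeled incorrectly."""
    demos = "\n".join(
        f"Input: {x}\nExpected: {gold}\nPredicted: {pred}"
        for x, gold, pred in errors
    )
    meta_prompt = (
        "The instruction below was used to label the examples, but the predictions "
        f"were wrong.\nInstruction: {prompt}\n\n{demos}\n\n"
        "Write one concise hint that, added to the instruction, would correct these mistakes."
    )
    return ask_llm(meta_prompt).strip()

def autohint_step(prompt: str, labeled_data: list[tuple[str, str]]) -> str:
    """One round of prompt enrichment: collect errors, deduce a hint, append it."""
    errors = []
    for x, gold in labeled_data:
        pred = annotate(prompt, x)
        if pred != gold:
            errors.append((x, gold, pred))
    if not errors:
        return prompt  # nothing to fix this round
    hint = deduce_hint(prompt, errors)
    return f"{prompt}\nHint: {hint}"
```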
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for perform...
Few-shot abstractive summarization has become a challenging task in natural language generation. To ...
Speech representations learned from Self-supervised learning (SSL) models can benefit various speech...
Recent works have shown that attaching prompts to the input is effective at conditioning Language Mo...
Large language models (LLMs) transfer well to new tasks out-of-the-box simply given a natural langua...
Enhancing the zero-shot performance of instruction-following models requires heavy computation, eith...
In recent years, there has been significant progress in developing pre-trained language models for N...
In-context learning is a recent paradigm in natural language understanding, where a large pre-traine...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
One of the most impressive results of recent NLP history is the ability of pre-trained language mode...
Prompt-based learning has been an effective paradigm for large pretrained language models (LLM), ena...
Since the emergence of large language models, prompt learning has become a popular method for optimi...
Large language models (LLMs) have shown promising performance on various NLP tasks via task promptin...
High-quality instruction-tuning data is critical to improving LLM capabilities. Existing data collec...
This work studies a challenging yet more realistic setting for zero-shot cross-task generalization: ...