In-context learning (ICL), i.e., showing LLMs only a few task-specific demonstrations, has led to downstream gains with no task-specific fine-tuning required. However, LLMs are sensitive to the choice of prompts, so a crucial research question is how to select good demonstrations for ICL. One effective strategy is to leverage semantic similarity between the ICL demonstrations and test inputs by using a text retriever; this is sub-optimal, however, because it does not consider the LLM's existing knowledge about the task. From prior work (Min et al., 2022), we already know that the labels paired with the demonstrations bias the model predictions. This leads us to our hypothesis: whether considering the LLM's existing knowledge about the task, espe...
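The retrieval-based selection strategy mentioned above can be sketched as follows. This is a minimal illustration, not the method of any cited paper: it stands in for a dense text retriever with a toy bag-of-words cosine similarity, and the `pool` of labeled examples is hypothetical.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_demonstrations(pool, test_input, k=2):
    """Return the k pool examples most similar to the test input.

    pool: list of (input_text, label) pairs. A real system would use
    a learned dense retriever; bag-of-words cosine is a stand-in.
    """
    q = Counter(test_input.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: cosine(Counter(ex[0].lower().split()), q),
        reverse=True,
    )
    return scored[:k]

# Hypothetical sentiment-style demonstration pool for illustration.
pool = [
    ("the movie was fantastic", "positive"),
    ("terrible plot and acting", "negative"),
    ("compiler error on line three", "neutral"),
]
demos = select_demonstrations(pool, "terrible acting in this plot", k=1)
# Format retrieved demonstrations into an ICL prompt prefix.
prompt = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in demos)
```

Note that this selection is purely input-side: as the passage argues, it ignores what the LLM already knows about the task, and the labels attached to the retrieved demonstrations can themselves bias the model's predictions.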
Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, whe...
Humans possess a remarkable ability to assign novel interpretations to linguistic expressions, enabl...
Pretrained large language models (LLMs) are strong in-context learners that are able to perform few-...
In-context learning (ICL) has become the default method for using large language models (LLMs), maki...
In-Context Learning (ICL) over Large language models (LLMs) aims at solving previously unseen tasks ...
Large Language models (LLMs) possess the capability to engage In-context Learning (ICL) by leveragin...
Ever since the development of GPT-3 in the natural language processing (NLP) field, in-context learn...
In-context learning is a recent paradigm in natural language understanding, where a large pre-traine...
Large language models (LMs) are able to in-context learn -- perform a new task via inference alone b...
With a handful of demonstration examples, large-scale language models show strong capability to perf...
Pre-trained models of source code have gained widespread popularity in many code intelligence tasks....
Through in-context learning (ICL), large-scale language models are effective few-shot learners witho...
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in...
Large language models (LLMs) exhibit remarkable performance improvement through in-context learning ...
Large language models have exhibited emergent abilities, demonstrating exceptional performance acros...