One of the most impressive results in recent NLP history is the ability of pre-trained language models to solve new tasks in a zero-shot setting. To achieve this, NLP tasks are framed as natural language prompts, and the model generates a response indicating the predicted output. Nonetheless, performance in such settings often lags far behind that of supervised counterparts, suggesting a large space for potential improvement. In this paper, we explore methods to utilize unlabeled data to improve zero-shot performance. Specifically, we take advantage of the fact that multiple prompts can be used to specify a single task, and propose to regularize prompt consistency, encouraging consistent predictions over this diverse set of prompts. Our method makes it p...
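The abstract is cut off above, but its core idea, penalizing disagreement between predictions obtained under different prompts for the same unlabeled input, can be illustrated concretely. The following minimal PyTorch sketch is not the paper's implementation; the function name, the symmetric pairwise KL penalty, and the shape conventions are assumptions made here for illustration only.

```python
import torch
import torch.nn.functional as F

def prompt_consistency_loss(logits_per_prompt: torch.Tensor) -> torch.Tensor:
    """Illustrative consistency penalty over prompts (hypothetical helper).

    logits_per_prompt: (num_prompts, num_labels) label logits produced by the
    same model for one unlabeled example, verbalized through different prompts.
    The penalty is the average pairwise KL divergence between the resulting
    label distributions, so it is zero only when all prompts agree.
    """
    log_probs = F.log_softmax(logits_per_prompt, dim=-1)  # (P, L)
    probs = log_probs.exp()
    num_prompts = logits_per_prompt.size(0)
    total, pairs = logits_per_prompt.new_zeros(()), 0
    for i in range(num_prompts):
        for j in range(num_prompts):
            if i == j:
                continue
            # KL(p_i || p_j): F.kl_div takes log-probs of p_j and probs of p_i
            total = total + F.kl_div(log_probs[j], probs[i], reduction="sum")
            pairs += 1
    return total / max(pairs, 1)

# Toy usage: three prompts, four candidate labels for one unlabeled input.
logits = torch.randn(3, 4, requires_grad=True)
loss = prompt_consistency_loss(logits)
loss.backward()  # gradients push the three label distributions toward agreement
```

In a training loop, a penalty like this would typically be added to the main objective with a weighting coefficient and computed only on unlabeled inputs, which is what allows unlabeled data to drive the improvement in zero-shot performance.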
This paper presents AutoHint, a novel framework for automatic prompt engineering and optimization fo...
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language mode...
NLP has yielded results that were unimaginable only a few years ago on a wide range of real-world ta...
Large language models have recently been shown to attain reasonable zero-shot ...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
Natural language prompts have been shown to facilitate cross-task generalization for large language ...
Nowadays, owing to the superior capacity of large pre-trained language models (PLMs), the PLM-bas...
There has recently been growing interest in dataset generation due to the superior generative capacity o...
Prompt-based classifiers are an attractive approach for zero-shot classification. However, the preci...
This work studies a challenging yet more realistic setting for zero-shot cross-task generalization: ...
Pretrained large language models (LLMs) are widely used in many sub-fields of natural language proce...
We propose a multitask pretraining approach ZeroPrompt for zero-shot generalization, focusing on tas...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
In recent years, the natural language processing (NLP) community has seen amazing progress in the...
Large-scale pre-trained language models have contributed significantly to natural language processin...