Prompt-based classifiers are an attractive approach to zero-shot classification. However, the precise choice of prompt template and label words can strongly influence performance, with semantically equivalent settings often showing notable performance differences. This discrepancy can be partly attributed to word biases, where the classifier may be biased towards particular classes. One way to address this problem is to optimise classification thresholds on a labelled data set; however, this diminishes some of the advantages of prompt-based classifiers. This paper instead approaches the problem by examining the expected marginal probabilities of the classes: probabilities are reweighted to have a uniform prior over classes, in an unsupervised fashion.
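To make the reweighting idea concrete, the following is a minimal sketch (not the paper's exact procedure): the class prior is estimated as the average predicted probability per class over unlabelled inputs, each class is then scaled by the inverse of that estimated prior, and the step is repeated until the expected marginal is approximately uniform. The function name and the iterative scheme are illustrative assumptions.

```python
import numpy as np

def reweight_to_uniform_prior(probs: np.ndarray, num_iters: int = 10) -> np.ndarray:
    """Reweight per-example class probabilities so that the expected
    marginal (average probability mass per class) is roughly uniform.

    probs: array of shape (num_examples, num_classes), rows summing to 1.
    Illustrative sketch only; not necessarily the paper's exact method.
    """
    reweighted = probs.copy()
    for _ in range(num_iters):
        marginal = reweighted.mean(axis=0)                    # estimated class prior
        reweighted = reweighted / marginal                    # upweight under-predicted classes
        reweighted /= reweighted.sum(axis=1, keepdims=True)   # renormalise each row
    return reweighted

# Example: a zero-shot classifier biased towards class 0 (hypothetical data).
rng = np.random.default_rng(0)
raw = rng.dirichlet([4.0, 1.0, 1.0], size=1000)   # skewed marginal, roughly [0.67, 0.17, 0.17]
balanced = reweight_to_uniform_prior(raw)
print(raw.mean(axis=0))        # biased class prior
print(balanced.mean(axis=0))   # close to uniform, roughly [0.33, 0.33, 0.33]
```

Because no labels are used, the correction relies only on unlabelled inputs, which preserves the zero-shot character of the classifier.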