Recent prompt-based approaches allow pretrained language models to achieve strong performance on few-shot finetuning by reformulating downstream tasks as language modeling problems. In this work, we demonstrate that, despite its advantages in low-data regimes, finetuned prompt-based models for sentence pair classification tasks still suffer from a common pitfall: adopting inference heuristics based on lexical overlap, e.g., incorrectly assuming that a sentence pair has the same meaning because the two sentences consist of the same set of words. Interestingly, we find that this particular inference heuristic is significantly less present in the zero-shot evaluation of the prompt-based model, indicating how finetuning can be destructive to useful ...
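The lexical-overlap failure mode described above can be illustrated with a minimal sketch (not code from the paper): a sentence pair whose word sets are identical but whose meanings differ is exactly the kind of input on which an overlap-based heuristic mislabels the pair as paraphrases or entailment.

```python
def same_word_set(s1: str, s2: str) -> bool:
    """Return True if the two sentences contain exactly the same set of words."""
    return set(s1.lower().split()) == set(s2.lower().split())

# Identical bag of words, different meanings: a model relying on a
# lexical-overlap heuristic would wrongly treat these as equivalent.
premise = "the doctor visited the lawyer"
hypothesis = "the lawyer visited the doctor"

print(same_word_set(premise, hypothesis))  # True, yet the meanings differ
```

This is the diagnostic idea behind challenge sets such as word-order-swapped pairs: any model whose predictions track `same_word_set` rather than meaning will fail on them.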
Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by...
We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstre...
To obtain high-quality sentence embeddings from pretrained language models (PLMs), they must either ...
Recent few-shot learning methods, such as parameter-efficient fine-tuning (PEFT) and pattern exploit...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Prompt-tuning has shown appealing performance in few-shot classification by virtue of its capability...
Pretrained Language Models (LMs) have demonstrated the ability to perform numerical reasoning by extrapo...
When primed with only a handful of training samples, very large, pretrained language models such as ...
Pretraining deep neural networks to perform language modeling - that is, to reconstruct missing word...
In this paper, we explore how to utilize pre-trained language models to perform few-shot text classif...
Large-scale pre-trained language models have contributed significantly to natural language processin...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural languag...
Large language models have recently been shown to attain reasonable zero-shot ...
Many believe human-level natural language inference (NLI) has already been achieved. In reality, mod...