Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks. While both types of models have achieved promising few-shot learning performance, their potential for zero-shot learning has been underexplored. In this paper, we present a simple approach that uses both types of PLMs for fully zero-shot learning of NLU tasks without requiring any task-specific data: A unidirectional PLM generates class-conditioned texts guided by prompts, which are used as the training data for fine-t...
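To make the described pipeline concrete, the following is a minimal sketch of the generate-then-fine-tune idea, assuming GPT-2 as the unidirectional PLM, a hypothetical binary sentiment task, and hand-written class-conditioned prompts; the prompts, labels, and sampling settings are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: class-conditioned text generation with a unidirectional PLM,
# producing synthetic training data for a bidirectional PLM classifier.
# Assumptions: GPT-2 as the generator, a toy sentiment task, made-up prompts.
from transformers import AutoTokenizer, AutoModelForCausalLM

gen_tok = AutoTokenizer.from_pretrained("gpt2")
gen_lm = AutoModelForCausalLM.from_pretrained("gpt2")

# Class-conditioned prompts steer generation toward each label.
prompts = {
    "positive": 'A movie review with positive sentiment: "',
    "negative": 'A movie review with negative sentiment: "',
}

synthetic = []  # (text, label) pairs used as training data
for label, prompt in prompts.items():
    inputs = gen_tok(prompt, return_tensors="pt")
    outputs = gen_lm.generate(
        **inputs,
        do_sample=True,           # sampling yields diverse class-conditioned texts
        top_p=0.9,
        max_new_tokens=64,
        num_return_sequences=8,
        pad_token_id=gen_tok.eos_token_id,
    )
    for seq in outputs:
        # Keep only the newly generated continuation, not the prompt itself.
        text = gen_tok.decode(seq[inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
        synthetic.append((text, label))

# The (text, label) pairs can then be used to fine-tune a bidirectional PLM
# (e.g., BERT) as an ordinary supervised classifier, with no human-labeled
# task-specific data.
```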
Large-scale pre-trained language models have contributed significantly to natural language processin...
Natural language processing (NLP) techniques have significantly improved with the introduction of pre-trained l...
This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We fo...
Nowadays, owing to the superior capacity of large pre-trained language models (PLMs), the PLM-bas...
In recent years, the natural language processing (NLP) community has seen amazing progress in the...
There has been growing interest in dataset generation recently due to the superior generative capacity o...
One of the most impressive results in recent NLP history is the ability of pre-trained language mode...
The task of data-to-text generation amounts to describing structured data, such as RDF triples, in f...
Text Generation aims to produce plausible and readable text in a human language from input data. The...
Large language models have recently been shown to attain reasonable zero-shot ...
Recent works have shown promising results of prompt tuning in stimulating pre-trained language model...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural languag...
Traditional text classification approaches often require a good amount of labeled data, which is dif...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Pretraining deep neural networks to perform language modeling - that is, to reconstruct missing word...