The natural language inference (NLI) generation task is to generate a textual hypothesis given a textual premise and a logical relation between the two. In practice, this task is useful for data augmentation and controllable text generation. In this paper, we propose language models with prompt and dynamic demonstration (LM-PDD) to tackle this problem in few-shot settings. Our framework outperforms standard fine-tuned models in low-resource conditions, achieving an average 8% absolute improvement on the SNLI and MNLI datasets, and results on 13 natural language classification tasks further show that our dynamic demonstration method generalizes well.

Comments: 13 pages
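To make the core idea concrete, here is a minimal Python sketch of how a prompt with dynamically selected demonstrations might be assembled for NLI generation. The prompt template, the Jaccard-similarity retrieval, and the label verbalizers below are illustrative assumptions, not the paper's exact design; "dynamic" here simply means the demonstrations are re-selected per input rather than fixed.

```python
# Minimal sketch of prompt construction with dynamically selected
# demonstrations for NLI generation. Hypothetical: the template, the
# Jaccard-based retrieval, and the verbalizers are illustrative
# stand-ins, not the LM-PDD paper's exact design.

from typing import List, Tuple

# Verbalizers mapping logical relations to prompt connectives (assumed).
VERBALIZERS = {
    "entailment": "Yes",
    "contradiction": "No",
    "neutral": "Maybe",
}

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity used as a cheap retrieval score."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def select_demonstrations(
    premise: str,
    pool: List[Tuple[str, str, str]],  # (premise, relation, hypothesis)
    k: int = 2,
) -> List[Tuple[str, str, str]]:
    """Pick the k pool examples whose premises are most similar to the
    input premise, so the demonstrations change with every query."""
    return sorted(pool, key=lambda ex: jaccard(premise, ex[0]), reverse=True)[:k]

def build_prompt(premise: str, relation: str,
                 demos: List[Tuple[str, str, str]]) -> str:
    """Concatenate demonstrations and the query into a single LM prompt,
    leaving an open quote for the model to complete with a hypothesis."""
    parts = []
    for p, r, h in demos:
        parts.append(f'"{p}"? {VERBALIZERS[r]}, "{h}".')
    parts.append(f'"{premise}"? {VERBALIZERS[relation]}, "')
    return " ".join(parts)

if __name__ == "__main__":
    pool = [
        ("A man is playing a guitar on stage.", "entailment",
         "A person is performing music."),
        ("A dog runs across a snowy field.", "contradiction",
         "The dog is sleeping indoors."),
        ("Two children are reading in a library.", "neutral",
         "The children are siblings."),
    ]
    premise = "A woman is playing a violin in the park."
    demos = select_demonstrations(premise, pool, k=2)
    print(build_prompt(premise, "entailment", demos))
    # A pretrained LM would complete the trailing open quote with a
    # generated hypothesis satisfying the requested relation.
```

In a few-shot setting like the one the abstract describes, a retrieval scheme of this kind replaces a fixed demonstration set, which is what lets the same small labeled pool serve very different input premises.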