Building natural language processing (NLP) models is challenging in low-resource scenarios where only limited data are available. Optimization-based meta-learning algorithms achieve promising results in such scenarios by adapting a well-generalized model initialization to handle new tasks. Nonetheless, these approaches suffer from the memorization overfitting issue, where the model tends to memorize the meta-training tasks while ignoring support sets when adapting to new tasks. To address this issue, we propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation. Specifically, we introduce a task-specific memory module to store support set information and constru...
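The rest of this abstract is truncated, so MemIML's exact architecture is not recoverable here. As a rough illustration of the general idea of a task-specific memory over support-set encodings, here is a minimal sketch; the class and method names (SupportMemory, write, read) and the attention-based readout are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of a task-specific key-value memory over support-set
# encodings. All names and design choices here are hypothetical
# illustrations, not the paper's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupportMemory(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.key_proj = nn.Linear(hidden_dim, hidden_dim)
        self.keys = None    # [num_support, hidden_dim]
        self.values = None  # [num_support, hidden_dim]

    def write(self, support_reprs: torch.Tensor) -> None:
        # Store the current task's support set; overwritten per task.
        self.keys = self.key_proj(support_reprs)
        self.values = support_reprs

    def read(self, query_reprs: torch.Tensor) -> torch.Tensor:
        # Attend over stored support entries and return a readout the
        # model can fuse with the query representation, so predictions
        # depend on the support set rather than memorized tasks.
        scores = query_reprs @ self.keys.t() / self.keys.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)
        return attn @ self.values

# Usage: write once per task, then read for every query example.
mem = SupportMemory(hidden_dim=64)
mem.write(torch.randn(5, 64))           # 5 encoded support examples
readout = mem.read(torch.randn(8, 64))  # 8 query examples -> [8, 64]
```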
Pretrained language models have shown success in various areas of natural language processing, inclu...
This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We fo...
The rise of deep neural networks has led to several breakthroughs for semantic segmentation. In spit...
Meta-learning has achieved promising performance in low-resource text classification, which aims to ...
When experience is scarce, models may have insufficient information to adapt to a new task. In this ...
Deep learning has been the mainstream technique in the natural language processing (NLP) area. However, ...
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-...
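MetaICL's core move is to meta-train a language model to do few-shot learning by conditioning on k labeled examples concatenated with a test input. Below is a minimal sketch of assembling such a k-shot sequence; the separator and formatting are assumptions, not MetaICL's exact template.

```python
# Minimal sketch of building a k-shot in-context sequence of the kind
# MetaICL meta-trains on: k labeled demonstrations concatenated with a
# test input. The separator/format is an assumption, not MetaICL's spec.
def build_icl_prompt(demos, test_input, sep="\n"):
    parts = [f"{x}{sep}{y}" for x, y in demos]
    parts.append(test_input)
    return (sep * 2).join(parts)

demos = [("great movie!", "positive"), ("utterly dull.", "negative")]
print(build_icl_prompt(demos, "a charming, heartfelt film."))
# The meta-trained LM is then asked to generate or score the label
# that should follow the test input.
```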
Prompt tuning (PT) is an effective approach to adapting pre-trained language models to downstream ta...
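Prompt tuning in the sense of this abstract keeps the pretrained model frozen and learns only a small set of continuous "virtual token" embeddings prepended to the input. A minimal sketch follows, assuming a PyTorch backbone that accepts input embeddings; the class name SoftPrompt is hypothetical.

```python
# Minimal sketch of soft prompt tuning: a small matrix of trainable
# "virtual token" embeddings is prepended to the input embeddings of a
# frozen language model, and only the prompt is updated.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_virtual_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(
            torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: [batch, seq_len, embed_dim]
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Only soft_prompt.parameters() go to the optimizer; the backbone
# model's weights stay frozen.
soft_prompt = SoftPrompt(num_virtual_tokens=20, embed_dim=768)
x = torch.randn(4, 16, 768)   # embedded input tokens
extended = soft_prompt(x)     # [4, 36, 768]
```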
Neural text matching models have been used in a range of applications such as question answering and...
Day by day, machine learning is changing our lives in ways we could not have imagined just 5 years a...
Neural machine translation (NMT) models have achieved state-of-the-art translation quality with a la...
The transformer architecture and its variants have shown remarkable success across many machine learning t...
Text style transfer (TST) without parallel data has achieved some practical success. However, most o...
Optimization-based meta-learning aims to learn an initialization so that a new unseen task can be le...
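This is the MAML-style setup: an outer loop learns an initialization whose one-step (or few-step) adapted parameters perform well on each task's query set. Here is a minimal self-contained sketch with a toy linear model; the helper names (maml_step, linear) are illustrative, not from the abstract.

```python
# Minimal sketch of the optimization-based meta-learning (MAML-style)
# objective: adapt the initialization on a task's support set with one
# gradient step, then evaluate the adapted weights on the query set.
import torch
import torch.nn.functional as F

def maml_step(params, support, query, loss_fn, model_fn, inner_lr=0.01):
    xs, ys = support
    xq, yq = query
    # Inner loop: one adaptation step on the support set.
    inner_loss = loss_fn(model_fn(params, xs), ys)
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer objective: query loss under the adapted parameters; its
    # gradient w.r.t. the original params updates the initialization.
    return loss_fn(model_fn(adapted, xq), yq)

# A toy linear model keeps the sketch self-contained.
def linear(params, x):
    w, b = params
    return x @ w + b

w = torch.randn(3, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
support = (torch.randn(5, 3), torch.randn(5, 1))
query = (torch.randn(5, 3), torch.randn(5, 1))
outer_loss = maml_step([w, b], support, query, F.mse_loss, linear)
outer_loss.backward()  # gradients flow back to the initialization (w, b)
```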
This paper studies multi-task training of retrieval-augmented generation models for knowledge-intens...