In recent years, the natural language processing (NLP) community has seen remarkable progress in the development of pre-trained language models (PLMs). The PLM paradigm does not require labeled data, so training can be scaled up by exploiting colossal, freely available online corpora through self-supervised objectives. Language models (LMs) such as GPT, BERT, and T5 have achieved strong performance on a wide range of NLP tasks. Meanwhile, research on zero-shot and few-shot text classification has received increasing attention. Since labeling can be costly and time-consuming, performing data augmentation (DA) and enhancing the current framework in a more effective and automatic way remains challenging. Exis...
Providing pretrained language models with simple task descriptions in natural language enables them ...
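As a minimal sketch of this idea in Python, the snippet below runs zero-shot classification driven by a natural-language task description via the Hugging Face transformers pipeline; the model name, labels, and hypothesis template are illustrative assumptions, not the setup of the work excerpted here.

    # Zero-shot classification from a natural-language task description.
    # Assumes the `transformers` library; model and template are illustrative.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    text = "The team clinched the title with a last-minute goal."
    labels = ["sports", "politics", "technology"]

    # The hypothesis template plays the role of the simple task description.
    result = classifier(text, candidate_labels=labels,
                        hypothesis_template="This text is about {}.")
    print(result["labels"][0])  # highest-scoring label, e.g. "sports"

Here the NLI-based pipeline scores each candidate label by entailment between the input and the filled-in description, which is one common way to realize description-driven zero-shot classification.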
Existing Zero-Shot Learning (ZSL) techniques for text classification typically assign a label to a p...
State-of-the-art (SOTA) language models have demonstrated remarkable capabilities in tackling NLP tasks they have not b...
Data augmentation techniques are widely used for enhancing the performance of machine learning model...
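As a toy illustration of one common text-augmentation recipe, the sketch below applies EDA-style random swap and random deletion to a sentence; the operations and parameters are illustrative assumptions, not the specific techniques discussed in this work.

    # EDA-style text augmentation sketch: random swap and random deletion.
    # Purely illustrative; real pipelines also use synonym replacement,
    # back-translation, etc.
    import random

    def random_swap(tokens, n=1):
        tokens = tokens[:]
        if len(tokens) < 2:
            return tokens
        for _ in range(n):
            i, j = random.sample(range(len(tokens)), 2)
            tokens[i], tokens[j] = tokens[j], tokens[i]
        return tokens

    def random_delete(tokens, p=0.1):
        kept = [t for t in tokens if random.random() > p]
        return kept or [random.choice(tokens)]  # never return an empty sentence

    def augment(sentence, n_aug=4):
        tokens = sentence.split()
        return [" ".join(random.choice([random_swap, random_delete])(tokens))
                for _ in range(n_aug)]

    print(augment("data augmentation improves low-resource text classifiers"))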
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
Domain-specific text classification faces the challenge of scarce labeled data due to the high cost ...
Prompt-based learning has shown its effectiveness in few-shot text classification. One important fac...
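A key ingredient in prompt-based learning is the template and verbalizer; the sketch below shows the basic cloze-style pattern with a masked language model, where the model name, template, and verbalizer are illustrative assumptions rather than the method described here.

    # Prompt-based classification sketch: a template turns the input into a
    # cloze question and a verbalizer maps label words to classes.
    # Model, template, and verbalizer are illustrative assumptions.
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

    verbalizer = {"positive": "great", "negative": "terrible"}

    def classify(text):
        prompt = f"{text} It was {tokenizer.mask_token}."
        inputs = tokenizer(prompt, return_tensors="pt")
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        with torch.no_grad():
            logits = model(**inputs).logits[0, mask_pos]
        scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
                  for label, word in verbalizer.items()}
        return max(scores, key=scores.get)

    print(classify("The movie was a complete waste of time."))  # -> "negative"

In few-shot setups the same template is typically kept while the PLM is fine-tuned on the handful of labeled examples, so the choice of label words directly shapes the decision boundary.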
In this paper, we explore how to utilize pre-trained language models to perform few-shot text classif...
Our research focuses on solving the zero-shot text classification problem in NLP, with a particular ...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Data augmentation has been an important ingredient for boosting the performance of learned models. Prio...
Traditional text classification approaches often require a substantial amount of labeled data, which is dif...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural languag...
In many machine learning settings, research suggests that the development of training data might hav...