Existing solutions to zero-shot text classification either prompt pre-trained language models, which is sensitive to the choice of templates, or rely on large-scale annotated data from relevant tasks for meta-tuning. In this work, we propose a new paradigm based on self-supervised learning, called self-supervised tuning, which solves zero-shot text classification tasks by tuning language models on unlabeled data. By exploiting the inherent structure of free text, we propose a new learning objective called first sentence prediction to bridge the gap between unlabeled data and text classification tasks. After being tuned to predict the first sentence of a paragraph from the rest, the model is able to...
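The first-sentence-prediction objective described above can be sketched as a data-construction step: each paragraph yields a multiple-choice example whose context is the paragraph minus its first sentence, and whose options mix the true first sentence with first sentences sampled from other paragraphs. This is a minimal illustrative sketch under assumed data formats (the `build_fsp_examples` helper and the dictionary layout are hypothetical; the paper's exact construction may differ).

```python
import random

def build_fsp_examples(paragraphs, num_negatives=3, seed=0):
    """Build first-sentence-prediction examples from unlabeled paragraphs.

    Each paragraph (a list of sentence strings) yields one example: the
    remaining sentences form the context, the true first sentence is the
    correct option, and first sentences drawn from other paragraphs act
    as distractors. Hypothetical format, for illustration only.
    """
    rng = random.Random(seed)
    first_sents = [p[0] for p in paragraphs if p]
    examples = []
    for i, para in enumerate(paragraphs):
        if len(para) < 2:
            continue  # need at least one context sentence besides the first
        negatives = [first_sents[j] for j in range(len(first_sents)) if j != i]
        distractors = rng.sample(negatives, min(num_negatives, len(negatives)))
        options = distractors + [para[0]]
        rng.shuffle(options)
        examples.append({
            "context": " ".join(para[1:]),
            "options": options,
            "label": options.index(para[0]),  # index of the true first sentence
        })
    return examples
```

A model tuned on such examples learns to pick the sentence that best summarizes or opens a passage, which is structurally close to picking a label description for a text at zero-shot inference time.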
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
One of the most impressive results of recent NLP history is the ability of pre-trained language mode...
We study the problem of generating a training-free task-dependent visual classifier from text descri...
In recent years, the community of natural language processing (NLP) has seen amazing progress in the...
Our research focuses on solving the zero-shot text classification problem in NLP, with a particular ...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Traditional text classification approaches often require a good amount of labeled data, which is dif...
Classifying a visual concept merely from its associated online textual source, such as a Wikipedia a...
Existing Zero-Shot Learning (ZSL) techniques for text classification typically assign a label to a p...
Nowadays, owing to the superior capacity of the large pre-trained language models (PLM), the PLM-bas...
There is a growing interest in dataset generation recently due to the superior generative capacity o...
We propose a semi-supervised bootstrap learning framework for few-shot text classification. From a s...
Text classification aims to assign predefined labels to unlabeled sentences, which tend to struggle ...
Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classifica...
Recent few-shot learning methods, such as parameter-efficient fine-tuning (PEFT) and pattern exploit...