A two-stage training paradigm consisting of sequential pre-training and meta-training stages has been widely used in current few-shot learning (FSL) research. Many of these methods use self-supervised learning and contrastive learning to achieve new state-of-the-art results. However, the potential of contrastive learning in both stages of the FSL training paradigm is still not fully exploited. In this paper, we propose a novel contrastive learning-based framework that seamlessly integrates contrastive learning into both stages to improve the performance of few-shot classification. In the pre-training stage, we propose a self-supervised contrastive loss in the form of feature vector vs. feature map and feature map vs. feature map, which uses gl...
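The abstract above mentions a self-supervised contrastive loss over paired features. As a minimal illustration of the general idea (not the paper's specific vector-vs-map formulation, which is truncated here), the following is a generic InfoNCE-style contrastive loss sketch; all names (`info_nce_loss`, `temperature`) are illustrative assumptions:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE-style contrastive loss.

    z1, z2: arrays of shape (batch, dim); z1[i] and z2[i] are a
    positive pair (e.g. two augmented views of the same image).
    Illustrative sketch only -- the paper's feature-vector-vs-feature-map
    loss is not reproduced here.
    """
    # L2-normalise so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by temperature.
    logits = (z1 @ z2.T) / temperature
    # Numerically stable log-softmax over each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; minimise their negative log-likelihood.
    return -np.mean(np.diag(log_probs))
```

Matching views of the same samples should yield a much lower loss than unrelated embeddings, which is the signal such pre-training objectives exploit.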
Few-shot text classification has recently been promoted by the meta-learning paradigm which aims to ...
Deep learning has achieved enormous success in various computer tasks. The excellent performance dep...
Few-shot learning focuses on learning a new visual concept with very limited labelled examples. A su...
Few-shot learning aims to train a model with a limited number of base class samples to classify the ...
Few-shot classification requires deep neural networks to learn generalized representations only from...
A primary trait of humans is the ability to learn rich representations and relationships between ent...
Few-shot learning aims to train models that can be generalized to novel classes with only a few samp...
In this paper, we explore how to utilize a pre-trained language model to perform few-shot text classif...
Different from deep learning with large scale supervision, few-shot learning aims to learn the sampl...
Few-shot learning (FSL) aims to recognize target classes by adapting the prior knowledge learned fro...
Despite impressive progress in deep learning, generalizing far beyond the training distribution is a...
Existing few-shot learning (FSL) methods rely on training with a large labeled dataset, which preven...
Few-shot classification aims to adapt to new tasks with limited labeled examples. To fully use the a...
Few-shot learning (FSL) aims to generate a classifier using limited labeled examples. Many existing ...
In this work, metric-based meta-learning models are proposed to learn a generic model embedding that...