In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit the easily available out-of-distribution samples to drive the classifier to avoid irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing that of in-distribution samples (i.e., support, query data). Our approach is simple to implement, agnostic to feature extractors, lightweight without any additional cost for pre-training, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance o...
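The loss described above (pull support/query embeddings toward class prototypes, push out-of-distribution samples away from them) can be sketched as follows. This is a minimal illustration under assumed details, not the authors' implementation: the function names (`prototypes`, `ood_loss`), the squared-Euclidean distance, and the exponential repulsion term on the nearest prototype are all choices made here for clarity.

```python
import numpy as np

def prototypes(support, labels, n_classes):
    """Class prototypes = mean embedding of each class's support samples."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def sq_dists(x, protos):
    """Squared Euclidean distances, shape (n_samples, n_classes)."""
    return ((x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)

def ood_loss(query, q_labels, ood, protos, weight=0.1):
    """Cross-entropy on queries (pull in-distribution samples to their
    prototype) plus a penalty when OOD samples sit close to any prototype."""
    d_q = sq_dists(query, protos)
    # softmax over negative distances -> log-probabilities per class
    logp = -d_q - np.log(np.exp(-d_q).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(q_labels)), q_labels].mean()
    # repulsion: large when an OOD sample is near its closest prototype,
    # vanishing as it moves away
    d_o = sq_dists(ood, protos)
    repulsion = np.exp(-d_o.min(axis=1)).mean()
    return ce + weight * repulsion
```

Because the extra term only needs unlabeled OOD embeddings and prototype distances, it adds no pre-training cost and is agnostic to the feature extractor, consistent with the claims above.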
Few-shot learning amounts to learning representations and acquiring knowledge such that novel tasks ...
This paper introduces a generalized few-shot segmentation framework with a straightforward training ...
Deep learning models have consistently produced state-of-the-art results on large, labelled datasets...
In many real-life problems, it is difficult to acquire or label large amounts ...
Few-shot learning (FSL) methods typically assume clean support sets with accurately labeled samples ...
The generalization power of the pre-trained model is the key for few-shot deep learning. Dropout is ...
In recent works, utilizing a deep network trained on meta-training set serves as a strong baseline i...
Few-shot learning has received increasing attention and witnessed significant advances in recent yea...
Few-shot segmentation aims to devise a generalizing model that segments query images from unseen cla...
Single image-level annotations only correctly describe an often small subset of an image's content, ...
In this paper, we explore the use of GAN-based few-shot data augmentation as a method to improve few...
This paper addresses the issue of dealing with few-shot learning settings in which different classes...
Few-shot learning (FSL) aims to generate a classifier using limited labeled examples. Many existing ...
The goal of few-shot learning is to learn a classifier that can recognize unseen classes from limite...