Gradient-based meta-learning techniques aim to distill useful prior knowledge from a set of training tasks such that new tasks can be learned more efficiently with gradient descent. While these methods have achieved successes in various scenarios, they commonly adapt all parameters of trainable layers when learning new tasks. This neglects potentially more efficient learning strategies for a given task distribution and may be susceptible to overfitting, especially in few-shot learning, where tasks must be learned from a limited number of examples. To address these issues, we propose Subspace Adaptation Prior (SAP), a novel gradient-based meta-learning algorithm that jointly learns good initialization parameters (prior knowledge) and layer-wise ...
Despite huge progress in artificial intelligence, the ability to quickly learn from few examples is ...
Despite achieving state-of-the-art zero-shot performance, existing vision-language models still fall...
Meta-Learning algorithms for few-shot learning aim to train neural networks capable of generalizing ...
In this work we provide an analysis of the distribution of the post-adaptation parameters of Gradien...
Inspired by the concept of preconditioning, we propose a novel method to increase adaptation speed f...
In this paper, we consider the framework of multi-task representation (MTR) learning where the goal ...
Finding neural network weights that generalize well from small datasets is difficult. A promising ap...
Optimization-based meta-learning aims to learn an initialization so that a new unseen task can be le...
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms now...
The ability to learn quickly from a few samples is a vital element of intelligence. Humans can reuse...
Intelligent agents should have the ability to leverage knowledge from previously learned tasks in or...
Recent developments in few-shot learning have shown that during fast adaptation, gradient-based meta-l...
Few-shot learning for neural networks (NNs) is an important problem that aims to train NNs with a fe...
When experience is scarce, models may have insufficient information to adapt to a new task. In this ...
Deep learning has been tremendously successful in many difficult tasks including image classi...