In this work we analyze the distribution of the post-adaptation parameters of Gradient-Based Meta-Learning (GBML) methods. Previous work has observed that, in the case of image classification, adaptation takes place mainly in the last layers of the network. We propose the more general notion that parameters are updated over a low-dimensional \emph{subspace} of the same dimensionality as the task space, and we show that this holds for regression as well. Furthermore, the induced subspace structure provides a method to estimate the intrinsic dimension of the space of tasks of common few-shot learning datasets.
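The subspace structure suggests a simple empirical estimator, sketched below as a minimal illustration (our own sketch, not the authors' released code): stack the post-adaptation updates $\theta'_\tau - \theta$ across tasks into a matrix and count how many principal components are needed to explain most of their variance. The variance threshold and the synthetic update generator are assumptions made only for this example.

\begin{verbatim}
# Minimal sketch (illustration only): estimate the intrinsic task
# dimension from post-adaptation parameter updates via PCA.
import numpy as np

def estimate_task_dimension(deltas, var_threshold=0.99):
    """deltas: (n_tasks, n_params) matrix whose rows are the
    post-adaptation updates theta_task - theta_meta."""
    deltas = deltas - deltas.mean(axis=0, keepdims=True)
    # Singular values of the task-by-parameter matrix give the
    # spectrum of the update subspace directly.
    s = np.linalg.svd(deltas, compute_uv=False)
    explained = s**2 / np.sum(s**2)
    # Smallest number of components reaching the variance threshold.
    return int(np.searchsorted(np.cumsum(explained), var_threshold) + 1)

# Usage: updates confined to a hidden k-dimensional subspace
# (plus small noise) should yield an estimate of k.
rng = np.random.default_rng(0)
k, n_tasks, n_params = 3, 200, 1000
basis = rng.normal(size=(k, n_params))    # hidden task subspace
coeffs = rng.normal(size=(n_tasks, k))    # per-task coordinates
deltas = coeffs @ basis + 1e-3 * rng.normal(size=(n_tasks, n_params))
print(estimate_task_dimension(deltas))    # -> 3
\end{verbatim}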