Model-agnostic meta-learning (MAML) is a meta-learning technique for training a model on a multitude of learning tasks in a way that primes the model for few-shot learning of new tasks. The MAML algorithm performs well on few-shot learning problems in classification, regression, and policy-gradient fine-tuning in reinforcement learning, but it requires costly hyperparameter tuning to train stably. We address this shortcoming with an extension to MAML, called Alpha MAML, which incorporates an online hyperparameter adaptation scheme that eliminates the need to tune the learning rate and the meta-learning rate. Our results on the Omniglot database demonstrate a substantial reduction in the need to tune MAML training hyperparameters.
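As a concrete illustration of the idea in this abstract, below is a minimal, self-contained sketch (not the authors' implementation) of a first-order MAML loop on toy sine-regression tasks in which both the inner learning rate (alpha) and the meta learning rate (beta) are adapted online with a hypergradient-descent-style rule. The random-feature model, the task sampler, and all step sizes here are illustrative assumptions.

```python
# Sketch: first-order MAML on toy sine-regression tasks, with online
# adaptation of the inner learning rate (alpha) and the meta learning
# rate (beta) via a hypergradient-style dot-product rule.
# All names and constants are illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
D = 40  # random-feature dimension of the toy linear model

def features(x):
    # Fixed sinusoidal features so a linear model can fit sine tasks.
    W = np.linspace(0.5, 8.0, D // 2)
    return np.concatenate([np.sin(W * x[:, None]), np.cos(W * x[:, None])], axis=1)

def sample_task(k=10):
    # One sine-regression task: y = A * sin(x + phase), k support + k query points.
    A, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    xs, xq = rng.uniform(-3, 3, k), rng.uniform(-3, 3, k)
    return (features(xs), A * np.sin(xs + phase)), (features(xq), A * np.sin(xq + phase))

def grad(theta, X, y):
    # Gradient of mean-squared error of the linear model X @ theta.
    return 2.0 * X.T @ (X @ theta - y) / len(y)

theta = np.zeros(D)              # meta-initialization being learned
alpha, beta = 0.01, 0.01         # inner / meta learning rates, adapted online
eta_a, eta_b = 1e-5, 1e-5        # hypergradient step sizes (assumed values)
prev_inner_g, prev_meta_g = None, None

for it in range(2000):
    (Xs, ys), (Xq, yq) = sample_task()

    # Inner loop: one SGD step on the support set from the shared initialization.
    inner_g = grad(theta, Xs, ys)
    theta_task = theta - alpha * inner_g

    # Outer loop: first-order meta-gradient on the query set (FOMAML simplification).
    meta_g = grad(theta_task, Xq, yq)

    # Online rate adaptation: nudge each rate along the dot product of the
    # current gradient with the previous one (hypergradient-descent-style rule).
    if prev_inner_g is not None:
        alpha += eta_a * float(inner_g @ prev_inner_g)
        beta += eta_b * float(meta_g @ prev_meta_g)
    prev_inner_g, prev_meta_g = inner_g, meta_g

    # Meta-update of the initialization with the adapted meta learning rate.
    theta -= beta * meta_g

    if it % 500 == 0:
        q_loss = float(np.mean((Xq @ theta_task - yq) ** 2))
        print(f"iter {it:4d}  query loss {q_loss:.3f}  alpha {alpha:.4f}  beta {beta:.4f}")
```

The sketch uses the first-order approximation of the MAML meta-gradient to stay short; the point it illustrates is only that the two learning rates are updated from gradient information during training instead of being hand-tuned.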
Meta-learning algorithms can accelerate the model-based reinforcement learning...
Intelligent agents should have the ability to leverage knowledge from previously learned tasks in or...
The exponential growth of volume, variety and velocity of the data is raising the need for investiga...
The aim of Few-Shot learning methods is to train models which can easily adapt to previously unseen ...
The performance of conventional deep neural networks tends to degrade when a domain shift is introdu...
A recent family of techniques, dubbed as lightweight fine-tuning methods, facilitates parameter-effi...
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms now...
Robust Model-Agnostic Meta-Learning (MAML) is usually adopted to train a meta-model which may fast a...
Day by day, machine learning is changing our lives in ways we could not have imagined just 5 years a...
Model Agnostic Meta-Learning (MAML) is widely used to find a good initialization for a family of tas...
Optimization-based meta-learning aims to learn an initialization so that a new unseen task can be le...
When experience is scarce, models may have insufficient information to adapt to a new task. In this ...
Inspired by the concept of preconditioning, we propose a novel method to increase adaptation speed f...
Recent developments in few-shot learning have shown that during fast adaptation, gradient-based meta-l...