Meta-learning aims to leverage experience from previous tasks to achieve effective and fast adaptation when encountering new tasks. However, it is unclear how well its generalization properties carry over to new tasks. Probably approximately correct (PAC)-Bayes bound theory provides a theoretical framework for analyzing the generalization performance of meta-learning, with an explicit numerical upper bound on the generalization error. A tighter upper bound may yield better generalization performance. However, in the existing PAC-Bayes meta-learning bound, the prior distribution is selected randomly, which results in poor generalization performance. In this paper, we derive three novel generalization error upper bounds for meta-learning based on the PAC-Bay...
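For background (this is the standard single-task PAC-Bayes bound, not one of the three meta-learning bounds announced in the truncated abstract above), the McAllester bound in Maurer's refined form states that, with probability at least 1 - \delta over an i.i.d. sample S of size n, every posterior Q over hypotheses satisfies

\[
\mathbb{E}_{h \sim Q}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\hat{L}_S(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}},
\]

where P is the prior fixed before seeing S, L is the true risk, and \hat{L}_S is the empirical risk. The KL(Q || P) term is what makes the choice of prior matter: a prior far from any good posterior inflates the bound, which is the weakness the abstract points to.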
This paper introduces a new framework for data-efficient and versatile learning. Specifically: 1) We...
We propose a novel amortized variational inference scheme for an empirical Bay...
There exist many different generalization error bounds for classification. Each of these bounds cont...
Meta-learning can successfully acquire useful inductive biases from data. Yet, its generalization pr...
Meta-learning automatically infers an inductive bias by observing data from a number of related task...
In this work, we construct generalization bounds to understand existing learning algorithms and prop...
In this paper, we review the recent advances in meta-learning theory and show ...
Meta-learning optimizes an inductive bias—typically in the form of the hyperparameters of a base-lea...
In the context of assessing the generalization abilities of a randomized model or learning algorithm...
Meta-Learning promises to enable more data-efficient inference by harnessing previous experience fro...
Nowadays, model uncertainty has become one of the most important problems in both academia and indust...
Meta-learning offers unique effectiveness and swiftness in tackling emerging tasks with limited data. ...
Machine learning has achieved impressive feats in numerous domains, largely driven by the emergence ...
PAC-Bayes bounds have been proposed to obtain risk estimates based on a training sample. In this paper...
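As a minimal illustration (the function name pac_bayes_bound and the example numbers are assumptions for the sketch, not taken from the paper above), such a bound can be evaluated numerically from an empirical risk, a KL divergence, and a sample size:

import math

def pac_bayes_bound(emp_risk, kl, n, delta=0.05):
    # McAllester/Maurer-style PAC-Bayes bound: with probability >= 1 - delta,
    # true risk <= emp_risk + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2*n)).
    complexity = (kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n)
    return emp_risk + math.sqrt(complexity)

# Illustrative inputs: 10% empirical risk, KL(Q||P) = 5 nats, n = 1000 samples.
print(pac_bayes_bound(0.10, 5.0, 1000))  # about 0.178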
We introduce a new, rigorously-formulated Bayesian meta-learning algorithm that learns a probability...