https://arxiv.org/abs/2306.13841
Computer Science > Machine Learning
[Submitted on 24 Jun 2023]
Is Pre-training Truly Better Than Meta-Learning?
Brando Miranda, Patrick Yu, Saumya Goyal, Yu-Xiong Wang, Sanmi Koyejo
In the context of few-shot learning, it is currently believed that a fixed pre-trained (PT) model, along with fine-tun...
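As a concrete illustration of the fixed pre-trained (PT) baseline the abstract refers to, here is a minimal, hypothetical sketch (not code from the paper): the feature extractor is frozen, and only a simple head, here a nearest-class-centroid head, is fit on the few-shot support set. The feature vectors and labels below are invented toy data.

```python
# Hypothetical sketch of a fixed PT few-shot baseline: freeze the
# feature extractor, fit only a nearest-class-centroid head on the
# support set, then classify queries by distance to the centroids.

def centroid_head(support_feats, support_labels):
    # Group frozen-model feature vectors by class, then average
    # each group coordinate-wise to get one centroid per class.
    groups = {}
    for f, y in zip(support_feats, support_labels):
        groups.setdefault(y, []).append(f)
    return {y: [sum(xs) / len(xs) for xs in zip(*fs)]
            for y, fs in groups.items()}

def predict(centroids, query_feat):
    # Assign the query to the class with the nearest centroid
    # (squared Euclidean distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], query_feat))

# Toy 2-shot, 2-way episode with made-up frozen features.
feats = [[0.0, 1.0], [0.2, 0.9], [1.0, 0.0], [0.9, 0.1]]
labels = ["cat", "cat", "dog", "dog"]
c = centroid_head(feats, labels)
# predict(c, [0.1, 0.8]) → "cat"
```

The point of the sketch is that no gradient updates touch the backbone; all task adaptation happens in the tiny head, which is the "fixed PT model" regime the paper contrasts with meta-learning.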
Through experiments on various meta-learning methods, task samplers, and few-shot learning tasks, th...
In this paper, we review the recent advances in meta-learning theory and show ...
Optimization-based meta-learning aims to learn an initialization so that a new unseen task can be le...
Recently, it has been observed that a transfer learning solution might be all we need to solve many ...
Recent studies show that task distribution plays a vital role in the meta-learner's performance. Con...
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms now...
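To make the MAML idea in the snippet above concrete, here is a minimal first-order MAML (FOMAML) sketch on invented toy tasks, not code from any of the listed papers: each task t has loss L_t(theta) = (theta - c_t)^2, and the outer loop learns an initialization theta from which one inner gradient step adapts well to every task. The task centers, step sizes, and loop counts are all illustrative assumptions.

```python
# Minimal first-order MAML sketch on toy 1-D quadratic tasks
# (hypothetical example; each task t has loss L_t(theta) = (theta - c_t)^2).

def inner_adapt(theta, c, alpha=0.1):
    grad = 2.0 * (theta - c)     # dL_t/dtheta at the initialization
    return theta - alpha * grad  # one inner gradient step of task adaptation

def fomaml(task_centers, theta=0.0, beta=0.05, steps=500):
    for _ in range(steps):
        meta_grad = 0.0
        for c in task_centers:
            adapted = inner_adapt(theta, c)
            # First-order approximation: evaluate the task gradient at the
            # adapted parameters and treat it as the gradient w.r.t. theta.
            meta_grad += 2.0 * (adapted - c)
        theta -= beta * meta_grad / len(task_centers)  # outer update
    return theta

theta = fomaml([1.0, 3.0, 5.0])
# The learned initialization converges near the task mean (3.0),
# from which one inner step moves quickly toward any task optimum.
```

For these quadratic tasks the FOMAML fixed point coincides with the mean of the task optima; the inner step merely rescales the outer gradient, which is why the toy example converges geometrically.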
Day by day, machine learning is changing our lives in ways we could not have imagined just 5 years a...
The field of artificial intelligence has been throughout its history repeatedly inspired by human co...
Deep learning has achieved classification performance matching or exceeding the human one, as long a...
Over the past decade, the field of machine learning has experienced remarkable advancements. While i...
One of the fundamental assumptions of machine learning is that learnt models are applied to data th...
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-...
In this paper, we consider the framework of multi-task representation (MTR) learning where the goal ...
Intelligent agents should have the ability to leverage knowledge from previously learned tasks in or...