Robust Model-Agnostic Meta-Learning (MAML) is typically adopted to train a meta-model that can quickly adapt to novel classes with only a few exemplars while remaining robust to adversarial attacks. The conventional approach to robust MAML is to introduce robustness-promoting regularization during the meta-training stage. With such regularization, previous robust MAML methods simply follow the standard MAML practice that the number of training shots should match the number of test shots to achieve optimal adaptation performance. However, although robustness is largely improved, these methods sacrifice considerable clean accuracy. In this paper, we observe that introducing robustness-promoting regularization into MAML reduces the ...
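To make the setup the abstract describes concrete, below is a minimal sketch of robust MAML meta-training: a second-order MAML inner/outer loop where a robustness-promoting term is added to the outer (query) loss. The specific regularizer (a TRADES-style KL between adversarial and clean predictions), the FGSM attack, and all names and hyperparameters here (`SmallNet`, `inner_adapt`, `eps`, `lam`) are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of robust MAML meta-training. The robustness-promoting
# regularizer here is a TRADES-style KL term on adversarial queries; the
# paper's exact regularizer and attack may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class SmallNet(nn.Module):
    """Tiny illustrative classifier (not the paper's architecture)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
            nn.Linear(64, n_classes))
    def forward(self, x):
        return self.net(x)

def inner_adapt(model, params, x_s, y_s, lr=0.01, steps=1):
    """Inner loop: a few gradient steps on the support set (fast weights)."""
    for _ in range(steps):
        loss = F.cross_entropy(functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(loss, tuple(params.values()),
                                    create_graph=True)  # second-order MAML
        params = {k: p - lr * g
                  for (k, p), g in zip(params.items(), grads)}
    return params

def fgsm(model, params, x, y, eps=8 / 255):
    """One-step attack on the adapted model to craft adversarial queries."""
    frozen = {k: v.detach() for k, v in params.items()}  # no grad to weights
    x_adv = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(functional_call(model, frozen, (x_adv,)), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def meta_train_step(model, opt, tasks, lam=1.0):
    """Outer loop: clean query loss plus the robustness-promoting term."""
    opt.zero_grad()
    meta_loss = 0.0
    for x_s, y_s, x_q, y_q in tasks:
        fast = inner_adapt(model, dict(model.named_parameters()), x_s, y_s)
        clean_logits = functional_call(model, fast, (x_q,))
        adv_logits = functional_call(model, fast, (fgsm(model, fast, x_q, y_q),))
        # KL between adversarial and clean predictions (TRADES-style).
        reg = F.kl_div(F.log_softmax(adv_logits, dim=1),
                       F.softmax(clean_logits, dim=1), reduction='batchmean')
        meta_loss = meta_loss + F.cross_entropy(clean_logits, y_q) + lam * reg
    (meta_loss / len(tasks)).backward()
    opt.step()

# Toy usage: one meta-step on a synthetic 5-way 1-shot task.
model = SmallNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task = (torch.rand(5, 1, 28, 28), torch.arange(5),
        torch.rand(25, 1, 28, 28), torch.arange(5).repeat(5))
meta_train_step(model, opt, [task])
```

The trade-off the abstract points to lives in `lam`: a larger weight on the regularizer improves adversarial robustness but, as the paper notes of prior methods, tends to erode clean accuracy.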
A recent family of techniques, dubbed lightweight fine-tuning methods, facilitates parameter-effi...
As acquiring manual labels on data could be costly, unsupervised domain adaptation (UDA), which tran...
The aim of Few-Shot learning methods is to train models which can easily adapt to previously unseen ...
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms now...
Model-agnostic meta-learning (MAML) is a meta-learning technique to train a model on a multitude of ...
Recently, it has been observed that a transfer learning solution might be all we need to solve many ...
Optimization-based meta-learning aims to learn an initialization so that a new unseen task can be le...
https://arxiv.org/abs/2306.13841
The high cost of acquiring and annotating samples has made the 'few-shot' learning problem of prime ...
Despite their impressive performance on large-scale benchmarks, machine learning systems turn out ...
Modern machine learning (ML) algorithms are being applied today to a rapidly increasing number of ta...
Despite achieving state-of-the-art zero-shot performance, existing vision-language models still fall...
The performance of conventional deep neural networks tends to degrade when a domain shift is introdu...
Deep learning has achieved classification performance matching or exceeding that of humans, as long a...
Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of data dist...