The performance of conventional deep neural networks tends to degrade when a domain shift is introduced, such as collecting data from a new site. Model-Agnostic Meta-Learning, or MAML, has achieved state-of-the-art performance in few-shot learning by finding initial parameters that adapt easily to new tasks. This thesis studies MAML in a digital pathology setting. Experiments show that a conventional model generalises poorly to data collected from another site. By annotating a few samples during inference, however, a model with initial parameters obtained through MAML training can adapt to achieve better generalisation performance. It is also demonstrated that a simple transfer learning approach using a kNN classifier on features extracted ...
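The core MAML idea summarised above (meta-learn an initialisation so that one or a few gradient steps on a handful of annotated samples suffice to adapt to a shifted domain) can be illustrated with a minimal first-order sketch. This is not the thesis's implementation: the task family (1-D linear regression with a per-task slope, standing in for per-site domain shift), the step sizes, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    # Mean-squared error of the linear model y_hat = w * x, and its gradient in w.
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

def sample_task():
    # Each "task" is a regression problem y = a * x with its own slope a,
    # standing in for data collected at a different site (hypothetical task family).
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def maml_train(steps=2000, alpha=0.1, beta=0.01):
    # First-order MAML: the outer loop updates the shared initialisation w
    # using the gradient evaluated at the inner-adapted parameters.
    w = 0.0
    for _ in range(steps):
        x_s, y_s = sample_task()                 # support set for inner adaptation
        _, g = loss_and_grad(w, x_s, y_s)
        w_adapted = w - alpha * g                # one inner gradient step
        _, g_meta = loss_and_grad(w_adapted, x_s, y_s)
        w -= beta * g_meta                       # outer update of the initialisation
    return w

w0 = maml_train()

# "Few-shot adaptation at inference": one gradient step on an unseen task.
x, y = sample_task()
before, g = loss_and_grad(w0, x, y)
after, _ = loss_and_grad(w0 - 0.1 * g, x, y)
```

A single inner step on the new task's support samples should lower the loss relative to using the meta-learned initialisation unadapted, mirroring the few-shot adaptation described in the abstract (the full algorithm uses separate support and query sets and second-order gradients; both are simplified here).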
Deep neural networks can achieve great successes when presented with large data sets and sufficient ...
We demonstrate the applicability of model-agnostic algorithms for meta-learning, specifically Reptil...
Domain shift refers to the well-known problem that a model trained in one source domain performs poo...
Clinical deployment of systems based on deep neural networks is hampered by sensitivity to domain sh...
Model-agnostic meta-learning (MAML) is a meta-learning technique to train a model on a multitude of ...
Day by day, machine learning is changing our lives in ways we could not have imagined just 5 years a...
When applying transfer learning for medical image analysis, downstream tasks often have significant ...
Simple Summary Pathology is a cornerstone in cancer diagnostics, and digital pathology and artificia...
A natural progression in machine learning research is to automate and learn from data increasingly m...
Over the past decade, the field of machine learning has experienced remarkable advancements. While i...
We introduce a new, rigorously-formulated Bayesian meta-learning algorithm that learns a probability...
Widely used traditional supervised deep learning methods require a large number of training samples ...
In: Encyclopedia of Systems Biology, W. Dubitzky, O. Wolkenhauer, K-H Cho, H. Yokota (Eds.), Springe...
Meta-learning, or learning to learn, is an emerging field within artificial intelligence (AI) that e...