This paper proposes a meta-learning approach that evolves a parametrized loss function, called a Meta-Loss Network (MLN), for training image classifiers on small datasets. In our approach, the MLN is embedded in the classification learning framework as a differentiable objective function. The MLN is evolved with an Evolution Strategy (ES) algorithm into an optimized loss function, such that a classifier trained to minimize this loss achieves good generalization. A classifier learns on a small training dataset by minimizing the MLN with Stochastic Gradient Descent (SGD), and the MLN is then evolved according to the accuracy of the updated classifier on a large validation dataset. In order to...
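The inner/outer loop described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the "meta-loss" is reduced to a two-parameter blend of an MSE-like term and cross-entropy for logistic regression, the inner loop trains a classifier by SGD on that loss using a small training set, and the outer loop is a simple (1+1)-ES that keeps a perturbation of the loss parameters whenever it improves validation accuracy. All names, the synthetic data, and the specific loss parametrization are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic, roughly linearly separable binary data (assumption for the demo).
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

X_train, y_train = make_data(30)   # small training set (inner loop)
X_val, y_val = make_data(500)      # larger validation set (outer loop fitness)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_and_score(theta, steps=100, lr=0.5):
    """Inner loop: train logistic regression by SGD on the parametrized loss
    a*(p - y)^2 + b*CE(p, y), where theta = (a, b) plays the role of the
    meta-loss parameters. Returns validation accuracy of the trained model."""
    a, b = theta
    w = np.zeros(2)
    for t in range(steps):
        i = t % len(X_train)
        x, y = X_train[i], y_train[i]
        p = sigmoid(w @ x)
        # Analytic gradient of the blended loss w.r.t. the classifier weights.
        grad = (a * 2.0 * (p - y) * p * (1.0 - p) + b * (p - y)) * x
        w -= lr * grad
    preds = (sigmoid(X_val @ w) > 0.5).astype(float)
    return (preds == y_val).mean()

# Outer loop: (1+1)-ES over the loss parameters, driven by validation accuracy.
theta = np.array([0.5, 0.5])
best_acc = train_and_score(theta)
for _ in range(20):
    candidate = theta + rng.normal(scale=0.2, size=2)
    acc = train_and_score(candidate)
    if acc >= best_acc:            # keep only non-worsening perturbations
        theta, best_acc = candidate, acc

print(round(best_acc, 3))
```

The key structural point this sketch preserves is that the classifier never sees the validation set through gradients: validation accuracy reaches the meta-loss parameters only through the black-box ES update, which is what allows the outer objective to be non-differentiable.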
Added experiments with different network architectures and input image resolutions.
Meta-learning, or learning to learn, has become well-known in the field of artificial intelligence a...
In the past decade, neural networks have demonstrated impressive performance in supervised learning....
The standard training for deep neural networks relies on a global and fixed loss function. For more ...
In this paper, we develop upon the emerging topic of loss function learning, which aims to learn los...
Despite huge progress in artificial intelligence, the ability to quickly learn from few examples is ...
Meta-Learning, or so-called Learning to learn, has become another important research branch in Machi...
In order to make predictions with high accuracy, conventional deep learning systems require large tr...
A natural progression in machine learning research is to automate and learn from data increasingly m...
Meta-learning aims to teach the machine how to learn. Embedding model-based meta-learning performs w...
Deep neural networks (DNNs) for social image classification are prone to performance reduction and o...
The necessity to use very large datasets in order to train Generative Adversarial Networks (GANs) ha...
This work aims at developing a generalizable Magnetic Resonance Imaging (MRI) reconstruction method ...
Over the past decade, the field of machine learning has experienced remarkable advancements. While i...
This paper introduces a new framework for data efficient and versatile learning. Specifically: 1) We...