Inductive transfer learning aims to learn from a small amount of training data for the target task by utilizing a pre-trained model from the source task. Most strategies that involve large-scale deep learning models adopt initialization with the pre-trained model and fine-tuning for the target task. However, when using over-parameterized models, we can often prune the model without sacrificing the accuracy of the source task. This motivates us to adopt model pruning for transfer learning with deep learning models. In this paper, we propose PAC-Net, a simple yet effective approach for transfer learning based on pruning. PAC-Net consists of three steps: Prune, Allocate, and Calibrate (PAC). The main idea behind these steps is to identify esse...
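The Prune/Allocate/Calibrate idea can be illustrated with a minimal sketch: magnitude-prune the source-task weights, reserve the freed positions for the target task, and update only those freed positions during target training. This is an illustrative toy with NumPy, not the paper's implementation; the function names, the 50% sparsity, and the stand-in gradient are assumptions.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Boolean mask that keeps the largest-magnitude weights.

    Entries below the sparsity-quantile threshold are marked pruned
    (False); those freed slots can later be re-trained for the target task.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.abs(weights) >= threshold

# Hypothetical source-task weights of an over-parameterized layer.
rng = np.random.default_rng(0)
w_source = rng.normal(size=(4, 4))

# Prune: keep the half of the weights essential for the source task.
keep = magnitude_prune_mask(w_source, sparsity=0.5)

# Allocate: zero the pruned positions and reserve them for the target task.
w_target = np.where(keep, w_source, 0.0)

# Calibrate (sketch): update only the freed slots on target data,
# leaving the kept source weights frozen.
gradient = rng.normal(size=w_source.shape)      # stand-in for a real gradient
w_target = w_target - 0.1 * gradient * (~keep)  # touches freed slots only

# The source-task knowledge in the kept weights is untouched.
assert np.allclose(w_target[keep], w_source[keep])
```

Masking the update with `~keep` is what preserves source-task accuracy: the frozen weights carry the shared knowledge, while the small number of freed weights absorb the target task.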
Deep learning requires large amounts of data to train deep neural network models for specific t...
Deep transfer learning has recently attracted significant research interest. It makes use of pre-trai...
Traditional machine learning makes a basic assumption: the training and test data should be under th...
In inductive transfer learning, fine-tuning pre-trained convolutional networks...
The growing availability of pre-trained models has significantly improved performance on limited-data t...
Inductive learners seek meaningful features within raw input. Their purpose is to accurately categor...
People constantly apply acquired knowledge to new learning tasks, but machines almost never do. Res...
The traditional way of obtaining models from data, inductive learning, has proved itself both in the...
Fine-tuning pre-trained deep networks is a practical way of benefiting from th...
The deep learning method convolutional neural network (CNN) outperforms conventional machine learning m...
Pretrained models can be reused in ways that improve training accuracy. Trainin...
In this work, the authors compare the training time of a standard convolutional neural network model with a model ...