We study a practical setting of continual learning: fine-tuning on a pre-trained model continually. Previous work has found that, when training on new tasks, the features (penultimate layer representations) of previous data will change, called representational shift. Besides the shift of features, we reveal that the intermediate layers' representational shift (IRS) also matters since it disrupts batch normalization, which is another crucial cause of catastrophic forgetting. Motivated by this, we propose ConFiT, a fine-tuning method incorporating two components, cross-convolution batch normalization (Xconv BN) and hierarchical fine-tuning. Xconv BN maintains pre-convolution running means instead of post-convolution, and recovers post-convolu...
Plastic neural networks have the ability to adapt to new tasks. However, in a continual learning set...
Using task-specific components within a neural network in continual learning (CL) is a compelling st...
Learning continually without forgetting might be one of the ultimate goals for building ar...
We propose a novel continual learning method called Residual Continual Learning (ResCL). Our method ...
The intrinsic difficulty in adapting deep learning models to non-stationary environments limits the ...
Continual Learning (CL) is the research field addressing learning without forgetting when the data d...
Deep learning has enjoyed tremendous success over the last decade, but the training of practically u...
Continual Learning (CL) allows artificial neural networks to learn a sequence of tasks without catas...
In continual learning (CL), the goal is to design models that can learn a sequence of tasks without ...
Continual learning entails learning a sequence of tasks and balancing their knowledge appropriately....
Work on continual learning (CL) has largely focused on the problems arising from the dynamically-cha...
Continual learning (CL) is a setting in which an agent has to learn from an incoming stream of data ...
Continuous learning occurs naturally in human beings. However, Deep Learning methods suffer from a p...
Continual learning aims to provide intelligent agents that are capable of learning continually a seq...
Human beings tend to incrementally learn from the rapidly changing environment without compromising or...