I first review the existing regularization-based methods for continual learning. While regularizing a model's output probabilities is effective at reducing forgetting on large-scale datasets, few works consider constraints on intermediate features. In this chapter, I cover two contributions that directly regularize the latent space of a ConvNet. The first, PODNet, reduces the drift of spatial statistics between the old and new models, which drastically reduces forgetting of old classes while still enabling efficient learning of new classes. In a second part, I present a complementary method in which we avoid pre-emptive forgetting by allocating locations in the latent space for yet-unseen future classes. Then, I describ...
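To make the first contribution concrete, below is a minimal sketch of a POD-style spatial distillation loss, assuming PyTorch and ResNet-style intermediate feature maps. The function name pod_spatial_loss is illustrative, and the exact pooling and normalization details are a simplified reading of the approach, not the reference PODNet implementation.

```python
import torch
import torch.nn.functional as F

def pod_spatial_loss(old_feats, new_feats):
    """Distill spatial statistics from the frozen old model to the new one.

    old_feats / new_feats: lists of intermediate feature maps of shape
    (B, C, H, W), one per ConvNet stage, computed on the same mini-batch.
    """
    loss = 0.0
    for a, b in zip(old_feats, new_feats):
        a, b = a.pow(2), b.pow(2)                # work on squared activations
        # Pool each map along one spatial axis at a time:
        a_h, b_h = a.sum(dim=3), b.sum(dim=3)    # (B, C, H): pooled over width
        a_w, b_w = a.sum(dim=2), b.sum(dim=2)    # (B, C, W): pooled over height
        a_vec = F.normalize(torch.cat([a_h, a_w], dim=-1).flatten(1), dim=1)
        b_vec = F.normalize(torch.cat([b_h, b_w], dim=-1).flatten(1), dim=1)
        # Penalize drift between the old and new pooled spatial statistics
        loss = loss + (a_vec - b_vec).norm(p=2, dim=1).mean()
    return loss / len(old_feats)
```

In a typical incremental-training step, the old model is kept frozen and this term is added to the classification loss; its weight is usually scaled with the ratio of old to new classes so that the constraint tightens as more tasks accumulate.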