Accepted at ECCV 2020. Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning. In this work, we propose PODNet, a model inspired by representation learning. By carefully balancing the compromise between remembering the old classes and learning new ones, PODNet fights catastrophic forgetting, even over very long runs of small incremental tasks, a setting so far unexplored by current works. PODNet innovates on existing art with an efficient spatial-based distillation loss applied throughout the model and a representation comprising multiple proxy vectors for each class. We validate those innovat...
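The spatial-based distillation loss mentioned in this abstract is its most concrete technical idea, so a short sketch may help. The PyTorch fragment below shows one way such a pooled feature distillation can be written: feature maps from a frozen copy of the previous model are collapsed along each spatial axis, and the current model is penalized for drifting from those pooled statistics. The function name, the choice of sum-pooling with L2 normalization, and the averaging over stages are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a spatial pooled-feature distillation loss, in the spirit
# of the abstract above. Sum-pooling, normalization, and stage averaging are
# assumptions for illustration, not the authors' exact formulation.
import torch
import torch.nn.functional as F

def spatial_distillation_loss(feats_old, feats_new):
    """feats_old / feats_new: lists of (B, C, H, W) feature maps, one per
    network stage; feats_old comes from a frozen copy of the old model."""
    loss = 0.0
    for h_old, h_new in zip(feats_old, feats_new):
        # Collapse each spatial axis in turn, keeping the other:
        # sum over width -> (B, C, H), sum over height -> (B, C, W).
        p_old = torch.cat([h_old.sum(dim=3), h_old.sum(dim=2)], dim=-1)
        p_new = torch.cat([h_new.sum(dim=3), h_new.sum(dim=2)], dim=-1)
        # Flatten per example and L2-normalize so the penalty is scale-invariant.
        p_old = F.normalize(p_old.flatten(1), dim=-1)
        p_new = F.normalize(p_new.flatten(1), dim=-1)
        # Euclidean distance between pooled statistics, averaged over the batch.
        loss = loss + (p_old - p_new).norm(dim=-1).mean()
    return loss / len(feats_old)
```

In training, a term like this would be added to the classification loss, with the old model's features computed under torch.no_grad() so that only the new model receives gradients.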
In continual learning (CL), the goal is to design models that can learn a sequence of tasks without ...
Incremental learning (IL) enables the adaptation of artificial agents to dynamic environments in whi...
In this paper, we propose a novel training procedure for the continual representation learning probl...
We study class-incremental learning, a training setup in which new classes of data are observed over...
The ability of artificial agents to increment their capabilities when confronted with new data is an...
Exemplar-free incremental learning is extremely challenging due to inaccessibility of data from old ...
Humans learn incrementally from sequential experiences throughout their lives, which has proven hard...
In class incremental learning, discriminative models are trained to classify i...
The continual learning (CL) paradigm aims to enable neural networks to learn tasks continually in a ...
Learning novel concepts while preserving prior knowledge is a long-standing challenge in machine lea...
Catastrophic forgetting is a key challenge for class-incremental learning with deep neural networks,...