Neural networks are powerful computational models, capable of outperforming humans on a variety of tasks. However, unlike humans, these networks tend to catastrophically forget previously learned information when learning new information. This thesis aims to solve the catastrophic forgetting problem so that a deep neural network can sequentially learn a number of complex reinforcement learning tasks. The primary model proposed in this thesis, termed RePR, prevents catastrophic forgetting by introducing a generative model and a dual memory system. The generative model learns to produce data representative of previously seen tasks. While a new task is being learned, this generated data is rehearsed through a process called pseudo-rehearsal. This ...
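The abstract describes the mechanism but no implementation survives in this excerpt, so the following is a minimal sketch of one pseudo-rehearsal training step, assuming a PyTorch setup. All names here (new_net, old_net, generator, rehearsal_weight, and the latent_dim attribute) are hypothetical stand-ins rather than identifiers from the thesis, and a plain supervised loss stands in for the reinforcement learning objective RePR actually uses.

import torch
import torch.nn.functional as F

def pseudo_rehearsal_step(new_net, old_net, generator, real_x, real_y,
                          optimizer, n_pseudo=32, rehearsal_weight=1.0):
    """One training step mixing new-task data with generated pseudo-items.

    new_net:   network being trained on the new task
    old_net:   frozen copy of the network from before the new task
    generator: generative model trained on previously seen tasks
    All module and argument names are illustrative, not from the thesis.
    """
    # Sample pseudo-items representative of earlier tasks.
    # (Assumes the generator exposes a latent_dim attribute.)
    with torch.no_grad():
        z = torch.randn(n_pseudo, generator.latent_dim)
        pseudo_x = generator(z)
        # The old network's outputs serve as soft targets, standing in
        # for stored memories of the previous tasks.
        pseudo_targets = old_net(pseudo_x)

    optimizer.zero_grad()
    # Loss on the new task's real data (in the thesis this would be an
    # RL objective; a supervised loss is used here for illustration).
    new_loss = F.cross_entropy(new_net(real_x), real_y)
    # Rehearsal loss keeping the new network close to the old one on the
    # generated data, which is what counteracts catastrophic forgetting.
    rehearsal_loss = F.mse_loss(new_net(pseudo_x), pseudo_targets)
    loss = new_loss + rehearsal_weight * rehearsal_loss
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch the trade-off between acquiring the new task and retaining the old ones is controlled by the hypothetical rehearsal_weight coefficient; the generated data never has to be stored, which is the usual motivation for pseudo-rehearsal over storing raw past experience.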
We explore a dual-network architecture with self-refreshing memory (An...
Reinforcement learning (RL) problems are a fundamental part of machine learning theory, and neura...
Humans learn all their life long. They accumulate knowledge from a sequence of learning experiences ...
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intel...
Humans can learn to perform multiple tasks in succession over the lifespan ("continual" learning), w...
Artificial neural networks are promising as general function approximators but challenging to train ...
With the capacity of continual learning, humans can continuously acquire knowledge throughout their ...
Deep neural networks are used in many state-of-the-art systems for machine perception. Once a networ...