Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment. As a result, dense neural networks require long training times to achieve good performance, consuming prohibitive computation and memory resources. Recently, learning efficient DRL agents has received increasing attention, yet current methods focus on accelerating inference time. In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning that accelerates the training process. The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training. Experiments on continuous control tasks show ...
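For intuition, below is a minimal sketch of the general prune-and-grow mechanism that dynamic sparse training methods of this kind build on (in the style of sparse evolutionary training); the layer size, sparsity level, update interval, and helper names are illustrative assumptions, not the paper's exact algorithm.

import torch

def make_mask(weight: torch.Tensor, density: float) -> torch.Tensor:
    # Random binary mask keeping roughly a `density` fraction of connections.
    return (torch.rand_like(weight) < density).float()

def prune_and_grow(weight: torch.Tensor, mask: torch.Tensor, frac: float) -> torch.Tensor:
    # Drop the smallest-magnitude active weights, then regrow the same number
    # of connections at random inactive positions, keeping sparsity constant.
    active = mask.bool()
    n_prune = int(frac * active.sum().item())
    if n_prune == 0:
        return mask
    # Prune: find the weakest active connections (inactive ones are ignored).
    magnitudes = weight.abs().masked_fill(~active, float("inf"))
    drop = torch.topk(magnitudes.flatten(), n_prune, largest=False).indices
    new_mask = mask.clone().flatten()
    new_mask[drop] = 0.0
    # Grow: activate an equal number of random inactive connections.
    inactive = (new_mask == 0).nonzero(as_tuple=True)[0]
    grow = inactive[torch.randperm(inactive.numel())[:n_prune]]
    new_mask[grow] = 1.0
    return new_mask.view_as(mask)

# Usage sketch: mask one layer, train, and periodically adapt the topology.
w = torch.nn.Parameter(torch.randn(256, 128) * 0.01)
mask = make_mask(w, density=0.1)
for step in range(1000):
    # ... forward/backward pass using (w * mask) as the effective weight ...
    if step % 100 == 0:          # topology-update interval (assumed)
        mask = prune_and_grow(w.data, mask, frac=0.3)
        w.data *= mask           # grown weights restart at zero and relearn

Magnitude-based pruning paired with regrowth keeps the number of active connections fixed, so the per-step cost stays proportional to the sparse topology throughout training rather than to the dense network.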
Data inefficiency is one of the major challenges for deploying deep reinforcement learning algorithm...
This thesis proposes some new answers to an old question - how can artificially intelligent agents e...
Reinforcement Learning (RL) has seen exponential performance improvements over the past decade, achi...
A fundamental task for artificial intelligence is learning. Deep Neural Networks have proven to cope...
Recently, sparse training methods have started to be established as a de facto approach for training...
Sparse neural networks have been widely applied to reduce the necessary resource requirements to tra...
We investigate sparse representations for control in reinforcement learning. While these representat...
Sparse training has received an upsurging interest in machine learning due to its tantalizing saving...
Although Deep Reinforcement Learning (DRL) has been popular in many disciplines including robotics, ...
The reinforcement learning (RL) problem is rife with sources of non-stationarity, making it a notori...
In this paper, we introduce a new perspective on training deep neural networks capable of state-of-t...
The lottery ticket hypothesis questions the role of overparameterization in supervised deep learning...
Reinforcement learning (RL) is a broad family of algorithms for training autonomous agents to collec...
One of the most challenging types of environments for a Deep Reinforcement Learning agent to learn i...