Learning a motor skill task with Reinforcement Learning still takes a long time. A way to speed up the learning process without using much prior knowledge is to use subgoals. In this study, the use of subgoals decreased the learning time by a factor of nine, and we show that tests on a real robot give similar results. The price to be paid, in case the subgoals do not lie on the optimal path, is worse final performance. Hierarchical greedy execution can (partially) cancel out this problem. For future work, we suggest the use of a method that is able to obtain optimal performance.
Mobile robots are increasingly being employed for performing complex tasks in dynamic environments. ...
The Delft Biorobotics Laboratory develops bipedal humanoid robots. One of these robots, called LEO, ...
Human knowledge can reduce the number of iterations required to learn in reinforcement learning. Tho...
Reinforcement learning is a way to learn control tasks by trial and error. Even for simple motor con...
Solving obstacle-clustered robotic navigation tasks via model-free reinforcement learning (RL) is ch...
Autonomous systems are often difficult to program. Reinforcement learning (RL) is an attractive alte...
An ability to adjust to changing environments and unforeseen circumstances is likely to be an import...
We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowle...
The capacity of re-using previously acquired skills can greatly enhance robots' learning speed and...
This paper presents a new method for the autonomous construction of hierarchical action and state re...
Autonomous robots have been a long standing vision of robotics, artificial intelligence,...
For robots to perform tasks in the unstructured environments of the real world, they must be able to...
Machines (HAM) (Parr, 1998) and the MAXQ approach (Dietterich, 2000). They are all based on the noti...
This book presents the state of the art in reinforcement learning applied to robotics both in terms ...
Following the principle of human skill learning, robot acquiring skill is a process similar to human...