Shortcomings of reinforcement learning for robot control include the sparsity of the environmental reward function, the high number of trials required to reach an efficient action policy, and the reliance on exploration to gather information about the environment, which can result in undesired actions. These limitations can be overcome by adding a human in the loop to provide additional information during the learning phase. In this paper, we propose a novel way to combine human inputs and reinforcement learning by following the Supervised Progressively Autonomous Robot Competencies (SPARC) approach. We compare this method to the principles of Interactive Reinforcement Learning as proposed by Thomaz and Breazeal. Results from a study involving...
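The supervision loop described in this abstract can be illustrated with a minimal sketch: the robot proposes an action from its current policy, a human supervisor may passively accept or actively correct it, and the executed (possibly corrected) action is what gets reinforced. The class and supervisor callback below are hypothetical illustrations for a tabular Q-learning setting, not the paper's actual implementation.

```python
class SupervisedAgent:
    """Sketch of supervised RL in the spirit of SPARC (hypothetical API).

    The agent proposes an action; an optional supervisor callback can
    override it before execution, and the Q-update is applied to the
    action that was actually executed.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        # Tabular action-value estimates, initialised to zero.
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma = alpha, gamma

    def propose(self, state):
        # Greedy proposal from the current value estimates.
        row = self.q[state]
        return max(range(len(row)), key=row.__getitem__)

    def step(self, state, next_state, reward, supervisor=None):
        action = self.propose(state)
        # The supervisor may correct the proposal; passive acceptance
        # (returning the proposal unchanged) leaves the robot autonomous.
        if supervisor is not None:
            action = supervisor(state, action)
        # Standard Q-learning update on the executed action.
        best_next = max(self.q[next_state])
        self.q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[state][action]
        )
        return action
```

As the policy improves, the supervisor corrects less often, so control shifts progressively from the human to the robot, which is the core idea the abstract attributes to SPARC.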
This paper aims at proposing a general framework of shared control for human-robot interaction. Huma...
Making robot technology accessible to general end-users promises numerous benefits for all aspects o...
In this article we describe a novel algorithm that allows fast and continuous learning on a physical...
© 2017 When a robot is learning it needs to explore its environment and how its environment responds...
Social interaction is a complex task for which machine learning holds particular promise. However, a...
Traditionally the behaviour of social robots has been programmed. However, increasingly there has be...
Striking the right balance between robot autonomy and human control is a core challenge in social ro...
The Wizard-of-Oz robot control methodology is widely used and typically places a high burden of effo...
Keeping a human in a robot learning cycle can provide many advantages to improve the learning proces...
While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory ...
Robots are extending their presence in domestic environments every day, and it is increasingly common to see ...
Recent successes combine reinforcement learning algorithms and deep neural networks, despite reinfor...
Early literacy and language skills are a significant precursor to children’s later educational succe...