This paper proposes a methodology for learning joint probability estimates of the effect of sensorimotor features on the predicted quality of a desired behavior. These relationships can then be used to choose the actions most likely to produce success. Relational dependency networks are used to learn statistical models of procedural task knowledge. An example task expert for picking up objects is learned through actual experience on a humanoid robot. We believe that this approach is widely applicable and has great potential to allow a robot to autonomously determine which features in the world are salient and should be used to recommend a policy for action.
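The core idea can be illustrated with a minimal sketch: estimate the probability of success conditioned on observed sensorimotor features and the chosen action, then pick the action with the highest estimate. The sketch below is not the paper's implementation; a Laplace-smoothed conditional-probability table stands in for the relational dependency networks described in the abstract, and all feature names, actions, and class/method names are hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): estimate P(success | features, action)
# from logged grasp attempts and greedily select the action most likely to succeed.
from collections import defaultdict
from typing import Dict, Iterable, Tuple


class SuccessModel:
    def __init__(self, smoothing: float = 1.0):
        self.smoothing = smoothing
        self.successes: Dict[Tuple, int] = defaultdict(int)  # (features, action) -> #successes
        self.trials: Dict[Tuple, int] = defaultdict(int)     # (features, action) -> #attempts

    def update(self, features: Tuple, action: str, succeeded: bool) -> None:
        """Record the outcome of one attempt with the given features and action."""
        key = (features, action)
        self.trials[key] += 1
        if succeeded:
            self.successes[key] += 1

    def p_success(self, features: Tuple, action: str) -> float:
        """Laplace-smoothed estimate of P(success | features, action)."""
        key = (features, action)
        return (self.successes[key] + self.smoothing) / (self.trials[key] + 2 * self.smoothing)

    def best_action(self, features: Tuple, actions: Iterable[str]) -> str:
        """Choose the action with the highest estimated success probability."""
        return max(actions, key=lambda a: self.p_success(features, a))


# Hypothetical usage: features are discretized sensorimotor observations
# (e.g. object size and distance bins); actions are candidate grasp strategies.
model = SuccessModel()
model.update(("small", "near"), "overhead_grasp", True)
model.update(("small", "near"), "side_grasp", False)
print(model.best_action(("small", "near"), ["overhead_grasp", "side_grasp"]))
```

In the paper's setting the tabular estimator would be replaced by a learned relational model over the robot's sensorimotor features; the action-selection step (maximize estimated success probability) is the same.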