Robot grasping depends on the specific manipulation scenario: the object, its properties, and the task and grasp constraints. Object-task affordances enable semantic reasoning about pre-grasp configurations with respect to the intended task, favoring good grasps. We employ probabilistic rule learning to recover such object-task affordances for task-dependent grasping from realistic video data.
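The abstract does not spell out the rule representation; purely as a hedged illustration of what weighted object-task affordance rules and their evaluation could look like, here is a minimal self-contained Python sketch. All predicates, weights, object features, and the noisy-OR scoring below are assumptions for illustration, not the paper's actual learned model.

```python
# Hypothetical sketch of probabilistic object-task affordance rules.
# Rule heads, feature names, and weights are illustrative only.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    """A weighted rule: IF the object/task features hold, THEN the
    pre-grasp configuration affords the task with this probability."""
    pre_grasp: str                              # e.g. grasp at "handle" or from "top"
    condition: Callable[[Dict, str], bool]      # test on (object_properties, task)
    probability: float                          # learned rule weight (assumed)

# Example rules a probabilistic rule learner might recover from
# annotated video data (weights are made up).
RULES = [
    Rule("handle", lambda obj, task: obj.get("handle") and task == "pour", 0.92),
    Rule("top",    lambda obj, task: obj.get("open_top") and task == "pass", 0.75),
    Rule("side",   lambda obj, task: True, 0.30),   # weak default rule
]

def affordance_scores(obj: Dict, task: str) -> Dict[str, float]:
    """Noisy-OR aggregation over the rules whose bodies fire, giving
    P(pre-grasp affords task | object, task) per pre-grasp configuration."""
    scores: Dict[str, float] = {}
    for rule in RULES:
        if rule.condition(obj, task):
            prev = scores.get(rule.pre_grasp, 0.0)
            scores[rule.pre_grasp] = 1.0 - (1.0 - prev) * (1.0 - rule.probability)
    return scores

if __name__ == "__main__":
    mug = {"handle": True, "open_top": True}
    print(affordance_scores(mug, "pour"))   # handle grasp favored for pouring
```

In the work described by the abstract, such rules and their weights are learned from video data by a probabilistic rule learner rather than written by hand as in this sketch.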