One perspective for artificial intelligence research is to build machines that perform tasks autonomously in our complex everyday environments. This setting poses challenges to the development of perception skills: a robot should be able to perceive its location and the objects in its surroundings, while the objects and the robot itself may be moving. Objects may not only be composed of rigid parts, but could be non-rigidly deformable or appear in a variety of similar shapes. Furthermore, observing object semantics could be relevant to the task. For a robot to act fluently and immediately, these perception challenges demand efficient methods. This thesis presents novel approaches to robot perception with RGB-D sensors. It develops ef...
Machine learning methods and object recognition algorithms have improved much in the past decade, bu...
Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve ...
We explore how we can use synthetically generated RGB-D training data from a near photo-realistic ga...
Autonomous robots operating in unstructured real-world settings cannot rely on an a priori map of th...
Humans understand the world through vision without much effort. We perceive the structure, objects, ...
In order to safely and effectively operate in real-world unstructured environments where a priori kn...
We present an interactive perception system that enables an autonomous agent to deliberately interac...
The availability of RGB-D (Kinect-like) cameras has led to an explosive growth of research on robot ...
In this paper, we present a system for automatically learning segmentations of objects give...
Robotic systems have shown impressive results at navigating in previously mapped areas, in particula...
For interaction with its environment, a robot is required to learn models of objects and to perceive...
This article describes interactive object segmentation for autonomous service robots acting in human...
We propose a real-time approach to learn semantic maps from moving RGB-D cameras. Our method models ...
We present an unsupervised framework for simultaneous appearance-based object discovery,...