Mobile robots are increasingly being used in high-risk rough terrain situations, such as reconnaissance, planetary exploration, and search-and-rescue applications. Conventional localization algorithms are not well suited to rough terrain, since sensor drift and the dynamic effects occurring at the wheel-terrain interface, such as slipping and sinkage, largely compromise their accuracy. In this paper, we present a novel approach for 6-DoF ego-motion estimation using stereovision. It integrates image intensity information and 3D stereo data within an Iterative Closest Point (ICP) scheme. Neither a priori knowledge of the motion and the terrain properties nor inputs from other sensors are required; the only assumption is that the scene always co...
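Since the approach above is built around an Iterative Closest Point scheme, a minimal sketch of plain point-to-point ICP may help fix ideas. The code below is a generic NumPy/SciPy illustration only: it uses nearest-neighbour correspondences and an SVD-based rigid fit, and it does not include the image-intensity matching that the paper integrates into the scheme; the names `icp` and `best_rigid_transform` are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=30, tol=1e-6):
    """Point-to-point ICP: rigidly align 3D point cloud `src` (Nx3) to `dst` (Mx3)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    prev_err = np.inf
    cur = src.copy()
    for _ in range(iters):
        dist, idx = tree.query(cur)               # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t                       # apply the incremental motion
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:             # stop when the residual stabilizes
            break
        prev_err = err
    return R_total, t_total
```

In an ego-motion setting, `src` and `dst` would be the stereo point clouds of two consecutive frames, and the returned rigid transform is the frame-to-frame motion estimate.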
Vision-based motion estimation is an effective means for mobile robot localization and is often used...
Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image informati...
External perception based on vision plays a critical role in developing improved and robust localiza...
For unmanned vehicles to successfully accomplish the planned task in high-risk r...
A navigation algorithm for mobile robots in unknown rough terrain has been developed. The ...
Space robotic systems have been playing a crucial role in planetary exploration missions, expanding ...
The intent of this paper is to show how the accuracy of 3D position tracking can be improved by cons...
In this paper, we propose a novel, efficient stereo visual-odometry algorithm for ground vehicles mo...
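The entry above is truncated before any algorithmic detail; as a hedged point of reference only (not necessarily the algorithm proposed there), a common stereo visual-odometry step back-projects features of one stereo pair to 3D, tracks them into the next left image, and solves for the motion with PnP plus RANSAC. A compressed OpenCV sketch under those assumptions, where the helper name `stereo_vo_step` and all parameter values are illustrative:

```python
import cv2
import numpy as np

def stereo_vo_step(left_k, right_k, left_k1, K, Q):
    """Estimate camera motion between frames k and k+1 from one stereo pair plus
    the next left image. K: 3x3 intrinsics, Q: 4x4 stereo reprojection matrix."""
    # 1. Dense disparity on frame k, back-projected to 3D (camera frame of k).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgbm.compute(left_k, right_k).astype(np.float32) / 16.0
    cloud = cv2.reprojectImageTo3D(disp, Q)

    # 2. Sparse features in left_k, tracked into left_k1 with pyramidal LK flow.
    pts_k = cv2.goodFeaturesToTrack(left_k, maxCorners=800, qualityLevel=0.01, minDistance=8)
    pts_k1, status, _ = cv2.calcOpticalFlowPyrLK(left_k, left_k1, pts_k, None)
    ok = status.ravel() == 1
    pts_k, pts_k1 = pts_k[ok].reshape(-1, 2), pts_k1[ok].reshape(-1, 2)

    # 3. Keep only tracks with a valid disparity / finite 3D point.
    ij = pts_k.astype(int)
    xyz = cloud[ij[:, 1], ij[:, 0]]
    valid = np.isfinite(xyz).all(axis=1) & (disp[ij[:, 1], ij[:, 0]] > 0)

    # 4. 3D (frame k) to 2D (frame k+1) motion via PnP + RANSAC.
    _, rvec, tvec, inliers = cv2.solvePnPRansac(
        xyz[valid].astype(np.float32), pts_k1[valid].astype(np.float32), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec   # pose of frame k expressed in the frame k+1 camera
```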
This paper presents a method that estimates a robot's displacements in outdoor unstructured terrain. I...
Mobile robots are characterised by their capacity to move autonomously in an environment t...
Future outdoor mobile robots will have to explore larger and larger areas, performing difficult task...
Visual Odometry is...
Here we present a robust method for monocular visual odometry capable of accurate position estimatio...
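Again as a generic illustration rather than the cited method, a textbook monocular visual-odometry step tracks sparse features between consecutive frames and recovers the relative pose from the essential matrix; with a single camera the translation is only recovered up to scale. A short OpenCV sketch under those assumptions, where the name `mono_vo_step` and the parameter values are ours:

```python
import cv2
import numpy as np

def mono_vo_step(img_prev, img_cur, K):
    """One textbook monocular VO step: track features, estimate the essential
    matrix with RANSAC, and recover the relative rotation and unit-scale
    translation. K is the 3x3 camera intrinsic matrix."""
    pts_prev = cv2.goodFeaturesToTrack(img_prev, maxCorners=1500,
                                       qualityLevel=0.01, minDistance=7)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_cur, pts_prev, None)
    ok = status.ravel() == 1
    p0, p1 = pts_prev[ok].reshape(-1, 2), pts_cur[ok].reshape(-1, 2)

    # RANSAC on the essential matrix rejects tracks violating epipolar geometry.
    E, mask = cv2.findEssentialMat(p1, p0, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E; translation direction only (scale is unobservable monocularly).
    _, R, t, _ = cv2.recoverPose(E, p1, p0, K, mask=mask)
    return R, t
```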