Robot localization is a mandatory capability for a robot to navigate the world. Solving the SLAM (Simultaneous Localization and Mapping) problem allows the robot to localize itself in the environment while building a map of its surroundings. Vision-based SLAM uses one or more cameras as the main source of information. SLAM involves a large computational load on its own, and using vision adds further complexity that does not scale well. This growing complexity makes SLAM hard to solve in real time for applications where high rate and low latency are inherent constraints, such as Advanced Driver Assistance Systems. To help robots solve SLAM in real time, we propose a vision core that aims at processing the pixel stream coming...