The literature provides many techniques for designing efficient control laws to realize robotic navigation tasks. In recent years, improvements in sensing have given rise to sensor-based control, which allows the robotic task to be defined in the sensor space rather than in the configuration space. In this context, as cameras provide meaningful data at a high rate, visual servoing has been particularly investigated and can be used to perform a wide range of accurate navigation tasks. This method, which relies on the interaction between the camera motion and the motion of the visual features, consists of regulating an error in the image plane. Nonetheless, vision-based navigation tasks in cluttered environments cannot be expressed as a sole regulation of visual data. Indeed...