In this study, a novel end-to-end path planning algorithm based on deep reinforcement learning is proposed for aerial robots deployed in dense environments. The learning agent finds an obstacle-free route around the provided rough global path, relying solely on observations from a forward-facing depth camera. A deep reinforcement learning framework is proposed to train this end-to-end policy so that it can avoid obstacles safely. The open-source Webots robot simulator is used to train the policy, with highly randomized environmental configurations for better generalization. Training is performed without dynamics calculations, using randomized position updates to minimize the amount of data processed....
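The following is a minimal, illustrative sketch of the training setup the abstract describes, not the authors' implementation: the policy observes a depth image plus the offset to the next waypoint of a rough global path, and the robot is moved by kinematic, randomized position updates rather than full dynamics. All class, method, and parameter names here are assumptions made for the example.

```python
# Hypothetical sketch of a dynamics-free training environment; names and
# values are assumptions, not taken from the paper or the Webots API.
import numpy as np


class KinematicDepthNavEnv:
    """Toy environment: depth observation + waypoint offset, kinematic position updates."""

    def __init__(self, depth_shape=(64, 64), step_size=0.2, goal_radius=0.5):
        self.depth_shape = depth_shape
        self.step_size = step_size
        self.goal_radius = goal_radius
        self.reset()

    def reset(self):
        # Randomized environment configuration: obstacle positions and waypoint.
        self.position = np.zeros(3)
        self.waypoint = np.random.uniform(low=[3, -2, 1], high=[6, 2, 2])
        self.obstacles = np.random.uniform(low=[1, -3, 0], high=[5, 3, 3], size=(10, 3))
        return self._observe()

    def _observe(self):
        # Placeholder for a forward-facing depth image rendered by the simulator.
        depth = np.random.rand(*self.depth_shape).astype(np.float32)
        return {"depth": depth, "waypoint_offset": self.waypoint - self.position}

    def step(self, action):
        # Kinematic (dynamics-free) position update in the commanded direction.
        direction = np.asarray(action, dtype=np.float64)
        direction /= np.linalg.norm(direction) + 1e-8
        self.position = self.position + self.step_size * direction

        dist_to_goal = np.linalg.norm(self.waypoint - self.position)
        collided = np.any(np.linalg.norm(self.obstacles - self.position, axis=1) < 0.3)

        # Reward progress toward the waypoint; penalize collisions heavily.
        reward = -0.01 * dist_to_goal + (-10.0 if collided else 0.0)
        done = collided or dist_to_goal < self.goal_radius
        return self._observe(), reward, done, {}


if __name__ == "__main__":
    env = KinematicDepthNavEnv()
    obs = env.reset()
    for _ in range(50):
        # A random policy stands in here for the trained DRL agent.
        obs, reward, done, _ = env.step(np.random.randn(3))
        if done:
            obs = env.reset()
```

In the actual framework, the random depth placeholder would be replaced by the simulator's rendered depth image and the random policy by the learned DRL agent; the point of the sketch is the position-update loop that sidesteps dynamics computation during training.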