This thesis introduces the use of Machine Learning, specifically Reinforcement Learning (RL), to create a model-free tracking capability for Remotely Operated Vehicles (ROVs). In detail, the ROV is trained by an RL algorithm to track an ArUco marker, using an online Computer Vision (CV) algorithm for detection. The main motivation behind this work is to contribute to increased autonomy in underwater operations by introducing model-free autonomous tracking behavior to underwater vehicles. This approach requires minimal human intervention during operation while significantly reducing the prior control-programming effort. First, simulator-based training of the ROV's tracking behavior was done ...
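The abstract above does not specify how the RL reward is shaped from the CV detection. A minimal illustrative sketch, assuming the common design of rewarding the agent for keeping the detected marker near the image centre (the function name `tracking_reward`, the image resolution, and the miss penalty are all hypothetical, not taken from the thesis):

```python
import numpy as np

def tracking_reward(marker_center, image_size=(640, 480), max_reward=1.0):
    """Reward shaped by the detected marker's offset from the image centre.

    marker_center: (x, y) pixel coordinates of the detected ArUco marker,
    or None when no marker was detected in the current frame.
    """
    if marker_center is None:
        return -1.0  # penalise losing sight of the marker
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Normalised distance from the image centre, in [0, 1].
    dist = np.hypot(marker_center[0] - cx, marker_center[1] - cy)
    max_dist = np.hypot(cx, cy)
    return max_reward * (1.0 - dist / max_dist)

# A marker at the image centre earns the full reward:
print(tracking_reward((320, 240)))  # 1.0
```

In practice the marker centre would come from a detector such as OpenCV's ArUco module, and this scalar reward would be fed to the RL algorithm at each control step; the exact shaping used in the thesis may differ.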
Autonomous underwater vehicles (AUV) represent a challenging control problem with complex, noisy, dy...
Deep Reinforcement Learning (DRL) methods are increasingly being applied in Unmanned Underwater Vehi...
Underwater vehicles are employed in the exploration of dynamic environments where tuning of a specif...
Deep Reinforcement Learning methods for Underwater target Tracking This is a set of tools developed...
At the Australian National University we are developing an autonomous underwater vehicle for explora...
Autonomous underwater vehicles (AUVs) are widely used to accomplish various missions in the complex ...
To realize the potential of autonomous underwater robots that scale up our observational capacity in...
We present a reinforcement learning-based (RL) control scheme for trajectory tracking of fully-actua...
This paper proposes a field application of a high-level reinforcement learning (RL) control system f...
ICM-CRM Meeting 2023: New Bridges between Marine Sciences and Mathematics, 2-10 November 2023. Reinfor...
International audienceThe marine environment is a hostile setting for robotics. It is strongly unstr...
Due to the unknown motion model and the complexity of the environment, the problem of target trackin...
18th International Conference on Automation Science and Engineering (CASE), 20-24 August 2022.-- 8 p...
In this study, we present a platform-portable deep reinforcement learning method that has been used ...
Summary. We present an application of the ensemble learning algorithm in the area of visual tracking...