Learning Underwater Active Perception in Simulation
arXiv:2504.17817v2 Announce Type: replace
Abstract: When employing underwater vehicles for the autonomous inspection of assets, it is crucial to assess the water conditions, which significantly impact visibility and directly affect robotic operations. Turbidity can jeopardise a mission by preventing accurate visual documentation of inspected structures. Previous works have introduced methods to adapt to turbidity and backscattering; however, they impose manoeuvring and setup constraints. We propose a simple yet efficient approach to enable high-quality image acquisition of assets across a broad range of water conditions. This active perception framework includes a multi-layer perceptron (MLP) trained to predict image quality given a distance to a target and an artificial light intensity. We generate a large synthetic dataset covering ten water types with varying levels of turbidity and backscattering; for this, we modified the modelling software Blender to better account for underwater light propagation. We validated the approach in simulation and demonstrated significant improvements in visual coverage and image quality compared to traditional methods. The project code is available on our project page at https://roboticimaging.org/Projects/ActiveUW/.
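To illustrate the idea of an MLP-based active perception loop, the sketch below shows a quality predictor mapping (distance, light intensity) to a score, plus a grid search that selects the best-scoring viewpoint. This is a hypothetical minimal example, not the paper's implementation: the architecture, input scaling, and candidate grid are assumptions, and the weights are random stand-ins for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class QualityMLP:
    """Hypothetical 2-16-16-1 MLP: (distance, light intensity) -> quality score.

    The paper's actual architecture and training procedure are not specified
    here; the weights below are random placeholders for a trained model.
    """

    def __init__(self):
        self.W1 = rng.normal(size=(2, 16)); self.b1 = np.zeros(16)
        self.W2 = rng.normal(size=(16, 16)); self.b2 = np.zeros(16)
        self.W3 = rng.normal(size=(16, 1)); self.b3 = np.zeros(1)

    def predict(self, distance, intensity):
        # Forward pass; sigmoid output keeps the quality score in [0, 1].
        x = np.array([distance, intensity], dtype=float)
        h = relu(x @ self.W1 + self.b1)
        h = relu(h @ self.W2 + self.b2)
        return float(1.0 / (1.0 + np.exp(-(h @ self.W3 + self.b3))))

def best_viewpoint(model, distances, intensities):
    """Grid-search candidate (distance, intensity) pairs; keep the best score."""
    candidates = [(d, i) for d in distances for i in intensities]
    return max(candidates, key=lambda c: model.predict(*c))

model = QualityMLP()
# Candidate distances in metres and normalised light intensities (assumed ranges).
d, i = best_viewpoint(model, np.linspace(0.5, 3.0, 6), np.linspace(0.0, 1.0, 5))
```

In a deployed system, the selected pair would be handed to the vehicle's motion and lighting controllers, and the search could be rerun as water conditions change.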