
A Deep Learning Framework for Semantic Segmentation of Underwater Environments

Amos Smith; Jeremy Coffelt; Kai Lingemann
In: OCEANS 2022 Hampton Roads. OCEANS MTS/IEEE Conference (OCEANS 2022), October 17-20, Hampton Roads, VA, USA, IEEE, 10/2022.


Perception tasks such as object classification and segmentation are crucial to underwater robotic missions like bathymetric surveys and infrastructure inspections. Marine robots in these applications typically use a combination of laser scanner, camera, and sonar sensors to generate images and point clouds of the environment. Traditional perception approaches often struggle to overcome water turbidity, light attenuation, marine snow, and other harsh conditions of the underwater world. Deep learning-based perception techniques have proven capable of overcoming such difficulties, but are often limited by the availability of relevant training data. In this paper, we propose a framework that consists of the procedural creation of randomized underwater pipeline environment scenes, the generation of corresponding point clouds with semantic labels, and the training of a 3D segmentation network on the synthetic data. The resulting segmentation network is evaluated on real underwater point cloud data and compared with a traditional baseline approach.
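To give a sense of the synthetic-data step described in the abstract, the sketch below generates a minimal labeled point cloud of a cylindrical pipe resting on a flat seafloor. This is a hypothetical illustration only: the geometry, class balance, noise model, and function name `synthetic_pipeline_scene` are assumptions, not the paper's actual procedural generator.

```python
import numpy as np

def synthetic_pipeline_scene(n_floor=2000, n_pipe=1000, seed=0):
    """Sample a labeled point cloud: a pipe lying on a flat seafloor.

    Labels: 0 = seafloor, 1 = pipe. Geometry and noise level are
    illustrative assumptions, not the paper's generator.
    """
    rng = np.random.default_rng(seed)

    # Seafloor: points on the z = 0 plane over a 10 m x 10 m patch.
    floor = np.column_stack([
        rng.uniform(-5, 5, n_floor),
        rng.uniform(-5, 5, n_floor),
        np.zeros(n_floor),
    ])

    # Pipe: cylinder of radius r along the x-axis, resting on the floor.
    r = 0.3
    theta = rng.uniform(0, 2 * np.pi, n_pipe)
    x = rng.uniform(-5, 5, n_pipe)
    pipe = np.column_stack([x, r * np.cos(theta), r + r * np.sin(theta)])

    points = np.vstack([floor, pipe])
    labels = np.concatenate([np.zeros(n_floor, int), np.ones(n_pipe, int)])

    # Additive noise as a crude stand-in for turbidity / sensor speckle.
    points += rng.normal(scale=0.01, size=points.shape)
    return points, labels

points, labels = synthetic_pipeline_scene()
```

Pairs of `(points, labels)` like these are the form of supervision a 3D semantic segmentation network needs; randomizing the scene parameters per sample is what makes the training set diverse.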

