Publication
Autonomous underwater vehicle link alignment control in unknown environments using reinforcement learning
Yang Weng; Sehwa Chun; Masaki Ohashi; Takumi Matsuda; Yuki Sekimori; Joni Pajarinen; Jan Peters; Toshihiro Maki
In: Journal of Field Robotics (JFR), Vol. 41, No. 6, Pages 1724-1743, Wiley Online Library, 2024.
Abstract
High-speed underwater wireless optical communication holds immense promise in ocean monitoring and surveys, providing crucial support for the real-time sharing of observational data collected by autonomous underwater vehicles (AUVs). However, due to inaccurate target information and external interference in unknown environments, link alignment is challenging and needs to be addressed. In response to these challenges, we propose a reinforcement learning-based alignment method to control the AUV to establish an optical link and maintain alignment. Our alignment control system utilizes a combination of sensors, including a depth sensor, Doppler velocity log (DVL), gyroscope, ultra-short baseline (USBL) device, and acoustic modem. These sensors are used in conjunction with a particle filter to observe the environment and estimate the AUV's state accurately. The soft actor-critic algorithm is used to train a reinforcement learning-based controller in a simulated environment to reduce pointing errors and energy consumption in alignment. After experimental validation in simulation, we deployed the controller on an actual AUV called Tri-TON. In experiments at sea, Tri-TON maintained the link and angular pointing errors within 1 m and 10°, respectively. Experimental results demonstrate that the proposed alignment control method can establish underwater optical communication between AUV fleets, thus improving the efficiency of marine surveys.
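The abstract describes fusing depth, DVL, gyroscope, USBL, and acoustic-modem measurements with a particle filter to estimate the AUV's state. The paper's exact state layout, noise models, and measurement equations are not reproduced here, so the following is only a minimal, generic bootstrap particle filter sketch in Python/NumPy under assumed choices: a hypothetical state [x, y, depth, yaw], DVL/gyro-driven propagation, and a USBL-style horizontal position fix for re-weighting.

```python
import numpy as np

# Minimal bootstrap particle filter sketch for AUV state estimation.
# This is NOT the paper's implementation; the state layout [x, y, depth, yaw],
# noise levels, and measurement model are illustrative assumptions only.

rng = np.random.default_rng(0)
N = 1000                                   # number of particles
particles = np.zeros((N, 4))               # columns: x [m], y [m], depth [m], yaw [rad]
weights = np.full(N, 1.0 / N)

def predict(particles, v_body, yaw_rate, depth_meas, dt):
    """Propagate particles using DVL body velocity, gyro yaw rate, and the depth sensor."""
    yaw = particles[:, 3]
    # Rotate DVL body-frame velocity (surge, sway) into the horizontal world frame.
    vx = v_body[0] * np.cos(yaw) - v_body[1] * np.sin(yaw)
    vy = v_body[0] * np.sin(yaw) + v_body[1] * np.cos(yaw)
    particles[:, 0] += vx * dt + rng.normal(0.0, 0.05, N)
    particles[:, 1] += vy * dt + rng.normal(0.0, 0.05, N)
    particles[:, 2] = depth_meas + rng.normal(0.0, 0.02, N)   # depth is observed directly
    particles[:, 3] = yaw + yaw_rate * dt + rng.normal(0.0, 0.01, N)
    return particles

def update(particles, weights, usbl_xy, sigma=1.0):
    """Re-weight particles against a USBL-style horizontal position fix."""
    d2 = np.sum((particles[:, :2] - usbl_xy) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / sigma**2) + 1e-300          # avoid all-zero weights
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    """Resample when the effective sample size drops below half the particle count."""
    if 1.0 / np.sum(weights**2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

def estimate(particles, weights):
    """Weighted-mean state estimate that a downstream alignment controller could consume."""
    return np.average(particles, axis=0, weights=weights)
```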
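The controller itself is trained with soft actor-critic in simulation to reduce pointing error and energy consumption. The paper's simulator, observation/action spaces, and reward weights are not given in this abstract, so the sketch below only illustrates the general training setup with Gymnasium and Stable-Baselines3; the environment class `AlignmentEnv`, its toy dynamics, and the reward coefficients are hypothetical.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

class AlignmentEnv(gym.Env):
    """Toy link-alignment environment (hypothetical, not the paper's simulator).

    Observation: relative pose error to the partner vehicle (x, y, z, heading).
    Action: surge/sway/heave/yaw commands in [-1, 1].
    Reward: penalizes pointing error and actuation effort (a proxy for energy use).
    """

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.state = None

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Start from a random offset relative to the desired alignment pose.
        self.state = self.np_random.uniform(-2.0, 2.0, size=4).astype(np.float32)
        return self.state, {}

    def step(self, action):
        # Crude first-order dynamics: actions reduce the pose error, plus process noise.
        noise = self.np_random.normal(0.0, 0.01, size=4)
        self.state = (self.state - 0.1 * action + noise).astype(np.float32)
        pointing_error = float(np.linalg.norm(self.state))
        energy_penalty = 0.05 * float(np.sum(np.square(action)))
        reward = -(pointing_error + energy_penalty)
        terminated = pointing_error < 0.1          # close enough to aligned
        return self.state, reward, terminated, False, {}

if __name__ == "__main__":
    env = AlignmentEnv()
    model = SAC("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=10_000)            # small budget, for illustration only
    model.save("sac_alignment_sketch")
```

The negative reward couples alignment accuracy and control effort, mirroring the abstract's stated goal of reducing both pointing errors and energy consumption; in practice the trained policy would act on the state estimate produced by the particle filter above.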
