Publication
Evaluation of Explanations for Object Detection Using Transformers with Sonar Data
Christoph Manss; Tarek Elmihoub
In: Artificial Intelligence XLII, 45th SGAI International Conference on Artificial Intelligence, AI 2025. SGAI International Conference on Artificial Intelligence (AI-2025), December 16-18, Cambridge, United Kingdom, Lecture Notes in Artificial Intelligence (LNAI), Vol. 16302, Springer Nature Switzerland, Cham, 12/2025.
Abstract
For underwater object detection, sonar imagery generated by Forward Looking Sonar (FLS) and Side Scan Sonar (SSS) systems is usually inspected by experts, while AI algorithms are currently under investigation. The transparency and reasoning of AI-based object detection models are crucial for users of these detectors. However, the black-box nature of AI-based detectors limits their interpretability. In recent years, transformer networks have been employed more extensively for object detection and are also used together with sonar data. This paper investigates transformer models for object detection on sonar data and evaluates explanations for the detected objects. We test multiple transformer object detectors on two datasets: one FLS dataset and one SSS dataset. After training the transformer models, attribution maps are generated and used for explainability. Here, two eXplainable Artificial Intelligence (XAI) methods are used to produce post-hoc explanations for object detection. These post-hoc explanations are evaluated with XAI metrics that consider faithfulness and localisation. The results show that transformer networks with standard backbones provide reasonable accuracies, together with useful attribution maps, when applied to sonar data. However, multiple XAI methods have to be combined to obtain good explanations.
