Publication
Explanation in Bio-inspired Computing: Towards Understanding of AI Systems
Rolf Drechsler; Christina Plump; Bernhard Berger
In: Proceedings of the 1st International Conference on Artificial Intelligence for Computing, Astronomy, and Renewable Energy (AICARE). International Conference on Artificial Intelligence for Computing, Astronomy, and Renewable Energy (AICARE-2025), November 21-22, Kolkata, India, IEEE Xplore, 2025.
Abstract
Artificial intelligence methods and applications have recently seen a massive surge, driven in part by the success of neural networks in areas like image classification and of LLMs in generating near-perfect natural-language texts. Unnoticed by the public, but highly important for many AI methods to function, bio-inspired optimisation techniques have also seen rising usage. However, the more complex these techniques become, the harder they are to explain. Even developers of neural networks can seldom state why a network produces the results it does. The explainability of AI methods, and of systems in general, is nevertheless essential for safety and security, and for gaining and maintaining the trust of system users. While research in explainability has therefore gained significant traction for prominent AI methods such as neural networks, bio-inspired optimisation techniques have seen less research in this regard. The difficulty of explaining these algorithms lies in their use of populations and randomness. We present an approach to track individuals in bio-inspired optimisation techniques, aiming to improve our understanding of the quality of results from such optimisation algorithms. To that end, we introduce a data model, integrate this model into standard implementations of these approaches, and provide a visualisation of the relational information of these individuals, yielding more insight into these optimisation techniques and providing a first step toward improved explainability.
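To illustrate the general idea of tracking individuals across generations, the following is a minimal sketch, not the paper's actual data model (which is not reproduced here). It runs a simple genetic algorithm on the OneMax problem and records, for every individual, a unique id, its genome, fitness, generation, and parent ids, so that the full genealogy of any result can be reconstructed afterwards. All names (`Individual`, `run_tracked_ga`, `lineage`) are hypothetical.

```python
import random
from dataclasses import dataclass

# Hypothetical tracking data model: every individual ever created is
# archived with a unique id and the ids of its parents.
@dataclass
class Individual:
    id: int
    genome: list
    fitness: float
    generation: int
    parents: tuple  # parent ids; empty for the initial population

def run_tracked_ga(pop_size=10, genome_len=8, generations=5, seed=0):
    rng = random.Random(seed)
    next_id = 0
    archive = {}  # id -> Individual: the tracking record

    def make(genome, gen, parents):
        nonlocal next_id
        # OneMax fitness: number of ones in the bit string
        ind = Individual(next_id, genome, sum(genome), gen, parents)
        archive[ind.id] = ind
        next_id += 1
        return ind

    # Random initial population of bit strings
    pop = [make([rng.randint(0, 1) for _ in range(genome_len)], 0, ())
           for _ in range(pop_size)]

    for gen in range(1, generations + 1):
        offspring = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)          # two distinct parents
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a.genome[:cut] + b.genome[cut:]
            if rng.random() < 0.1:              # bit-flip mutation
                i = rng.randrange(genome_len)
                child[i] = 1 - child[i]
            offspring.append(make(child, gen, (a.id, b.id)))
        # (mu + lambda)-style survivor selection: keep the best pop_size
        pop = sorted(pop + offspring, key=lambda x: -x.fitness)[:pop_size]
    return pop, archive

def lineage(ind_id, archive):
    """Collect all ancestor ids of an individual via the tracked parent links."""
    seen, stack = set(), [ind_id]
    while stack:
        for p in archive[stack.pop()].parents:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen
```

The `archive` of parent-child relations is exactly the kind of relational information that a genealogy visualisation can be built on, e.g. to trace which initial individuals contributed to the best final solution.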
