On Explanations for Hybrid Artificial Intelligence

Lars Nolle; Frederic Theodor Stahl; Tarek Elmihoub
In: Max Bramer; Frederic Theodor Stahl (Eds.). Artificial Intelligence XL - 43rd SGAI International Conference on Artificial Intelligence, AI 2023. SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence (AI-2023), December 12-14, 2023, Cambridge, United Kingdom, Pages 3-15, Lecture Notes in Computer Science (LNCS), Vol. 14381, Springer Nature Switzerland AG, Cham, 12/2023.


Recent developments in machine learning (ML) approaches within artificial intelligence (AI) systems often require explainability of ML models. In order to establish trust in these systems, for example in safety-critical applications, a number of different explainable artificial intelligence (XAI) methods have been proposed, either as post-hoc methods or as intrinsically interpretable models. These can help to understand why an ML model has made a particular decision. The authors of this paper point out that the abbreviation XAI is commonly used in the literature to refer to explainable ML models, although the term AI encompasses many more topics than ML. To improve the efficiency and effectiveness of AI, two or more AI subsystems are often combined to solve a common problem. In this case, an overall explanation has to be derived from the subsystems' explanations. In this paper we define the term hybrid AI. This is followed by a review of the current state of XAI, before we propose the use of blackboard systems (BBS) not only to share results but also to integrate and exchange explanations of different XAI models, in order to derive an overall explanation for hybrid AI systems.
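The blackboard idea summarised in the abstract can be illustrated with a minimal sketch. All class and method names below are hypothetical and invented for illustration only; the paper does not prescribe an API. The idea is that each AI subsystem posts its local result together with its explanation to a shared blackboard, and a simple controller integrates these into an overall explanation.

```python
# Minimal sketch of a blackboard used to collect and integrate
# per-subsystem explanations. All names are hypothetical.

class Blackboard:
    """Shared workspace where AI subsystems post results and explanations."""

    def __init__(self):
        # Each entry: (subsystem name, result, explanation)
        self.entries = []

    def post(self, subsystem, result, explanation):
        self.entries.append((subsystem, result, explanation))

    def overall_explanation(self):
        # Naive integration strategy: concatenate each subsystem's
        # explanation. A real controller would also have to reconcile
        # conflicting results and dependencies between subsystems.
        return "; ".join(
            f"{name}: {expl}" for name, _, expl in self.entries
        )


bb = Blackboard()
bb.post("rule_engine", "approve", "matched rule R12 (income above threshold)")
bb.post("ml_classifier", "approve", "feature attribution: income and tenure dominate")
print(bb.overall_explanation())
```

This sketch only concatenates explanations; the integration step is exactly where the open research questions discussed in the paper lie, such as how to combine explanations produced by heterogeneous XAI methods.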
