Hannover Messe 2019

XAI 4.0 – Explainable Artificial Intelligence for Industrie 4.0

MES as a Lever for AI in Production

AI-based decision aids help subject matter experts as they make judgments in the context of their operational activities, especially when complex information and systems are involved. The targeted use of data-driven decision making can lead to significant productivity gains in the manufacturing sector, provided the cognitive insights are successfully operationalized and embedded into the business processes. Such integration requires a change management process that builds trust in the actions, inference mechanisms, and results provided by the deployed AI systems.

Although AI models are becoming increasingly precise, their "black-box character" still poses a major obstacle to practical acceptance and use: so far, they provide the experts with little explanation of how results and recommendations are reached. For a smooth deployment of AI systems and their acceptance by the experts, however, it is essential that results be comprehensible, in other words, explainable. Explainability is seen as a way to increase user confidence in the models. Nevertheless, explainability is not a clearly defined term: it encompasses many different dimensions and goals. The quality and adequacy of an explanation largely depend on the situational context of the decision and the characteristics of the user.

Findings from the cognitive sciences suggest that while promoting understanding of the internal mechanisms of ML models is very important for data scientists and engineers, it also imposes high cognitive load, sometimes to the point of overload. Post-hoc approaches to explanation rarely clarify how the model arrived at a conclusion, but they do provide useful information for the users and end customers of ML systems.

XAI 4.0 demonstrates numerous post-hoc machine learning explanations for industrial use cases, such as explanations based on local and global surrogate models, case-related visual explanations, and counterfactual explanations, with subject matter experts as the target audience.
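
To illustrate one of the listed techniques, here is a minimal sketch of a global surrogate explanation. It assumes a generic tabular use case; the data, feature names, and model choices are placeholders, not the project's deployed models.

```python
# Hypothetical global-surrogate sketch: an interpretable tree approximates
# an opaque model so its decision rules can be read by domain experts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # placeholder sensor readings
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # placeholder pass/fail label

# 1) Train the opaque "black-box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2) Fit an interpretable surrogate on the black-box predictions,
#    not on the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3) Report how faithfully the surrogate mimics the black box (fidelity)
#    and print its rules as a human-readable explanation.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"sensor_{i}" for i in range(4)]))
```

The fidelity score indicates how well the surrogate's rules reflect the black-box behaviour; local surrogates follow the same idea but are fitted only in the neighbourhood of a single case.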

Both model-agnostic and model-specific explanation approaches are presented, and hidden-layer activations in deep neural networks are used to gain insight into the decisions of deep learning systems.
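
As a rough illustration of inspecting hidden-layer activations, the following sketch captures the output of one layer with a forward hook; the network architecture and inputs are assumptions for demonstration, not the exhibited models.

```python
# Minimal sketch: capture hidden-layer activations of a small network
# so they can later be visualized or clustered for explanation.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 8), nn.ReLU(),   # hidden layer to inspect
    nn.Linear(8, 2),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on the second hidden layer's ReLU output (index 3).
model[3].register_forward_hook(save_activation("hidden_2"))

x = torch.randn(5, 16)   # placeholder batch of process-parameter vectors
_ = model(x)

# The captured activations show which internal features respond to an input.
print(activations["hidden_2"].shape)   # torch.Size([5, 8])
```
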

Contact

Prof. Dr. Peter Fettke | Nijat Mehdiyev
DFKI Research Department Institute for Information Systems (IWi)


Phone: +49 681 85775 3106