

Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring

Nijat Mehdiyev; Peter Fettke
In: Witold Pedrycz; Shyi-Ming Chen (Eds.). Interpretable Artificial Intelligence: A Perspective of Granular Computing. Chapter 1, Pages 1-28, Vol. 937, ISBN 9783030649487, Springer, 2021.


Contemporary process-aware information systems can record the activities generated during process execution. To leverage these fine-grained, process-specific data, process mining has recently emerged as a promising research discipline. As an important branch of process mining, predictive business process management aims to generate forward-looking, predictive insights that shape business processes. In this study, we propose a conceptual framework that seeks to establish and promote an understanding of the decision-making environment, the underlying business processes, and the nature of user characteristics when developing explainable business process prediction solutions. Building on the theoretical and practical implications of this framework, the study then proposes a novel local post-hoc explanation approach for a deep learning classifier that is expected to help domain experts justify the model's decisions. In contrast to popular perturbation-based local explanation approaches, this study defines local regions from the validation dataset using the intermediate latent space representations learned by the deep neural network. To validate the applicability of the proposed explanation method, real-life process log data delivered by Volvo IT Belgium's incident management system are used. The adopted deep learning classifier achieves good performance, with an Area Under the ROC Curve of 0.94. The generated local explanations are also visualized and presented with relevant evaluation measures that are expected to increase users' trust in the black-box model.
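The core idea of the explanation approach, defining a local region in the network's latent space rather than by perturbing inputs, can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch, not the chapter's actual implementation: the data, the random projection standing in for a learned latent layer, the black-box scores, and the least-squares linear surrogate are all synthetic assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 200 process instances with 5 encoded features.
X = rng.normal(size=(200, 5))

# Stand-in for an intermediate layer of a trained deep classifier:
# a fixed random projection followed by a nonlinearity.
W = rng.normal(size=(5, 3))
latent = np.tanh(X @ W)

# Stand-in for the black-box model's predicted scores.
black_box_scores = 1.0 / (1.0 + np.exp(-latent.sum(axis=1)))

def local_surrogate(x_idx, k=30):
    """Explain one instance: select its k nearest neighbours in LATENT
    space (not input space) as the local region, then fit an interpretable
    linear surrogate to the black-box scores on that region."""
    dists = np.linalg.norm(latent - latent[x_idx], axis=1)
    region = np.argsort(dists)[:k]            # local region via latent distances
    A = np.c_[X[region], np.ones(k)]          # design matrix with intercept
    coefs, *_ = np.linalg.lstsq(A, black_box_scores[region], rcond=None)
    return coefs[:-1]                         # one local weight per input feature

weights = local_surrogate(0)
print(weights.shape)
```

The point of the sketch is the neighbourhood definition: because distances are computed on representations the network itself has learned, the local region groups instances the model treats similarly, which perturbation-based methods that sample around the input cannot guarantee.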