Towards Using Case-Based Explanations as a Knowledge Foundation

Jakob Michael Schönborn, Klaus-Dieter Althoff

In: Rainer Gemulla, Simone Ponzetto, Christian Bizer, Margret Keuper, Heiner Stuckenschmidt (Eds.). LWDA 2018 - Lernen, Wissen, Daten, Analysen - Workshop Proceedings. GI-Workshop-Tage "Lernen, Wissen, Daten, Analysen" (LWDA-2018), August 22-24, Mannheim, Germany, Universität Mannheim, 8/2018.


Due to the GDPR, the need for explanation-aware systems is rising. Including a component that can explain the decisions made by a given system is often not feasible, or at least requires considerable effort. Moreover, users tend to view decisions made by artificial intelligence agents with scepticism rather than acceptance. Therefore, a plausible explanation has to be generated for each decision made, so that the user can develop trust in the decision-making process. This is important for knowledge management as well, since knowledge also needs to be trusted - otherwise the knowledge would not be reused and would therefore be without value. Building an explanation-aware system should prevent this. To guarantee an improvement in value, incoming user input needs to be sanitized before it is stored in the case base. The process of how knowledge can be extracted and subsequently used and trusted will be investigated further. The future aim is to build a distributed case-based reasoning system that explains its own construction process, so that a knowledge engineer can guide how the system is built up and adjust it to his needs.

Further Links

paper_30.pdf (PDF, 987 KB)

Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence)