

Comparative Quantitative Evaluation of Distributed Methods for Explanation Generation and Validation of Floor Plan Recommendations

Christian Espinoza-Stapelfeld; Viktor Eisenstadt; Klaus-Dieter Althoff
In: Jaap van den Herik; Ana Paula Rocha (Eds.). Agents and Artificial Intelligence. International Conference on Agents and Artificial Intelligence (ICAART), Cham, Pages 46-63, ISBN 978-3-030-05453-3, Springer International Publishing, 2019.


In this work, we compare different explanation generation and validation methods for semantic search pattern-based retrieval results returned by a case-based framework that supports the early conceptual design phases in architecture. The compared methods include two case- and rule-based explanation engines; the third is a discriminant analysis-based method for predicting and estimating explanations and their validation. All of the explanation methods use the same data set for retrieval and for the subsequent explainability operations on the results. We describe the main structure of each method and evaluate their quantitative validation performance against each other. The goal of this work is to examine which method performs better under which circumstances and at which point in time, and how well a potential explanation and its validation can be predicted in general. To evaluate these issues, we compare not only the general performance, i.e., the average rate of valid explanations, but also how the validation rate changes over time, using a number of time steps for this comparison. We also show which methods perform better for which search pattern type.
