Understanding understandability of conceptual models: What are we actually talking about?

Constantin Houy, Peter Fettke, Peter Loos

In: Paolo Atzeni, David Cheung, Sudha Ram (Eds.). Conceptual Modeling – ER 2012. International Conference on Conceptual Modeling (ER 2012), October 15–18, Florence, Italy, pp. 64–77, Lecture Notes in Computer Science (LNCS) 7532, Springer, Berlin, 10/2012.


Investigating and improving the quality of conceptual models has gained tremendous importance in recent years. In general, model understandability is regarded as one of the most important model quality goals and criteria. A considerable number of empirical studies, especially experiments, have been conducted to investigate factors influencing the understandability of conceptual models. However, a thorough review and reconstruction of 42 experiments on conceptual model understandability conducted in this research shows that there is a variety of different understandings and conceptualizations of the term model understandability. As a consequence, this term remains ambiguous, and research results on model understandability are hardly comparable and partly imprecise, which shows the need to clarify what the conceptual modeling community is actually talking about when the term model understandability is used. To overcome this shortcoming, our research classifies the different observed dimensions of model understandability in a reference framework. Moreover, implications of the findings are presented and discussed, and some guidelines for future model understandability research are given.
