Automated Assessment of Process Modeling Exams: Basic Ideas and Prototypical Implementation
Tom Thaler; Constantin Houy; Peter Fettke; Peter Loos
In: Stefanie Betz; Ulrich Reimer (Eds.). Modellierung 2016 - Workshopband. Workshop zur Modellierung in der Hochschullehre (MoHoL-2016), located at Modellierung 2016, March 2, Pages 63-70, Lecture Notes in Informatics (LNI), Vol. 255, ISBN 978-3-88579-649-7, Gesellschaft für Informatik (GI), Bonn, 3/2016.
The assessment of process modeling exercises and exams is a time-consuming and complex task. It is desirable to give each student detailed feedback on their solution in terms of syntactic, semantic, and pragmatic quality. Particularly in mass courses with several hundred participants, the individual grading of modeling exams by humans is challenging: besides reliability, consistency, and validity, the efficiency of the grading process must be guaranteed. Against this background, this paper develops first ideas for an automated assessment of process modeling exams. The objective is to improve modeling education in order to teach students not only to model correctly but also to develop good models. Our ideas were prototypically implemented and applied in an exemplary scenario with promising results. It was possible to identify important limitations but also to derive reliable semi-automated approaches for the assessment of process modeling exams.