DFKI-LT - Dissertation Series, Vol. XXXIV
Mark Buckley: Modelling solution step discussions in tutorial dialogue
price: € 16
This thesis is concerned with intelligent tutoring systems with natural language interfaces, which combine research from the fields of dialogue modelling and pedagogical science. Their development is motivated by the discovery that the learning gains achieved by human one-on-one tutoring are much higher than those achieved by traditional classroom instruction. The mechanisms which have been proposed as the cause of this difference include deep explanatory questioning by the tutor and self-explanation by the student, both of which target the student's deep understanding of domain concepts. Such interactions have been found to exhibit recurring dialogue patterns known as dialogue frames.
A further pattern in conversational communication is that of grounding, the process by which conversational partners reach mutual understanding and repair communication errors. The store of their mutual beliefs which is built up through grounding is known as the common ground. Just as the tutor requires the student to show evidence of understanding of domain concepts, so too does the speaker expect the hearer to indicate in the course of grounding that the utterance content has been understood. Our research will use concepts from grounding and the structure of dialogue frames to investigate the tutor's choice of whether to accept or reject solution steps or to ask explanatory questions which request evidence of understanding.
We consider two hypotheses. First, by modelling the structure of discussions of solution steps, we can maintain a representation of what the student has shown evidence of understanding, based on the outcomes of previous such discussions. Second, knowing that discussion elicits self-explanation, we can use this representation to predict whether to enter into a discussion of the latest contribution, using only features drawn from the previous dialogue and from the analysis of the steps the student has just contributed. The computational model we propose to investigate these questions accounts for subdialogues which are solution step discussions, and is based on a concept we call task-level grounding: the process by which contributions to the current solution are proposed, possibly discussed, and finally accepted or rejected. The object of this process is the student's deep understanding of domain concepts.
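The bookkeeping that task-level grounding requires can be pictured as a small state tracker: each solution step moves from proposed, through optional discussion, to accepted or rejected, and acceptance backed by evidence of understanding adds the step's concepts to the record of what the student has demonstrably understood. The following is a minimal illustrative sketch, not the thesis's actual implementation; all class, method, and field names are hypothetical.

```python
from enum import Enum

class StepStatus(Enum):
    PROPOSED = "proposed"
    UNDER_DISCUSSION = "under_discussion"
    ACCEPTED = "accepted"
    REJECTED = "rejected"

class TaskLevelGrounding:
    """Hypothetical tracker for the grounding status of solution steps
    and the concepts the student has shown evidence of understanding."""

    def __init__(self):
        # step id -> (status, concepts the step relies on)
        self.steps = {}
        # concepts for which the student has shown evidence of understanding
        self.grounded_concepts = set()

    def propose(self, step_id, concepts):
        self.steps[step_id] = (StepStatus.PROPOSED, frozenset(concepts))

    def open_discussion(self, step_id):
        # The tutor requests evidence of understanding: a subdialogue begins.
        _, concepts = self.steps[step_id]
        self.steps[step_id] = (StepStatus.UNDER_DISCUSSION, concepts)

    def accept(self, step_id, evidence_shown=False):
        _, concepts = self.steps[step_id]
        self.steps[step_id] = (StepStatus.ACCEPTED, concepts)
        if evidence_shown:
            # Outcomes of the discussion update the record of understanding.
            self.grounded_concepts |= concepts

    def reject(self, step_id):
        _, concepts = self.steps[step_id]
        self.steps[step_id] = (StepStatus.REJECTED, concepts)
```

For example, a step relying on a not-yet-grounded concept might be proposed, discussed, and then accepted with evidence shown, after which that concept counts as grounded for later predictions.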
We evaluate the model in two stages. First, we train a classifier on annotated data to predict whether or not to request evidence of understanding from the student, thereby entering into a solution step discussion. We find that the classifier takes advantage of the information maintained in our task-level grounding model, which confirms our first hypothesis. Although it performs well for the acceptance and rejection of steps, its performance on the decision of whether to request evidence of understanding is low. This result is mitigated by the second evaluation, in which human experts rate the model's output in context. The model's output is rated as similarly appropriate to the tutor moves in the corpus, which confirms our second hypothesis. We also find that detecting when to request evidence of understanding is difficult even for human experts.
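The prediction task described above can be sketched as a mapping from features of the previous dialogue and the latest step to one of three tutor moves: accept, reject, or request evidence of understanding. The sketch below substitutes a hand-written rule for the trained classifier purely to make the decision's inputs and outputs concrete; the feature names and thresholds are hypothetical assumptions, not taken from the thesis.

```python
def extract_features(grounded_concepts, step):
    """Hypothetical features drawn from the grounding state and from
    the analysis of the step the student just contributed.

    grounded_concepts: set of concepts already grounded in prior discussion
    step: dict with keys "correct" (bool) and "concepts" (iterable)
    """
    return {
        "step_correct": step["correct"],
        # Concepts the step relies on that lack evidence of understanding.
        "concepts_ungrounded": len(set(step["concepts"]) - grounded_concepts),
    }

def predict_tutor_move(features):
    """Rule-based stand-in for the trained classifier: reject incorrect
    steps; request evidence when the step relies on concepts the student
    has not yet shown understanding of; otherwise accept."""
    if not features["step_correct"]:
        return "reject"
    if features["concepts_ungrounded"] > 0:
        return "request_evidence"
    return "accept"
```

A trained classifier would replace the rule body with a learned decision over many such features, but the interface, dialogue-derived features in and a tutor move out, is the same.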