Nagging the Student into Knowledge: A Pedagogical Assistant for Reflective Learning

Fredrik Rutz and Jakob Tholander

Department of Computer and Systems Sciences
Stockholm University/Royal Institute of Technology
Electrum 230
164 40 Kista
Sweden
rutz@dsv.su.se, jakobth@dsv.su.se

1. Introduction

We discuss our work on designing a learning companion, or pedagogical assistant, for a modelling tool for information system design. The purpose of the assistant is to encourage reflective and critical thinking in the student. Information systems modelling is an open-ended task that admits multiple solutions, each of which may be appropriate depending on the perspective taken. This makes it hard to design a tool based on traditional Intelligent Tutoring Systems (ITS) techniques. Instead, we use insights from the discussion on situated learning and cognitive apprenticeship as a starting point for our design. We believe that this work, coupled with work on assistants, is a promising approach to the design of learning support.

2. Agents and Assistants

[Erickson, 1997] distinguishes between two different kinds of agents: adaptive agents and the agent metaphor. Adaptive agents may be intelligent, adaptive, and responsive; they may also be proactive and goal-driven. The agent metaphor has been shown in promotional videos such as Apple's "Knowledge Navigator" vision video (1987). Such agents suggest a certain style of interaction: natural language, voice recognition, social behaviour, and often anthropomorphisation.
Assistants, as the term is used today, correspond to Erickson's agent metaphor. One example is Microsoft Research's Lumiere project (http://www.research.microsoft.com/research/dtg/horvitz/lum.htm), which resulted in the help assistant in the Microsoft Office package. Another is Peedy (http://www.research.microsoft.com/research/vision/mturk/peedy.htm), a parrot which "understands" spoken commands about music and plays the music chosen by the user [Ball et al, 1997].

3. The role of reflection in learning

Theories of learning, especially situated perspectives and cognitive apprenticeship, emphasise that a central aspect of expertise is the use of metacognitive strategies and conceptual maps of problem-solving tasks [Collins, Brown and Newman 1989]. These strategies and maps are used to organise the execution of tasks, to relate new knowledge and feedback from mentors and teachers, and to guide further investigation into a knowledge domain. One way of fostering such skills is to encourage reflection on and articulation of one's own performance, solutions, and problems. Central to the cognitive apprenticeship model is the interaction between apprentices and mentors, or students and experts. By engaging in shared problem solving, the student receives scaffolding and general support, and can pick up tacit knowledge from the expert. This interaction helps students develop awareness of their own problem solving and relate it to expert performance.
An assistant could contribute to a learning scenario by providing a broader context for the learning situation. The student could have an assistant that collaborates with her and helps her reflect upon her task. [Reeves and Nass, 1996] argue that people attribute human emotions to technology, which changes the moods and expectations of the user, and this transfers to assistants as well. This has been shown in several studies where an assistant produced a positive change in user expectations, for instance with pedagogical assistants [Lester et al, 1997] and in poker playing [Koda and Maes, 1996].
How should an assistant act to encourage the student to reflect? [Ramberg and Karlgren, 1998] argue, convincingly, that the transition from novice to expert may in part be a transition from knowing abstract explanations to being involved in the language games of the area, i.e., being able to use the language of the domain. An assistant could encourage the student to use language to reflect upon and articulate her ongoing activities. A similar approach is used in [Goodman et al, 1998].

4. Scenario and design

The system will be based on Microsoft Agent technology (http://www.microsoft.com/workshop/imedia/agent/default.asp) and the Rose modelling tool from Rational (http://www.rational.com/index.jtmpl). While the student works on her model in Rose, the agent watches and observes, occasionally making a comment, asking a question, or offering a suggestion. A general outline of a student interaction might look like this:

The assistant asks why the student has created some particular objects or relations, and the student responds with her rationale for the chosen solution. Depending on the response, the assistant either probes further or lets the student continue. Some time later, the assistant proposes a change to the student's model and asks for her opinion about it; again, it either follows up or withdraws. Later still, the assistant shows a model or submodel created by somebody else and, depending on the student's response, once more follows up or withdraws. Finally, the student asks the assistant about something she does not understand, and the assistant answers with a general concept from modelling theory.
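
To make this control flow concrete, the following minimal sketch shows the basic observe-and-comment loop. It is written in Python for readability only; the prototype itself will be built on the COM-based Microsoft Agent interfaces, and every name here, from on_model_changed to the placeholder question list, is a hypothetical illustration rather than actual system code:

    import random

    # Placeholder utterances; the real assistant draws on the four
    # question categories described in the component list below.
    QUESTIONS = [
        "Why did you introduce this particular class?",
        "Could this relation be modelled in another way?",
        "How does this part relate to the domain description?",
    ]

    class Assistant:
        def __init__(self, talkative=0.3):
            # Probability of speaking up on any given edit; a stand-in
            # for the interruption component (component 4 below).
            self.talkative = talkative

        def on_model_changed(self, event):
            """Called by the modelling tool whenever the student edits her model."""
            if random.random() < self.talkative:
                question = random.choice(QUESTIONS)
                print(f"Assistant: {question} (triggered by: {event})")

    assistant = Assistant()
    assistant.on_model_changed("added class 'Customer'")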

The prototype will consist of the following functional components:

1 - A component for choosing which questions and statements to confront the student with. This component will choose from four categories: i) specific, Eliza-like questions about some of the student's constructs, ii) questions about general concepts and methods in modelling theory, iii) general questions about how the student intends to handle some part of the domain description, and iv) general comments to encourage reflection and critical thinking. The choice among these alternatives will be based on the level of the student's modelling skills, her level of activity, how far she has proceeded in the problem-solving process, and the quality of her solution (see the first sketch after this list).

2 - A component for assessing the quality of the student's solution. This component will use a database of previous solutions produced by experts and students, initially seeded with some preconstructed solutions. As students work, their solutions are graded and stored in the database. The grade will reflect the quality of the model and will initially be assigned by a human (see the second sketch after this list).

3 - A component for determining the student's knowledge level, based on a pre-test given when she starts to use the system. The pre-test sets the student's initial knowledge level, which is then adjusted according to how she performs in the exercises. This level is used to adapt the information the assistant gives when making thought-provoking statements.

4 - A component for deciding when to interrupt the student with questions and statements. This will be based on the student's level of activity, the type of activity she is engaged in, and the character of her solution. We will need to collect empirical data to tune these interruptions to an appropriate level.
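
To illustrate how components 1 and 4 might interact, this first sketch gives one possible set of selection and interruption rules in Python. The StudentState fields, the thresholds, and the ordering of the categories are all our own assumptions, to be revised once empirical data has been collected:

    from dataclasses import dataclass

    @dataclass
    class StudentState:
        skill: float     # 0..1, knowledge level (from component 3)
        activity: float  # 0..1, how intensively she is editing right now
        progress: float  # 0..1, how far into the exercise she is
        quality: float   # 0..1, grade of her current solution (from component 2)

    def should_interrupt(s: StudentState) -> bool:
        """Component 4: interrupt during a lull in activity, and only
        when the solution leaves something worth commenting on."""
        return s.activity < 0.4 and s.quality < 0.8

    def choose_category(s: StudentState) -> str:
        """Component 1: pick one of the four question/statement categories."""
        if s.quality < 0.4:
            return "i) Eliza-like question about a specific construct"
        if s.skill < 0.4:
            return "ii) question about general modelling-theory concepts"
        if s.progress < 0.5:
            return "iii) question about handling the domain description"
        return "iv) comment encouraging reflection and critical thinking"

    state = StudentState(skill=0.3, activity=0.2, progress=0.4, quality=0.55)
    if should_interrupt(state):
        print(choose_category(state))  # -> category ii)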
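
The skill and quality values consumed above would be produced by components 3 and 2 respectively. A second minimal sketch, again with hypothetical names and a simple moving-average update rule chosen purely for illustration:

    class SolutionDatabase:
        """Component 2: graded solutions, initially preconstructed; grades
        are assigned by a human and stored as the system is used."""
        def __init__(self):
            self.grades = {}  # solution identifier -> grade in 0..1

        def store(self, solution_id, grade):
            self.grades[solution_id] = grade

        def quality_of(self, solution_id, default=0.5):
            # Unknown solutions get a neutral default grade.
            return self.grades.get(solution_id, default)

    class KnowledgeLevel:
        """Component 3: initialised from the pre-test, then adjusted
        according to how the student performs in the exercises."""
        def __init__(self, pretest_score, rate=0.2):
            self.level = pretest_score
            self.rate = rate

        def update(self, exercise_grade):
            # Exponential moving average: recent performance gradually
            # overrides the pre-test estimate.
            self.level += self.rate * (exercise_grade - self.level)

    knowledge = KnowledgeLevel(pretest_score=0.4)
    knowledge.update(0.8)
    print(round(knowledge.level, 2))  # 0.48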

5. Discussion

A new approach is needed for implementing computer-based tutors. Traditional ITS techniques do not transfer well to open-ended tasks such as information systems modelling. Some have also argued that such systems take away tasks that are important for learners to carry out themselves, such as error analysis and self-critique. We have therefore designed a system that builds on cognitive apprenticeship and uses ITS techniques to encourage the user to reflect and think critically.

6. References

[Ball et al, 1997] Ball, G., Ling, D., et al. (1997). "Lifelike Computer Characters: The Persona Project at Microsoft Research." In Bradshaw, J. M. (Ed.), Software Agents. MIT Press.

[Collins, Brown and Newman 1989] Collins, A., Brown, J. S., & Newman, S. E. (1989). "Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics." In Resnick, L. B. (Ed.), Knowing, Learning, and Instruction. Hillsdale, NJ, Erlbaum.

[Erickson, 1997] Erickson, T. (1997). "Designing Agents as if People Mattered." In Bradshaw, J. M. (Ed.), Software Agents. MIT Press: 79-96.

[Goodman et al, 1998] Goodman, B., Soller, A., Linton, F., & Gaimari, R. (1998). "Encouraging Student Reflection and Articulation Using a Learning Companion." International Journal of Artificial Intelligence in Education (9).

[Koda and Maes, 1996] Koda, T., & Maes, P. (1996). "Agents with Faces: The Effects of Personification of Agents." Proceedings of HCI '96, London, UK, The British HCI Group.

[Lester et al, 1997] Lester, J., et al. (1997). "The Persona Effect: Affective Impact of Animated Pedagogical Agents." Proceedings of CHI '97, Atlanta, GA, ACM Press.

[Ramberg and Karlgren, 1998] Ramberg, R., & Karlgren, K. (1998). "Fostering Superficiality in Learning." Journal of Computer Assisted Learning (14): 120-129.

[Reeves and Nass, 1996] Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge, Cambridge University Press.