Talking to the Swedish Chef: Social Interactions in Recommender Systems
Jarmo Laaksolahti & Annika Waern
We explore the possibility of using synthetic characters as a means of providing inspection and control over user models. We discuss two aspects of this: the use of emotional (facial) expressions to convey the system's certainty in its predictions, and the use of cultural or social characters to convey areas of interest. We present an imaginary example of the approach: a recommender system for recipes in which users interact with chefs from different cultural backgrounds.
Inspecting User Models
Designers of user-adaptive systems face the problem of visualizing the system's assumptions about its users (Höök 1996). The problem is usually addressed by adding a separate interface component in which users can inspect and modify the user model. This is, however, not an optimal solution. Firstly, inspecting and modifying the user model is not part of the user's primary task, and users will for this reason resent doing it. Secondly, this way of interacting with the adaptivity has no natural counterpart, and is for that reason difficult to grasp.
An alternative approach has been to avoid presenting the system's assumptions entirely, and to rely on monitoring the user's actions to provide the information required for setting up the user model. This approach can only support very weak user modelling, as the information acquired this way is often of low quality with respect to what the system is aiming to model (Waern 1996).
However, human interaction is fundamentally social and adaptive. Reeves and Nass (1996) have shown that the social aspects of human-human interaction easily carry over to human-system interaction: the computer is seldom seen as a pure tool. Possibly, the aspects of user modelling that a system needs to negotiate with the user can be given a natural metaphor by recasting them as aspects of social interaction.
Using Facial Expressions for UM Introspection
The use of social interaction for user model visualization is not entirely new. In early work by Kozierok and Maes (1993), a mail filtering agent was introduced that used facial expressions to provide feedback on how well the system was predicting the user's actions. The system displayed a cartoon face with a number of facial expressions available. If the filter algorithm could not determine how to filter a particular mail, the face showed a confused expression. If it had a suggestion, it showed a happy face with a lightbulb. If the suggestion was confirmed by the user, the face changed to a gratified expression, signifying that the learning algorithm reinforced the connections that had led to the suggestion. If, on the other hand, the suggestion was rejected, the expression changed to confused (and the corresponding connections were weakened in the user model). In this way, some internals of the user modelling algorithm were made explicit to the user through a natural metaphor.
Note that the system presented by Kozierok and Maes was limited to modelling system certainty. It is not obvious that the system-user interactions were improved by this introspection model, since the user could not act on the information. There was no way for the user to respond to the confused expression by saying something like 'well, I think you are doing well overall anyway' (which could lead to a smaller change in the algorithm). It is far more interesting to give cues to the actual properties that the system believes the user to have, as these are properties the user will want to affect. The mail system was limited to acquiring this feedback implicitly from user interactions.
Modelling User Preferences
The mail filter agent was an example of an adaptive system that used preference/interest modelling. User models can be used to capture many different aspects of users, such as their competencies, their current task, or their cognitive abilities. The currently most successful commercial systems model user interests and preferences, often by learning from the collective behavior of users with similar interests; these are so-called recommender systems (ACM 1997). In this article, we limit ourselves to discussing the modelling of user interests and preferences.
A key to social models of user preferences can be found by observing a problem with current recommender systems: they tend to become rather conservative. Since the recommendations are based on previous choices by yourself or by users similar to yourself, you end up with recommendations that are very similar to what was recommended before. There are few surprises or adventures to be found. In social life, we do not only seek to interact with those who are similar to ourselves. Rather, we often enjoy the recommendations given by people who are different from us, as a way of exploring a new area of interest. One way to utilize this in human-computer interaction is to provide recommendations through various characters that iconify different areas of interest, cultures, or fields of expertise.
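The shift described here can be made concrete with a minimal sketch: recipes are ranked against a chosen character's preference profile rather than against the user's own history. The characters, tags, and weights below are invented purely for illustration.

```python
# Character-based recommendation sketch: each character carries its own
# preference profile (weights over recipe tags), and the user browses
# the domain through that character's eyes. All data is illustrative.

def score(profile, item_tags):
    """Sum the character's preference weights over an item's tags."""
    return sum(profile.get(tag, 0.0) for tag in item_tags)

def recommend(profile, items, top_n=2):
    """Rank items by the chosen character's profile, not the user's own."""
    ranked = sorted(items, key=lambda it: score(profile, it["tags"]),
                    reverse=True)
    return [it["name"] for it in ranked[:top_n]]

characters = {
    "swedish_chef": {"fish": 0.9, "potato": 0.7, "dill": 0.8},
    "italian_chef": {"pasta": 0.9, "tomato": 0.8, "basil": 0.7},
}
recipes = [
    {"name": "gravlax", "tags": ["fish", "dill"]},
    {"name": "spaghetti al pomodoro", "tags": ["pasta", "tomato"]},
    {"name": "janssons frestelse", "tags": ["potato", "fish"]},
]

print(recommend(characters["swedish_chef"], recipes))
```

Switching characters changes the ranking wholesale, which is exactly the source of surprise the similarity-based approach lacks: the user is exposed to a coherent foreign taste rather than a smoothed average of their own past.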
The use of characters to signal areas of interest thus means that the system does not describe the user's preferences, but instead describes the preferences of a counterpart that the user may or may not select to interact with. This agent can in turn reflect upon properties of the user, commenting on similarities and differences between the user's domain of interest and that of the character.
The Chef Community: an Example Vision
An area where people usually have vast differences in preference, and where recommendations provided by different characters could inspire users to explore, is cooking. It is an incredibly feature-rich domain that holds large amounts of information. The looks and appearance of a chef, as well as the tools and ingredients he or she uses, give away many clues as to what kinds of dishes are available. We consider a design for a recommender system for recipes that utilizes this rich information.
The system is envisioned as a set of restaurants, situated on a busy street, which users can visit. Each restaurant has its own chef, equipped with a distinct personality that captures his or her culture in caricature. The chef also has strong opinions about how food should look and taste. To get a recommendation, users enter a restaurant of their liking and talk to the chef. The chef and the user engage in an interaction that may result in a recipe recommendation, or in the chef suggesting another restaurant. Since people usually do not want to eat the same thing every day, new restaurants and new chefs can be visited every day. Communities of recommendation-giving characters organized in this way give users ample opportunity for surprises and adventures. Users do not only get recommendations for food they already know they like, but can be inspired to try out new dishes or even be introduced to entirely new culinary cultures.
To capture individual differences in preferences, allergies, and so on, the approach should be integrated with a more standard preference model for the user. Since the user interacts with chefs who have models of their own, this user model can be maintained by both implicit and explicit means. When the user selects recipes suggested by a particular chef, this indicates not only that the recipe itself is interesting to the user, but also that the preferences in the chef's model are to some extent in accordance with the user's. This provides additional information for implicit user modelling. Furthermore, the interaction between the chef and the user provides explicit information about user preferences in a way that is integrated with the user's main task, that of selecting recipes.
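One possible reading of this implicit-update idea is sketched below: accepting a recipe from a chef is strong evidence for the recipe's own tags and weaker evidence for the chef's broader profile. The update rule, the learning rates, and all data are assumptions made for illustration, not a specification of the envisioned system.

```python
# Sketch of implicit user modelling in the chef scenario: an accepted
# recommendation nudges the user's standard preference model both toward
# the recipe's tags (strongly) and toward the recommending chef's overall
# profile (weakly). Rates and update rule are illustrative assumptions.

def accept_recommendation(user_model, chef_profile, recipe_tags,
                          tag_rate=0.3, profile_rate=0.1):
    """Update the user's preference model after an accepted suggestion."""
    # direct evidence: the chosen recipe's own tags
    for tag in recipe_tags:
        old = user_model.get(tag, 0.0)
        user_model[tag] = old + tag_rate * (1.0 - old)
    # weaker evidence: the chef's overall preferences, since accepting
    # the suggestion means the chef's taste partly matches the user's
    for tag, weight in chef_profile.items():
        old = user_model.get(tag, 0.0)
        if weight > old:
            user_model[tag] = old + profile_rate * (weight - old)
    return user_model

user = {}
swedish_chef = {"fish": 0.9, "dill": 0.8}
accept_recommendation(user, swedish_chef, ["fish"])
# The user's model now reflects both the accepted recipe ("fish")
# and, more faintly, the rest of the chef's profile ("dill").
```

The design choice worth noting is the two-tier update: the chef's profile acts as a prior that leaks into the user model only when the user's acceptance warrants it, which is how the character's model and the standard preference model stay integrated.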
This way of conveying and maintaining user model information should be evaluated in comparison with a more standard approach. It is easy to envision a design of the chef scenario that does not involve characters at all, but that aims to convey the same kind of information about user preferences through headers, windows, and the like. We would like to construct an experiment in which these two approaches are compared. In particular, we would like to investigate how the selected design affects users' trust in the system, their ability to predict the system's suggestions, and their willingness to engage in a dialogue that is sufficiently rich to acquire proper user model characteristics.
References
Höök, Kristina. 1996. A Glass-Box Approach to Adaptive Hypermedia. Ph.D. thesis, Dept. of Computer and Systems Sciences, Stockholm University.
Reeves, Byron, and Nass, Clifford. 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
Kozierok, R., and Maes, P. 1993. A learning interface agent for scheduling meetings. In Proceedings of the ACM SIGCHI International Workshop on Intelligent User Interfaces, 81-88. Orlando, Florida: ACM Press.
Waern, Annika. 1996. Recognition for a Purpose: Issues for Plan Recognition in Human-Computer Interaction. Ph.D. thesis, Dept. of Computer and Systems Sciences, Royal Institute of Technology.
ACM. 1997. Communications of the ACM 40(3): special issue on recommender systems.