How to Consider Personality Factors in Simulating a ‘Reasonable’ and ‘Natural’ Behaviour of Agents?

Fiorella de Rosis,

Intelligent Interfaces

Department of Informatics

University of Bari

It is becoming a generally shared opinion that personality affects human-computer interaction in two directions: computers show a personality in their style of communication, which is perceived by the user and can influence usability according to whether it is similar or complementary to the user’s personality. Studies have also shown that ‘even the most superficial manipulations of the interface are sufficient to exhibit personality, with powerful effects’ (Nass et al, 1995); these effects will grow considerably with the diffusion of agent-based interaction.

The aspect that has been investigated most frequently, among the five orthogonal factors of personality (the ‘Big Five’ structure), is the ‘extraversion’ (dominance/submissiveness) dimension of interpersonal behaviour (Nass et al, 1995; Dryer, 1998; Ball and Breese, 1998). Other ‘extrarational’ attitudes that have been shown to affect communication are humour, flattery, blaming and politeness (Fogg and Nass, 1997; Moon and Nass, 1998; Nass, Moon and Carney, in press). The prevailing interest in these aspects of graphical or agent-based interfaces concerns their ‘observable expression’, that is, the way personality traits and emotions manifest themselves in a natural or artificial agent: choice of wording and speech characteristics in natural language messages, facial expression and body gestures or movements (Dryer, 1998; Breese and Ball, 1998, and many more contributions in the same Proceedings).

The objective of these projects is, on the one side, to generate personality-rich behaviours in life-like agents and, on the other, to recognise similar behaviours in other agents, such as the user. The behaviour of personality-rich agents is programmed by defining ‘activation rules’, either in logical form (Binsted, 1998) or under conditions of uncertainty (Ball and Breese, 1998); these rules define how agents react to the context and/or to an internal emotional or personality state by showing some form of behaviour.

Personality, though, is not only a question of ‘style of communication’: it ‘represents those characteristics of the person that account for consistent patterns of feeling, thinking and behaviour’ (Nass et al, 1995). In the first study on this subject, personality traits were represented as combinations of degrees of importance assigned to goals (Carbonell, 1980); subsequently, they were seen as dichotomous attributes that trigger reasoning rules (see the definition of ‘sincere’ and ‘helpful’ in Cohen and Levesque, 1990) or as numerical combinations of degrees of attributes (as in the definition of ‘thrill seeking’ in Wilson, 1997). The aspect of personality-rich agents on which we focused our attempts at formalisation, in particular, is ‘thinking’: in trying to build a cognitive model of personality-rich agents, we assume that the agents themselves are represented by a BDI architecture, and study which aspects of their mental state and reasoning process can be varied according to personality. In this paper, we propose a preliminary solution to these questions that comes from our experience in two different ongoing projects.
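The first of the representations mentioned above (traits as degrees of importance assigned to goals) can be illustrated with a minimal sketch; the trait and goal names below are invented for the example and are not taken from Carbonell’s model:

```python
# Illustrative sketch: a personality trait as a weighting over generic goals.
# Trait and goal names are hypothetical, chosen only for the example.

def trait_as_goal_weights(trait):
    """Map a personality trait to degrees of importance of goals (0..1)."""
    profiles = {
        # a 'thrill seeking' agent values excitement over safety
        "thrill_seeking": {"preserve_safety": 0.2, "seek_excitement": 0.9},
        # a cautious agent reverses that ordering
        "cautious":       {"preserve_safety": 0.9, "seek_excitement": 0.1},
    }
    return profiles[trait]

def preferred_goal(trait):
    """Return the goal to which the trait assigns the highest importance."""
    weights = trait_as_goal_weights(trait)
    return max(weights, key=weights.get)
```

Under this view, two agents with the same goal set but different traits pursue different goals first, which is exactly the ‘consistent pattern’ a trait is meant to capture.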

1. The GOLEM Project (in cooperation with C Castelfranchi, R Falcone and S Pizzutilo).

Dominance has been defined as ‘a disposition towards controlling or being controlled by others’ (Breese and Ball, 1998). Dominant individuals are seen as ‘able to give orders, talk others into doing what they want and often assuming responsibility’ (Fogg and Nass, 1995). This explains why dominance is considered, at present, the most relevant personality factor in human-computer interaction, especially in systems aimed at facilitating the user’s performance of some given task: for instance, animated presenters, pedagogical agents and personal service assistants (Lester et al, 1998; André et al, 1993; Arafa et al, 1998). In these systems, agents are given a generically or precisely defined task, which they have to perform with some degree of autonomy.

We started from the theory of autonomy defined by Castelfranchi and Falcone (1996) to investigate how levels and types of delegation and help can be formalised in terms of ‘personality traits’; we then simulated the interaction between two agents, both endowed with a delegation and a help trait, to investigate the consequences of various combinations of these traits on the performance of tasks. Agents in GOLEM are logical programs; their mental state includes a set of reasoning rules (which link first- and second-order beliefs and goals) and basic beliefs (ground formulae). Personality influences the mental state in that some of the reasoning rules are personality-dependent. Let us see some examples:

delegation attitudes:

a lazy agent:

always delegates tasks if there is another agent who is able to take care of them:

(Lazy Ai) -> (Forall a Forall g ((Goal Ai (T g)) and (Goal Ai (Evdonefor a g))) -> (Exists Aj (Bel Ai (Cnd Aj a)) -> (Goal Ai (IntToDo Aj a))));

it acts by itself only when there is no alternative:

(Lazy Ai) -> (Forall a Forall g ((Goal Ai (T g)) and (Goal Ai (Evdonefor a g))) -> ((not Exists Aj (Bel Ai (Cnd Aj a))) and (Bel Ai (Cnd Ai a)) -> (Bel Ai (IntToDo Ai a))));

and renounces if it believes that nobody can do the action:

(Lazy Ai) -> (Forall a Forall g ((Goal Ai (T g)) and (Goal Ai (Evdonefor a g))) -> ((not Exists Aj (Bel Ai (Cnd Aj a))) and (Bel Ai not (Cnd Ai a)) -> (Bel Ai (CurrentlyUnachievable a g)))).

a hanger-on tends never to act by itself;

a delegating-if-needed asks for help only if it is not able to do the task by itself;

a never-delegating considers that tasks should only be achieved if it can perform them.
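The three ‘lazy’ rules above can also be read procedurally. The following sketch is an illustrative approximation, not the GOLEM logical program; the function and argument names are ours:

```python
# Illustrative sketch (names invented): the three rules of a 'lazy' agent,
# with the agent's beliefs about who can perform an action held in a dict.

def lazy_decide(self_name, action, can_do):
    """can_do maps agent name -> whether it is believed able to do `action`.
    Returns the lazy agent's resulting attitude towards `action`."""
    others = [a for a, able in can_do.items() if able and a != self_name]
    if others:
        # rule 1: delegate whenever some other agent is believed able
        return ("delegate", others[0])
    if can_do.get(self_name):
        # rule 2: act by itself only when there is no alternative
        return ("intend_to_do", self_name)
    # rule 3: renounce, believing the action currently unachievable
    return ("currently_unachievable", None)
```

The other delegation traits (hanger-on, delegating-if-needed, never-delegating) would differ only in the ordering and applicability of these branches.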

helping attitudes: a hyper-cooperative always helps if it can;

a benevolent agent:

first checks that the other agent could not do the action by itself:

(Benevolent Ai) -> (Forall a Forall Aj ((Bel Ai (Goal Aj (IntToDo Ai a))) and (Bel Ai (Cnd Ai a)) and not Exists g ((Goal Ai (T g)) and (Bel Ai (Conflict a g))) -> (Bel Ai (IntToDo Ai a))));

otherwise, it refuses:

(Benevolent Ai) -> (Forall a Forall Aj ((Bel Ai (Goal Aj (IntToDo Ai a))) and ((Bel Ai not (Cnd Ai a)) or Exists g ((Goal Ai (T g)) and (Bel Ai (Conflict a g)))) -> (Bel Ai not (IntToDo Ai a)))).

a supplier first checks that the request does not conflict with its goals;

a selfish helps only when the requested action achieves its own goals;

a non-helper never helps, on principle;

helping levels: a literal helper restricts itself to considering whether to perform the requested action;

an overhelper goes beyond this request, hypothesising the delegating agent’s higher-order goals, and helps accordingly;

a subhelper performs only a subset of the requested plan (for instance, the subset it is able to perform);

a critical helper modifies the delegated plan, combining these attitudes: taking the request literally, going beyond it, or responding to it only partially.

The way delegation and help traits are combined is defined so as to ensure that each agent has a plausible (from the cognitive viewpoint) mental state. This means building agents through multiple inheritance of personality-trait-based, compatible stereotypes, namely agents whose mental state is a combination of a set of general and a set of trait-specific reasoning rules, in addition to a set of basic beliefs. As we showed in the previous examples, reasoning rules include, among their belief and goal atoms, hypotheses about the other agent’s mental state: for instance, whether it is able to perform some action, whether it intends to perform it, and so on. Agents in GOLEM are able to perform several forms of reasoning: domain planning and plan recognition, goal-driven inference, cognitive diagnosis of the other agent’s mental state, and so on. Some of these forms of reasoning are common to all agents; others depend on their personality. For instance: an overhelper needs to be able to recognise the other agent’s goals and plans in order to help it effectively, while a literal helper does not;

a supplier has to be able to examine its own plans and goals, to check whether conflicts with the other agent exist, whereas a hyper-cooperative does not need to perform this type of reasoning;

a deep-conflict-checker needs to perform some plan recognition on the other agent’s mind, followed by a refined analysis of conflicts, while a surface-conflict-checker only needs to check conflicts between its own goals and the state that would be reached with the requested action, and so on.
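The idea of building an agent through multiple inheritance of compatible, trait-specific stereotypes can be sketched as follows; the class and rule names are invented for illustration and do not reproduce GOLEM’s logic programs:

```python
# Illustrative sketch: an agent's rule set as the union of the rules of all
# the stereotypes it inherits from. Names are hypothetical.

class Agent:
    rules = {"common_reasoning"}          # rules shared by every agent

    def all_rules(self):
        # Plain attribute lookup would shadow `rules`; instead we walk the
        # method resolution order and collect every stereotype's rule set.
        out = set()
        for cls in type(self).__mro__:
            out |= getattr(cls, "rules", set())
        return out

class Lazy(Agent):                         # a delegation-trait stereotype
    rules = {"delegate_if_possible"}

class Benevolent(Agent):                   # a helping-trait stereotype
    rules = {"help_unless_conflicting"}

class LazyBenevolent(Lazy, Benevolent):
    """One plausible combination of a delegation and a helping trait."""
```

A combination is admitted only when the inherited rule sets are mutually compatible, which mirrors the paper’s requirement that each agent’s mental state remain cognitively plausible.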

2. The XANTHIPPE Project (in cooperation with C Castelfranchi, F Grasso and I Poggi).

The medical domain is one of those in which social roles, personality and emotions especially affect interaction between agents: doctor-to-patient, doctor-to-colleague and doctor-to-nurse communication is strongly influenced by these factors, and flattery, blaming, politeness and various forms of insincerity play a crucial role in it. We therefore took this application field as the one in which to examine how dialogues between agents can be simulated while preserving at least part of the ‘believability’ of naturally occurring conversations. A preliminary analysis of a corpus of transcripts showed us a number of cases in which the reasoning process that guides the dialogue could not be seen as a pre-defined sequence of steps, but strongly depended on the mentioned factors (participants’ personality, roles and emotional state). In simulating conflict-resolution dialogues in XANTHIPPE, we then assumed that the course of a conversation is a consequence of the reasoning strategies adopted by the two interlocutors and that these depend, in their turn, on personality factors. We defined two types of personality traits:

traits that affect the agent’s mental state in a way similar to GOLEM, that is, by introducing personality-based rules into the mental state of each agent (though the traits considered are different). For instance:

an anxious agent tends to avoid performing actions that might have negative consequences (such as taking drugs whose side effects are serious);

a conservative agent tends to have moral or psychological biases towards some forms of contraception,

... and so on.

traits that affect the reasoning style; these are defined, again, differently from those in GOLEM, as they are typical of conflict-resolution dialogues; one might find, however, some similarities between the traits introduced in the two systems. Some examples of personality traits in XANTHIPPE:

an altruistic agent systematically considers the other agent’s viewpoint before taking any decision;

a persistent tends to try to convince the other agent to change its mind when a divergence of beliefs is discovered;

a defensive tends to select evasion and reticence as answer types in the case of ‘difficult’ questions;

a non-polemic tends to avoid noticing lies, evasion or reticence,

...and so on.
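To make the second type of trait concrete, here is a hypothetical sketch of how a reasoning-style trait might bias the answer an agent selects; the trait names come from the list above, but the selection logic is our own illustration, not the XANTHIPPE implementation:

```python
# Illustrative sketch: a reasoning-style trait biasing answer selection.
# Trait names are from the text; the rules themselves are invented.

def select_answer(trait, question_is_difficult):
    """Return the type of answer an agent with `trait` tends to give."""
    if question_is_difficult and trait == "defensive":
        # a defensive agent prefers evasion/reticence on difficult questions
        return "reticent"
    if trait == "altruistic":
        # an altruistic agent reasons about the interlocutor's viewpoint first
        return "considers_other_viewpoint_first"
    return "direct"
```

In the full system such a choice would be one step of a conflict-resolution strategy, not an isolated lookup; the sketch shows only how the trait enters the decision.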
In analysing our corpus of conversations between doctors and patients in various contexts, we noticed, in particular, that both interlocutors resorted to various forms of deception in their behaviour; we took this finding as evidence of the need to relax the assumption of ‘sincere assertion’ that is typical of the majority of multiagent worlds, if more ‘natural’ dialogues are to be simulated. We are at present investigating how the decision to deceive and the discovery of a deception can be simulated by representing the two agents’ mental states as a set of belief networks and by endowing them with the ability to apply several forms of uncertainty-based reasoning to these networks.
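A minimal example of the kind of uncertainty-based reasoning involved, assuming a single ‘deception’ variable and one observable item of evidence (all probabilities are invented for the sketch; a real belief network would have many more nodes):

```python
# Illustrative two-node belief network: agent Ai updates its belief that Aj
# is deceiving after observing an inconsistency between Aj's assertion and
# Ai's other evidence. Exact Bayes rule by enumeration; numbers invented.

P_DECEIVE = 0.1                       # prior P(Aj deceives)
P_INCONSISTENT = {True: 0.8,          # P(inconsistency | deceiving)
                  False: 0.05}        # P(inconsistency | sincere)

def posterior_deceiving(inconsistency_observed):
    """P(Aj deceives | evidence), by enumerating both hypotheses."""
    joint = {}
    for deceiving in (True, False):
        prior = P_DECEIVE if deceiving else 1 - P_DECEIVE
        likelihood = (P_INCONSISTENT[deceiving] if inconsistency_observed
                      else 1 - P_INCONSISTENT[deceiving])
        joint[deceiving] = prior * likelihood
    return joint[True] / (joint[True] + joint[False])
```

With these figures, observing an inconsistency raises the belief in deception well above the prior, while observing none lowers it; the same update scheme extends to networks over whole mental states.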

3. Perspectives

The high-level goal of the two projects described is to adapt human-computer interaction to personality factors, overcoming the present situation in which these factors are introduced only implicitly in the interface (a feature that is probably responsible for many of the refusals or difficulties in using systems). This goal is similar to those of other groups currently working on emotion- and personality-based interaction. What we would like to obtain, in particular, is that, at their first interaction with some application, users are enabled to ‘declare’ their delegation attitude and to select the helping attitude and level they would like to see in the interface for that application. Although different attitudes should be enabled when interfacing with different applications (according to the user’s experience in that particular field), some common attitude, related to a general personality trait of the user (his or her overall ‘tendency to delegate’), should be taken as a baseline in setting the interface attitude for a new application.

Main References

K Binsted: A talking head architecture for entertainment and experimentation.

Proceedings of the Workshop on Emotional and Intelligent: The Tangled Knot of Cognition, 1998.

G Ball and J Breese: Emotion and personality in a conversational character.

Proceedings of the Workshop on Embodied Conversational Characters, Tahoe City, October 1998.

J Breese and G Ball: Bayesian networks for modeling emotional state and personality: progress report.

Proceedings of the Workshop on Emotional and Intelligent: The Tangled Knot of Cognition, 1998.

J Carbonell: Towards a process model of human personality traits. Artificial Intelligence, 15, 1980

C Castelfranchi and R Falcone: Towards a theory of delegation for agent-based systems.

Robotics and Autonomous Systems, Special Issue on Multiagent Rationality, 1998.

P R Cohen and H Levesque: Rational Interaction as the basis for communication. In Intentions in Communication. P R Cohen, J Morgan and M E Pollack (Eds), The MIT Press, 1990.

C D Dryer: Dominance and valence: a two-factor model for emotion in HCI.

Proceedings of the Workshop on Emotional and Intelligent: The Tangled Knot of Cognition, 1998.

B J Fogg and C Nass: Silicon sycophants: the effects of computers that flatter.

International Journal of Human-Computer Studies, 46, 1997

K Isbister and C Nass: Personality in conversational characters: building better digital interaction partners using knowledge about human personality preferences and perceptions.

Proceedings of the Workshop on Embodied Conversational Characters, Tahoe City, October 1998.

Y Moon and C Nass: Are computers scapegoats? Attributions of responsibility in human-computer interaction.

International Journal of Human-Computer Studies, 49, 1998

C Nass, Y Moon, B J Fogg, B Reeves and D C Dryer: Can computer personalities be human personalities?

International Journal of Human-Computer Studies, 43, 1995

C Nass, Y Moon and P Carney: Are respondents polite to computers? Social desirability and direct responses to computers. Journal of Applied Social Psychology, in press.

Papers on GOLEM and on XANTHIPPE may be downloaded from the URL: