Control States and Motivated Agency


Steve Allen

DFKI GmbH
(German Research Centre for Artificial Intelligence)
Deduction and Multiagent Systems
Stuhlsatzenhausweg 3
D-66123 Saarbrücken
Germany

Email: allen@dfki.de
Home: www.dfki.de/~allen


 
 

Extended Abstract

Sometimes a complex problem becomes more tractable when viewed from a different angle. Viewing minds as sophisticated self-modifying "control systems" leads to a new analysis of the concept of "representation": a representation is an information-bearing sub-state of a control system – in our terminology, a representation is a control state. This use of control states allows a control system to be represented as a number of functionally independent sub-systems operating asynchronously and at different rates. For example, a thermostat can be represented by three control states: belief-like, desire-like, and action-like. Each control state is an independent information-bearing sub-state of the control system – the thermostat can hold a belief that "the room is at 20°C", a desire to "make the temperature 23°C", and an action to "turn the radiator on".
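
To make the thermostat example concrete, the following Python sketch (our illustration, not drawn from the cited work; all class and attribute names are invented) renders the three control states as independent information-bearing sub-states, with the action-like state derived from the other two:

    # Illustrative sketch: a thermostat as three control states.
    class ControlState:
        """An information-bearing sub-state of a control system."""
        def __init__(self, content):
            self.content = content

    class Belief(ControlState): pass   # belief-like: "the room is at 20 C"
    class Desire(ControlState): pass   # desire-like: "make the temperature 23 C"
    class Action(ControlState): pass   # action-like: "turn the radiator on"

    class Thermostat:
        def __init__(self, sensed_temp, target_temp):
            self.belief = Belief(sensed_temp)
            self.desire = Desire(target_temp)
            self.action = Action(None)

        def step(self):
            # The action-like state is derived from the belief-like and
            # desire-like states; each sub-state can be updated independently.
            self.action.content = ("radiator on"
                                   if self.belief.content < self.desire.content
                                   else "radiator off")
            return self.action.content

    print(Thermostat(sensed_temp=20.0, target_temp=23.0).step())  # radiator on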

Our view of a "control system" is at odds with the standard notion used by physicists and control engineers. Conventional control systems have a fixed degree of complexity, and their behaviour can be completely described by a system of partial differential equations. The intelligent control systems we wish to describe do not have a fixed architecture; they are capable of development during the lifetime of the agent.
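
The contrast can be made concrete with a small sketch, under our own assumptions: a conventional controller's behaviour is fixed by an equation, whereas the agents we describe can extend their own architecture at runtime (the DevelopingAgent class and its methods are hypothetical illustrations, not an established API):

    # Fixed controller: behaviour fully determined by a fixed control law.
    def fixed_controller(temp, target, gain=1.0):
        return gain * (target - temp)

    # Developing agent: the set of control states is not fixed and can
    # grow during the agent's lifetime.
    class DevelopingAgent:
        def __init__(self):
            self.control_states = {}

        def acquire(self, name, rule):
            # Extend the architecture with a new control state.
            self.control_states[name] = rule

        def step(self, percept):
            return {name: rule(percept)
                    for name, rule in self.control_states.items()}

    agent = DevelopingAgent()
    agent.acquire("avoid-draughts",
                  lambda p: "close window" if p == "draught" else None)
    print(agent.step("draught"))  # {'avoid-draughts': 'close window'}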

Within intelligent systems, many of the control states exhibit changes that are more like changing structures than like changing values of numeric variables – beliefs become rigid attitudes, and learnt behaviours become honed skills. Formally, we define two types of attribute for a control state: dimensional and structural. Dimensional attributes are quantitative attributes such as duration and intensity. Structural attributes are predicates, relations, and propositions. Values of dimensional attributes can be expressed in terms of the structural attributes – e.g. the duration of an emotion can be expressed in terms of the propensity of a motivator to distract and hold attention, and the consequent emergence of a perturbant state.
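
The dimensional/structural distinction might be rendered as follows. This is an illustrative sketch only; names such as Motivator and insistence are our own labels chosen to echo the text, not an API from the cited work:

    from dataclasses import dataclass

    @dataclass
    class Motivator:
        # Structural attributes: predicates and propositions.
        proposition: str            # e.g. "the deadline is tomorrow"
        # Dimensional attributes: quantitative values.
        insistence: float = 0.0     # propensity to distract and hold attention
        duration: float = 0.0       # how long it has held attention

        def perturbs(self, attention_threshold: float) -> bool:
            # A perturbant (emotional) state emerges when insistence is
            # high enough for the motivator to keep grabbing attention.
            return self.insistence > attention_threshold

    m = Motivator(proposition="the deadline is tomorrow", insistence=0.9)
    print(m.perturbs(attention_threshold=0.7))  # True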

One of the challenges faced by researchers modelling the behaviour of life-like characters is the need for a systematic framework in which to ask questions about the types of control states such life-like agents might possess, and how those different control states might interact. We propose a solution based on a recursive "design-based" research methodology, wherein each new design gradually increases our explanatory power and allows us to account for more and more of the phenomena of interest. These "broad but shallow" complete agents help to clarify our understanding of the attributes of different control states and their interaction within a multi-layered agent architecture (composed of reactive, deliberative and meta-management layers). Early experiments have concentrated on the requirements of goal processing, the emergence of perturbant (emotional) states, and the relationship between motives, goals, emotions, and personality. By describing a variety of functions using the "design stance" at the information level, and showing how they implement mental states and processes, we aim to provide a rich and deep explanatory framework for motivated autonomous agency.
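
A minimal sketch of such a three-layer agent loop is given below. The layer responsibilities follow the text; everything else (class names, the stand-in behaviours, the sequential stepping of layers that would in principle run concurrently) is our own simplifying assumption:

    class ReactiveLayer:
        def step(self, percepts):
            # Fast, fixed stimulus-to-response mappings.
            return [("reflex", p) for p in percepts]

    class DeliberativeLayer:
        def step(self, percepts, goals):
            # Slower goal processing: generate, evaluate and select plans.
            return [("plan-for", g) for g in goals]

    class MetaManagementLayer:
        def step(self, deliberation_trace):
            # Monitors and redirects deliberation; losing this control is
            # one route to perturbant states.
            return {"attention-ok": len(deliberation_trace) < 10}

    class Agent:
        def __init__(self):
            self.reactive = ReactiveLayer()
            self.deliberative = DeliberativeLayer()
            self.meta = MetaManagementLayer()
            self.goals = ["recharge"]

        def step(self, percepts):
            reactions = self.reactive.step(percepts)
            plans = self.deliberative.step(percepts, self.goals)
            status = self.meta.step(plans)
            return reactions, plans, status

    print(Agent().step(["obstacle-ahead"]))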
 

Pointers to Online Documentation

Allen, S. R. (1999). Concern Processing in Autonomous Agents. Working draft of PhD Thesis, School of Computer Science, University of Birmingham.
(http://www.cs.bham.ac.uk/~sra/Thesis)

Beaudoin, L. P. (1994). Goal Processing in Autonomous Agents. PhD Thesis, School of Computer Science, University of Birmingham.
(ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/Luc.Beaudoin_thesis.ps.Z)

Burt, A. (1998). Modelling Motivational Behaviour in Intelligent Agents in Virtual Worlds. In Proceedings of the 1998 Conference on Virtual Worlds and Simulation.

Sloman, A. (1993). The mind as a control system. In C. Hookway and D. Peterson (Eds.), Proceedings of the 1992 Royal Institute of Philosophy Conference 'Philosophy and the Cognitive Sciences'. Cambridge: Cambridge University Press, pages 69-110.
(ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/Aaron.Sloman_Mind.as.controlsystem.ps.Z)

Sloman, A. (1999). Architectural Requirements for Human-like Agents Both Natural and Artificial (What sorts of machines can love?). To appear in K. Dautenhahn (Ed.), Human Cognition and Social Agent Technology, "Advances in Consciousness Research" series, John Benjamins Publishing.
(ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/Sloman.kd.ps)

Sloman, A. and Logan, B. S. (1998). Architectures and Tools for Human-Like Agents. In F. Ritter and R. M. Young (Eds.), Proceedings of the 2nd European Conference on Cognitive Modelling. Nottingham: Nottingham University Press, pages 58-65.
(ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/Sloman.and.Logan.eccm98.ps.gz)

Sloman, A. and Poli, R. (1996). SIM_AGENT: A toolkit for exploring agent designs. In Proceedings of the IJCAI Workshop on Agent Theories, Architectures, and Languages (ATAL'95). Springer-Verlag Lecture Notes in Computer Science, 1996.
(ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/Aaron.Sloman_Riccardo.Poli_sim_agent_toolkit.ps.Z)

Wright, I. P. (1997). Emotional Agents. PhD Thesis, School of Computer Science, University of Birmingham.
(http://www.cs.bham.ac.uk/~ipw/thesis.ps.Z)