(German Research Centre for Artificial Intelligence)
Deduction and Multiagent Systems
Our view of a "control system" is at odds with the standard notion of a "control system" used by physicists and control engineers. Conventional control systems have a fixed degree of complexity, and their behaviour can be completely described by a system of partial differential equations. The intelligent control systems that we wish to describe do not have a fixed architecture and are capable of development during the lifetime of the agent.
Within intelligent systems, many of the control states exhibit changes that are more like changing structures than like changing values of numeric variables – beliefs become rigid attitudes, and learnt behaviours become honed skills. Formally, we define two types of attributes for a control state: dimensional and structural. Dimensional attributes are quantitative attributes such as duration and intensity. Structural attributes are predicates, relations, and propositions. Values of dimensional attributes can be expressed in terms of the structural attributes – e.g. the duration of an emotion can be expressed in terms of the propensity of a motivator to distract and hold attention and the consequent emergence of a perturbant state.
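The distinction between dimensional and structural attributes can be sketched in code. This is only an illustration of the typology described above; the class name, fields, and the rule inside is_perturbant are invented for this example and are not taken from the work itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a control state with the two attribute types
# described above. All names are illustrative assumptions.

@dataclass
class ControlState:
    name: str
    # Dimensional attributes: quantitative values.
    duration: float = 0.0      # seconds the state has persisted
    intensity: float = 0.0     # normalised 0..1
    # Structural attributes: predicates/propositions describing the state.
    propositions: set[str] = field(default_factory=set)

    def is_perturbant(self) -> bool:
        # Illustrative rule only: a state counts as perturbant if a
        # motivator proposition is marked as holding attention.
        return "holds_attention(motivator)" in self.propositions


state = ControlState("grief", duration=3600.0, intensity=0.8,
                     propositions={"holds_attention(motivator)"})
print(state.is_perturbant())  # True
```

Note how the dimensional value (a long duration) is here grounded in a structural fact (the motivator holding attention), mirroring the dependency described above.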
One of the challenges faced by researchers in the behaviour
modelling of life-like characters is the need to develop a systematic framework
in which to ask questions about the types of control states such life-like
agents might possess, and how those different control states might interact.
We propose a solution based on a recursive "design-based" research methodology
– wherein each new design gradually increases our explanatory power and
allows us to account for more and more of the phenomena of interest. These
"broad but shallow" complete agents help to clarify our understanding of
the attributes of different control states and their interaction within
a multi-layered agent architecture (composed of reactive, deliberative
and meta-management layers). Early experiments have concentrated on: the
requirements of goal-processing; the emergence of perturbant (emotional)
states; and the relationship between motives, goals, emotions, and personality.
By describing a variety of functions using the "design stance" at the information-level,
and showing how they implement mental states and processes, we aim to provide
a rich and deep explanatory framework for motivated autonomous agency.
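The multi-layered architecture named above (reactive, deliberative, and meta-management layers) might be sketched as a pipeline of layers over shared state. The layer behaviours below are invented for demonstration; the abstract specifies no concrete interface.

```python
# Illustrative sketch, assuming each layer reads percepts and
# updates a shared state dictionary. All behaviour here is hypothetical.

class Layer:
    def process(self, percepts, state):
        raise NotImplementedError

class ReactiveLayer(Layer):
    def process(self, percepts, state):
        # Fast stimulus->response mappings.
        if "obstacle" in percepts:
            state["action"] = "avoid"
        return state

class DeliberativeLayer(Layer):
    def process(self, percepts, state):
        # Slower goal processing: adopt a default goal when none is active.
        state.setdefault("goal", "explore")
        return state

class MetaManagementLayer(Layer):
    def process(self, percepts, state):
        # Monitors the other layers, e.g. noticing a reactive override.
        if state.get("action") == "avoid":
            state["note"] = "reactive override observed"
        return state

def agent_step(layers, percepts, state):
    for layer in layers:
        state = layer.process(percepts, state)
    return state

layers = [ReactiveLayer(), DeliberativeLayer(), MetaManagementLayer()]
print(agent_step(layers, {"obstacle"}, {}))
# → {'action': 'avoid', 'goal': 'explore', 'note': 'reactive override observed'}
```

The design choice of interest is that the meta-management layer inspects the outputs of the lower layers rather than the environment directly, which is one way such a layer could detect perturbant states.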