Trusting in Human-Robot Teams Given Asymmetric Agency and Social Sentience
In Proceedings of the 2013 AAAI Spring Symposium on Trust and Autonomous Systems, Stanford University, Stanford, CA, USA. AAAI Press, March 2013.
The paper discusses trusting, i.e. the active management of trust, in human-robot teams. It approaches the issue from the viewpoint of asymmetric agency and social sentience. The assumption is that humans and robots experience reality differently (asymmetry), and that a robot is endowed with an explicit, deliberative awareness of its role within the team and of the team's social dynamics (social sentience). A formal approach is outlined to provide the basis for a model of trusting in terms of (i) trust in information and how to act upon it, as judgements about actions and interactions at the task level, and (ii) how trust between actors in a team is reflected in the way social dynamics are directed over time, at the team level. The focus is thus primarily on integrating trust and its adaptation into the dynamics of collaboration.