Mindreading virtual characters
Logic-based techniques to read others' minds from their observed behavior
The term ‘mindreading’ (alarmingly esoteric as it may sound!) designates, in psychological terminology, an everyday human activity: thinking about what others believe or want. Our goal is to make virtual characters a bit more human by providing them with mindreading capabilities grounded in observable behavior. BDI-based virtual characters are designed, or even programmed, in terms of their ‘mental state’, i.e. their beliefs, goals, and plans. This powerful high-level abstraction allows for flexible behavior, and several tools (formalisms and software) exist for specification. However, little work deals with the inverse direction of inferring mental states from observed behavior (mindreading), which is a desirable capability for virtual characters that should exhibit awareness of the mental states of others. Our work aims to contribute in this respect by formalizing mindreading for BDI-based agents with regard to the observed behavior of other BDI-based characters and of human players.
We have developed a generic approach for relating observable behavior to plans, which allows the mental states of BDI-based agents to be inferred from knowledge of the rules that connect mental states to plans. This approach deals with plans that are only partially observed, which can occur because the agent is still busy executing its plan and/or because not all of its actions are observed. Inference of others' mental states along the lines of our framework is defeasible, meaning that conclusions, although plausible, could be false. In line with this view, a valid interpretation of an observed sequence of actions is that such a sequence represents a set of different ‘possible pasts’ in which the observed agent had different mental states, one of which was its actual mental state. We have formalized this view for the case in which all actions are observed, in a framework based on dynamic logic. In the case of human players, a virtual mindreader has even less information available than in the case of software agents. Knowledge of characteristics of the environment is then a possible source of information, which we have investigated formally by means of logic, and practically by implementing the classical Sally-Anne false-belief test scenario in the 2APL agent programming language.
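The core of this defeasible inference can be sketched in a few lines of Python. The plan library below is entirely hypothetical and covers only the simplest case, where the observed actions form a prefix of some plan (the framework also handles plans in which intermediate actions go unobserved); each matching mental state is one ‘possible past’.

```python
# A minimal sketch of defeasible mental-state inference. The plan
# library maps (goal, belief) pairs to action sequences; all entries
# are illustrative, not taken from the actual formalism.
PLAN_LIBRARY = {
    ("drink_coffee", "machine_works"): ["walk_to_machine", "insert_coin", "press_button"],
    ("get_change",   "machine_works"): ["walk_to_machine", "insert_coin", "press_return"],
    ("inspect",      "machine_broken"): ["walk_to_machine", "open_panel"],
}

def possible_pasts(observed):
    """Return every (goal, belief) pair whose plan is consistent with
    the observed (possibly incomplete) action sequence."""
    return [
        state
        for state, plan in PLAN_LIBRARY.items()
        if plan[:len(observed)] == observed
    ]

print(possible_pasts(["walk_to_machine", "insert_coin"]))
# Both the coffee-related and the change-related mental states remain
# possible; observing one more action defeats one of the conclusions.
```

Note how the conclusion set shrinks monotonically as more actions are observed, which is exactly what makes the earlier, larger set defeasible.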
Inferred mental states can be considered explanations for observed behavior, and the fact that defeasible inference generates multiple possibilities warrants a search for means to select a ‘best explanation’ from among them. In this regard we have already considered information from organizational context (roles and norms), and we intend to incorporate other sources such as spatial distance metrics or probabilities extracted from past observations. Logic programming approaches come to mind for implementation and evaluation of our methods, among which answer set programming offers some promising possibilities that we intend to explore. Furthermore, recent work in agent programming has focused on the specification of emotions in terms of BDI concepts, and we consider applying our insights to such specifications in order to formalize the inference of (particular) emotions from observed behavior.
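The role of organizational context in selecting a best explanation can be illustrated with a toy ranking function. The role-fit scores below are invented for illustration and stand in for whatever weight roles and norms contribute in the actual framework.

```python
# Hypothetical ranking of candidate explanations by organizational
# context: a goal that fits the observed agent's role is preferred.
# All roles, goals and scores are illustrative placeholders.
ROLE_FIT = {
    ("customer", "drink_coffee"): 0.9,
    ("customer", "inspect"):      0.1,
    ("barista",  "drink_coffee"): 0.2,
    ("barista",  "inspect"):      0.9,
}

def best_explanation(candidate_goals, role):
    """Pick the candidate goal that best fits the agent's role."""
    return max(candidate_goals, key=lambda goal: ROLE_FIT.get((role, goal), 0.0))

print(best_explanation(["drink_coffee", "inspect"], role="customer"))  # → drink_coffee
```

Other information sources mentioned above (spatial distance, observation frequencies) would simply contribute further terms to the scoring function.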
2.2 Modeling Cognitive Behavior of Virtual Characters
Utrecht, Intelligent Systems Group
M.P. Sindlar, M.M. Dastani, F.Dignum & J.-J.Ch. Meyer (2008), Mental State Abduction of BDI-Based Agents, Proceedings of the 6th International Workshop on Declarative Agent Languages and Technologies (DALT 2008), pp. 161-178.
M.P. Sindlar, M.M. Dastani & J.-J.Ch. Meyer (2009), BDI-Based Development of Virtual Characters with a Theory of Mind, Proceedings of the 9th International Conference on Intelligent Virtual Agents (IVA 2009), pp. 34-41.
M.P. Sindlar, M.M. Dastani & J.-J.Ch. Meyer (2010), Mental State Ascription Using Dynamic Logic, Proceedings of the 19th European Conference on Artificial Intelligence (ECAI 2010), to be published.
John-Jules Meyer, Utrecht University
Explaining virtual character behavior
Behavior of virtual characters in games sometimes seems incomprehensible. But if people are supposed to learn from a game, they need to understand why characters behave the way they do. Therefore, virtual characters are being developed that are able to explain the reasons for their actions. Serious games are used for training of complex tasks like leadership, crisis management and negotiation. Virtual characters play the trainee's team members, colleagues or opponents. For effective training, the characters need to display realistic behavior. Moreover, it is important that the trainee understands why virtual characters behave the way they do. For instance, why did virtual team members not follow the instructions of their leader, or why did a negotiation partner make a certain bid? New technologies will allow players to actually ask virtual characters for the reasons behind their actions.
Human explanations have been taken as a starting point for developing virtual characters that are able to explain their behavior. Humans usually explain their own and others' actions in terms of underlying desires, intentions and beliefs. For instance, ‘I did this because I wanted to...’, or ‘he did that because he thought that...’. To obtain similar explanations from virtual characters, they are programmed in a BDI programming language, in which their Beliefs, Desires (goals) and Intentions (plans) are explicitly specified. This makes it possible to explain each action of a character by the particular beliefs, goals and plans underlying that action. Often a set of goals and beliefs is responsible for one action, but not all of them are needed to explain it. To avoid overly long explanations, a selection of the beliefs and goals must be made. In experiments, subjects were asked to provide feedback on possible explanations of virtual characters' behavior. The results showed that for some types of actions beliefs were considered more useful as an explanation ("I thought that ..."), whereas for other action types goals were preferred ("I wanted to ..."). Furthermore, it was found that students more often preferred belief-based explanations, whereas teachers tended to prefer explanations in terms of goals.
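The selection step can be sketched as follows. The action-type categories and the preference mapping are hypothetical stand-ins for the per-action-type preferences found in the experiments; only the overall shape of the mechanism is meant to be illustrative.

```python
# A sketch of explanation selection: each action is backed by the
# beliefs and goals that triggered it, but only one element is
# surfaced, chosen by an assumed per-action-type preference.
PREFERENCE = {
    "reactive":  "belief",  # e.g. responding to an event → "I thought that ..."
    "proactive": "goal",    # e.g. initiating a plan     → "I wanted to ..."
}

def explain(action_type, beliefs, goals):
    """Return a short first-person explanation for an action, picking
    either the triggering belief or the motivating goal."""
    if PREFERENCE.get(action_type) == "belief":
        return f"I thought that {beliefs[0]}"
    return f"I wanted to {goals[0]}"

print(explain("reactive",
              beliefs=["the building was on fire"],
              goals=["evacuate the building"]))
# → "I thought that the building was on fire"
```

The student/teacher finding mentioned above suggests the preference table could also be conditioned on the audience, not only on the action type.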
Besides a virtual character's beliefs and goals, other factors like theory of mind, emotions or norms may explain its behavior. A theory of mind refers to the ability to attribute beliefs and goals to others. For instance, a character will act differently when it knows about someone else's plans, when it is angry, or when it has to obey certain rules. An approach to incorporate these aspects into the BDI models of virtual characters is currently being developed. Finally, an experiment is being prepared in which the effect of explanations by virtual characters on learning is investigated. Two groups of subjects will play a training game, and afterwards their understanding of the played session will be measured. In between, one group will receive explanations from virtual characters, and the other group will not. The results of this study will show the contribution of explaining virtual character behavior.
Utrecht University, TNO
Harbers et al. (2009), A methodology for developing self-explaining agents for virtual training, Proc. MALLOW 2009.
Harbers et al. (2009), A study into preferred explanations of agent behavior, Proc. International Conference on Intelligent Virtual Agents (IVA 2009), pp. 132-145.
Harbers et al. (2009), Modeling agents with a theory of mind, Proc. International Conference on Intelligent Agent Technology, pp. 217-224.
Karel van den Bosch, TNO
Social virtual humans
Enhancing conversational virtual humans with social and emotional capabilities.
The demand for virtual humans that can engage in sophisticated dialogue with players of serious games and training simulations is rapidly increasing. To meet this demand, we aim to build new models of dialogue systems that incorporate social and emotional capabilities. Such enhancements increase the believability and realism of virtual humans. Virtual humans in serious games and training simulations need to fulfill the same function as humans in the real-life equivalent of such situations. These functions may include performing training-related behaviors and providing explanations for the selection of those behaviors, for which they may need to engage in natural conversation with the user. Some applications of virtual humans may concentrate entirely on dialogue, for instance in the case of language and cultural learning. At the University of Twente we are working on improving the conversational skills of virtual humans by enhancing them with capabilities to recognize and display social and emotional behavior.
To enhance our conversational virtual humans, we needed a solid and plausible framework of the cognitive processes that manage the processing, selection and realization of conversational behavior. In conversations, humans do not merely exchange information; they also engage in a social and emotional relationship. We have therefore examined various conversational virtual human systems, as well as psychological approaches to cognitive, emotional and social processes, to gain insight and inspiration. This has resulted in a cognitive model for virtual humans that represents the manner in which humans practically reason about mental abilities. In particular, the model tries to account for basic Theory of Mind modeling, i.e. reasoning about the intentions and emotions of the interlocutor. Through this model a virtual human is able to form beliefs about the world and to have goals and intentions it wants to realize through conversational behavior. Furthermore, the model takes into account the emotional state of the virtual human and the social relation it has with its interlocutor, so that it can calculate the expected effects of its conversational moves on the other's mental state. Subsequently, we have studied how conversational behavior can be associated with the various components of the cognitive model.
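The move-selection step described above can be sketched as scoring candidate conversational moves by their expected effect on the interlocutor, modulated by the speaker's own emotional state and the social relation. All moves, effect values and weights below are invented for illustration; they are not the model's actual parameters.

```python
# A toy version of conversational move selection: the virtual human
# scores each candidate move by its expected effect on the
# interlocutor's attitude, adjusted for its own emotion and the
# social relation. All numbers are illustrative assumptions.
MOVE_EFFECTS = {
    "apologize": +0.6,
    "joke":      +0.3,
    "criticize": -0.4,
}

def select_move(own_anger, social_distance):
    """Return the move with the highest adjusted expected effect.
    own_anger and social_distance are assumed to lie in [0, 1]+."""
    def score(move):
        effect = MOVE_EFFECTS[move]
        if move == "joke":        # joking is riskier with distant partners
            effect -= 0.5 * social_distance
        if move == "criticize":   # anger raises the tendency to criticize
            effect += own_anger
        return effect
    return max(MOVE_EFFECTS, key=score)

print(select_move(own_anger=0.0, social_distance=0.8))  # → apologize
```

The point of the sketch is the shape of the computation: emotion and social relation do not pick a move directly, but reweight the expected effects of all moves.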
Currently we are studying how to model the relation between individual conversational behaviors and high-level cognitive processes such as intentions, emotions and social roles. Additionally, we investigate how the emotional state influences the selection and/or realization of conversational behavior. There is a distinction between the way emotions lead to a certain conversational behavior and the manner in which emotions affect the execution of that behavior. The same distinction holds for conversational behavior that is influenced by social rules. By modeling and integrating these relationships between cognition and behavior, conversational virtual humans will become more believable and human-like.
University of Twente
B. van Straalen et al. (2009), Enhancing embodied conversational agents with social and emotional capabilities, in "Agents for Games and Simulations", F. Dignum et al. (eds.), pp. 95-106.
Dirk Heylen, University of Twente