Interactive Intelligence

The Interactive Intelligence (II) section focuses on social, intelligent agents. We research the intelligence that underlies, and co-evolves during, the repeated interactions of human and technology "agents" who cooperate to achieve a joint goal. Our research program aims for synergy and social interaction between humans and technology, to empower humans in their social context. The new technological challenges we face arise from the need to integrate Artificial Intelligence, Cognitive Engineering, and the behavioural sciences. In particular, the technological challenge is to develop socially aware agents that co-adapt and co-learn over time in interaction with humans. Social awareness implies context awareness: the knowledge to interpret the physical situation in social terms, and the knowledge to behave in a distinctive, individual way that is personalized towards those the agent interacts with. In this manner, we endeavour to develop the interactive agent technology that empowers humans and groups of humans to deal with societal and individual challenges such as the increasing need for sustained self-management for healthy ageing, safety, and life-long education.


Research themes

Autonomous decision systems:
This research theme focuses on generic agent-oriented methods, techniques, and platforms to design and develop multi-agent systems. This includes agent programming for developing cognitive agent systems, as well as formal approaches such as logic and game theory to analyse and verify such systems in order to enhance their reliability. Agent technology is a key enabler for open systems such as electronic markets, interactive decision support systems, and hybrid teams (consisting of human and artificial actors), and for simulating complex systems. The work of the II staff in this area is widely recognized. II's strategy is to strengthen the group's international position in agent-programming languages, agent-based simulation, organizational modelling, and formal approaches for intelligent systems.

1. Agent Programming
2. Shared Mental Models
3. Socio-Cognitive Robotics
4. Negotiation
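The cognitive agent systems mentioned above are typically built around a deliberation cycle over beliefs and goals. The following is a minimal sketch of such a cycle; the class names, the rule format, and the "make coffee" example are illustrative assumptions, not part of any specific platform developed by the group.

```python
# Minimal sketch of a cognitive (BDI-style) agent deliberation cycle.
# Names and the toy domain are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    goals: list = field(default_factory=list)   # goals in priority order
    # plan rules: goal -> (guard belief, action applicable when guard holds)
    rules: dict = field(default_factory=dict)

    def deliberate(self):
        """Pick the first unachieved goal and return an applicable action."""
        for goal in self.goals:
            if goal in self.beliefs:
                continue                        # goal already satisfied
            guard, action = self.rules[goal]
            if guard is None or guard in self.beliefs:
                return action
        return None                             # nothing to do

    def act(self, action, effect):
        """Execute an action by updating beliefs with its effect (simulated)."""
        self.beliefs.add(effect)

# Usage: a trivial agent with one goal and one plan rule
agent = Agent(
    beliefs={"has_water"},
    goals=["coffee_ready"],
    rules={"coffee_ready": ("has_water", "brew")},
)
action = agent.deliberate()         # guard "has_water" holds, so "brew"
agent.act(action, "coffee_ready")   # effect satisfies the goal
```

The separation of beliefs, goals, and plan rules is what enables the formal analysis and verification mentioned above: each component can be inspected and reasoned about independently.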

Cognitive-affective agents:
For cognitive-affective agents and robots to be successfully integrated into our lives, they need cognitive reasoning, decision making, learning, and emotional capabilities. Learning is essential to cope with the diversity of environmental and task settings in which these agents should operate. Cognitive reasoning is essential to cope with the complexity of tasks and to communicate about these tasks with human users. Affective abilities are needed to facilitate interaction between humans and cognitive-affective agents. Research in this theme focuses on the development and integration of learning, cognitive reasoning, and affective abilities.
For example, we develop the cognitive agent programming language GOAL, which enables programmers to easily develop, debug, and deploy cognitive agents [1]. We develop emotion simulation techniques that can be integrated with such cognitive agents, such as the emotional appraisal engine GAMYGDALA [2]. Further, we focus on the integration of reinforcement learning into both cognitive agents and emotional appraisal [3]. We use different methods to study these abilities, including human-robot interaction studies with the NAO robot [4], but also virtual agent settings and simulation.
1. Hindriks, K. and J.J. Meyer, Toward a programming theory for rational agents. Autonomous Agents and Multi-Agent Systems, 2009. 19(1): p. 4-29.
2. Popescu, A., J. Broekens, and M.v. Someren, GAMYGDALA: An Emotion Engine for Games. IEEE Transactions on Affective Computing, 2014. 5(1): p. 32-44.
3. Broekens, J., E. Jacobs, and C.M. Jonker, A reinforcement learning model of joy, distress, hope and fear. Connection Science, 2015: p. 1-19.
4. Xu, J., et al., Mood contagion of robot body language in human robot interaction. Autonomous Agents and Multi-Agent Systems, 2015. 29(6): p. 1216-1248.
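The integration of reinforcement learning and emotional appraisal described above can be sketched as deriving emotion-like signals from learning quantities. The mappings below (joy/distress from the sign of the temporal-difference error, hope/fear from the anticipated value) are a loose, simplified illustration of the idea, not the published model of [3]; the toy environment is an assumption.

```python
# Loose sketch: deriving emotion-like signals from reinforcement-learning
# quantities. Mappings and environment are simplified illustrations only.

import random

random.seed(0)

states, actions = range(4), ("left", "right")
Q = {(s, a): 0.0 for s in states for a in actions}  # action-value estimates
alpha, gamma = 0.5, 0.9                             # learning rate, discount

def step(s, a):
    """Toy environment: 'right' moves toward state 3, which pays reward 1."""
    s2 = min(s + 1, 3) if a == "right" else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0)

def appraise(td_error, anticipated_value):
    """Map learning signals to crude emotion intensities (illustrative)."""
    return {
        "joy":      max(td_error, 0.0),         # reward above expectation
        "distress": max(-td_error, 0.0),        # reward below expectation
        "hope":     max(anticipated_value, 0.0),
        "fear":     max(-anticipated_value, 0.0),
    }

emotions = {}
s = 0
for _ in range(50):
    a = random.choice(actions)                   # random exploration policy
    s2, r = step(s, a)
    v2 = max(Q[(s2, b)] for b in actions)        # anticipated future value
    td = r + gamma * v2 - Q[(s, a)]              # temporal-difference error
    emotions = appraise(td, v2)
    Q[(s, a)] += alpha * td                      # Q-learning update
    s = 0 if s2 == 3 else s2                     # restart episode at the goal
```

In this sketch, positive surprises register as joy and states with high anticipated value register as hope; since this toy task has only non-negative rewards, fear stays at zero.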


Behaviour change support systems:
Maladaptive coping strategies, changing demands, and aspirations to improve one's effectiveness or quality of life are all examples of motivations for individuals to desire a change in their behaviour, cognition, or attitude. Establishing long-lasting change is, however, difficult to achieve. In some cases, intelligent systems can successfully help individuals make this transition. To establish, modify, or maintain change, these systems can deploy computerized persuasive strategies without using coercion or deception. The group works on establishing empirically grounded design principles, models, tools, and scientific understanding of effective and acceptable computerized mechanisms that facilitate behaviour change. The research focuses specifically on the domain of health and wellbeing and on the training of professionals such as search-and-rescue teams and the military. The group is internationally known for its success in the areas of (1) virtual reality therapy systems, for example for the treatment of social anxiety disorder (e.g. the CATCH project) or PTSD (e.g. the VESP project), and (2) virtual health agents, for example in the context of self-management of chronic illness (e.g. the ADMIRE project) and insomnia treatment (e.g. the Sleepcare project).

Related key publications:

1. Hartanto, D., Brinkman, W. P., Kampmann, I. L., Morina, N., Emmelkamp, P. G., & Neerincx, M. A. (2015). Home-based virtual reality exposure therapy with virtual health agent support. In Pervasive Computing Paradigms for Mental Health (pp. 85-98). Springer International Publishing.
2. Cohen, I., Brinkman, W. P., & Neerincx, M. A. (2016). Effects of different real-time feedback types on human performance in high-demanding work conditions. International Journal of Human-Computer Studies, 91, 1-12.
3. Horsch, C., Lancee, J., Beun, R. J., Neerincx, M. A., & Brinkman, W. P. (2015). Adherence to technology-mediated insomnia treatment: A meta-analysis, interviews, and focus groups. Journal of Medical Internet Research, 17(9): e214.
4. Vakili, V., Brinkman, W. P., Morina, N., & Neerincx, M. A. (2014). Characteristics of successful technological interventions in mental resilience training. Journal of medical systems, 38(9), 1-14.
5. Hartanto D, Kampmann IL, Morina N, Emmelkamp PGM, Neerincx MA, Brinkman WP (2014) Controlling social stress in virtual reality environments. PLoS ONE 9(3): e92804. 
6. Blanson Henkemans, O. A., van der Boog, P. J., Lindenberg, J., van der Mast, C. A., Neerincx, M. A., & Zwetsloot-Schonk, B. J. (2009). An online lifestyle diary with a persuasive computer assistant providing feedback on self-management. Technology and Health Care, 17(3), 253-267.

Electronic Partners (ePartners):
This research theme aims at collections of electronic partners (ePartners) that support the social, cognitive, and affective processes in a group of humans and ePartners (e.g., the management of children's chronic diseases by these children and their caregivers). Each human group member has a personal ePartner with which they can enter into adjustable partnerships via agreements on these processes (e.g., on notifying the caregivers when the medical state changes suddenly). At group level, general policies, authorizations, and obligations can be defined. We aim at ePartners that can learn about human goals, preferences, and knowledge at the individual and at the group level (e.g., about individual responses and general response patterns to feedback). ePartners can be embodied as a (multimodal) user interface, a virtual agent, or a social robot. They can act as personal assistant agents, virtual coaches, location-sharing systems, and assistive robots, making our lives more connected, healthy, efficient, and safe by executing tasks on our behalf and guiding our actions. Whereas existing supportive technology is rigid in its realization of human social nature, hardwiring norms into the technology implicitly, ePartners address norms explicitly, allowing for flexibility. It is our vision that supportive technology should be able to adapt to the diverse and evolving norms of people in unforeseen circumstances, in order to better support people in their daily lives, at work and at home. This vision is being worked out in several domains, for example in ePartners that support astronauts in future manned missions beyond low-earth orbit (MECA project), that support children with diabetes, their parents, and their caregivers in learning to cope with the disease (PAL project), that support the reminiscence and activities of people with dementia (ReJAM project), and that help rail traffic controllers deal with anomalies in highly demanding situations (Railroad project).
Furthermore, we conceptualize the vision of "evolving situated norms" through the concept of a Socially Adaptive Electronic Partner (SAEP) that supports the daily activities of its owner. Our research focuses on the following challenges (see, e.g., the CoreSAEP project):

1. Interaction: how to shape a SAEP's interaction with people about norms?
2. Reasoning: how can a SAEP reason about, and learn how to act in the face of, norms and their (potential) violation?
3. Ethics: to what extent can SAEPs alleviate ethical concerns about the use of supportive technology?

Addressing these challenges requires the development of techniques that span the areas of normative agents, human-agent teamwork, and the ethics of AI.
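The contrast between hardwired and explicitly represented norms can be sketched as follows. The norm format, the compliance check, and the example norms below are illustrative assumptions only, not the group's actual formalism.

```python
# Minimal sketch of explicit norm representation and violation detection,
# illustrating norms as inspectable data rather than hardwired behaviour.
# The norm format and the example norms are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    kind: str        # "obligation" or "prohibition"
    action: str      # the action the norm regulates
    context: str     # condition under which the norm is active

def violations(norms, context, performed_actions):
    """Return the norms violated by the agent's actions in this context."""
    result = []
    for n in norms:
        if n.context != context:
            continue                                   # norm not active here
        if n.kind == "obligation" and n.action not in performed_actions:
            result.append(n)                           # required action omitted
        if n.kind == "prohibition" and n.action in performed_actions:
            result.append(n)                           # forbidden action taken
    return result

# Usage: a location-sharing ePartner with two user-set norms
norms = [
    Norm("obligation", "notify_caregiver", context="sudden_change"),
    Norm("prohibition", "share_location", context="at_home"),
]
v = violations(norms, context="at_home", performed_actions={"share_location"})
```

Because the norms are explicit data, the agent can detect a (potential) violation and then reason about it, for example by explaining it to the user or asking whether the norm should be relaxed, rather than having the behaviour fixed at design time.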

Related key publications:

1. M. Birna van Riemsdijk, Catholijn M. Jonker, Victor Lesser. Creating Socially Adaptive Electronic Partners: Interaction, Reasoning and Ethical Challenges. In Proceedings of the fourteenth international joint conference on autonomous agents and multiagent systems (AAMAS'15). 2015. IFAAMAS.
2. M. Birna van Riemsdijk, Louise Dennis, Michael Fisher, Koen V. Hindriks. A Semantic Framework for Socially Adaptive Agents: Towards strong norm compliance. In Proceedings of the fourteenth international joint conference on autonomous agents and multiagent systems (AAMAS'15). 2015. IFAAMAS.
3. Alex Kayal, Willem-Paul Brinkman, Rianne Gouman, Mark A. Neerincx, M. Birna van Riemsdijk. A Value-Centric Model to Ground Norms and Requirements for ePartners of Children. In Coordination, Organizations, Institutions, and Norms in Agent Systems IX (COIN'13), volume 8386 of LNCS, pages 329-345. 2014. Springer-Verlag.
4. M. Birna van Riemsdijk, Louise Dennis, Michael Fisher, Koen V. Hindriks. Agent reasoning for norm compliance: a semantic approach. In Proceedings of the twelfth international joint conference on autonomous agents and multiagent systems (AAMAS'13), pages 499-506. 2013. IFAAMAS.
5. J. van Diggelen and Mark Neerincx. Electronic partners that diagnose, guide, and mediate space crew's social, cognitive, and affective processes. In Proceedings of Measuring Behaviour 2010, pages 73–76, Wageningen, The Netherlands, 2010. Noldus Information Technology BV.