Navare, U.P., Ciardo, F., Kompatsiari, K., De Tommaso, D., & Wykowska, A. (2023). Neural markers of self-other integration in joint action – why attribution of intentionality matters. https://psyarxiv.com/djbn4
Wykowska, A. (2021). Robots as mirrors of the human mind. Current Directions in Psychological Science. https://doi.org/10.1177/0963721420978609
Abubshait, A., & Wykowska, A. (2020). Repetitive Robot Behavior Impacts Perception of Intentionality and Gaze-Related Attentional Orienting. Frontiers in Robotics and AI, 7:565825. doi: 10.3389/frobt.2020.565825
ABSTRACT
The present study highlights the benefits of using well-controlled experimental designs, grounded in experimental psychology research and objective neuroscientific methods, for generating progress in human-robot interaction (HRI) research. More specifically, we aimed to implement a well-studied paradigm of attentional cueing through gaze (the so-called “joint attention” or “gaze cueing”) in an HRI protocol involving the iCub robot. Consistent with documented results in gaze-cueing research, we found faster response times and enhanced event-related potentials in the EEG signal for discrimination of cued, relative to uncued, targets. These results are informative for the robotics community, as they show that a humanoid robot with mechanistic eyes and human-like facial characteristics is indeed capable of engaging a human in joint attention to a similar extent as another human would. More generally, we propose that combining neuroscience methods with an HRI protocol contributes to understanding the mechanisms of human social cognition in interactions with robots and to improving robot design, thanks to systematic and well-controlled experimentation tapping into specific human cognitive mechanisms, such as joint attention.
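To illustrate the kind of well-controlled design described in this abstract, the sketch below shows how a cued/uncued trial list for a gaze-cueing block might be generated. It is a minimal, hypothetical example: the trial count, cue validity, and the `build_trials` helper are assumptions for illustration, not values or code taken from the study.

```python
import random

# Hypothetical parameters, not the actual design of the reported study.
N_TRIALS = 80           # assumed number of trials per block
CUE_VALIDITY = 0.5      # assumed 50% cued / 50% uncued, counterbalanced
SIDES = ("left", "right")

def build_trials(n_trials=N_TRIALS, validity=CUE_VALIDITY, seed=0):
    """Return a shuffled trial list specifying where the robot looks (cue)
    and where the to-be-discriminated target appears."""
    rng = random.Random(seed)
    n_cued = int(n_trials * validity)
    trials = []
    for i in range(n_trials):
        cue_side = rng.choice(SIDES)
        cued = i < n_cued  # first n_cued trials are cued, then the list is shuffled
        target_side = cue_side if cued else SIDES[1 - SIDES.index(cue_side)]
        trials.append({"cue": cue_side, "target": target_side, "cued": cued})
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    trials = build_trials()
    # Response times for cued vs. uncued targets would later be compared,
    # together with target-locked ERPs from the EEG recording.
    print(trials[:3])
```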
ABSTRACT
Robots will soon enter social environments shared with humans. We need robots that can efficiently convey social signals during interactions. At the same time, we need to understand the impact of robots' behavior on the human brain. For this purpose, human behavioral and neural responses to robot behavior should be quantified, offering feedback on how to improve and adjust that behavior. Under this premise, our approach is to use methods of experimental psychology and cognitive neuroscience to assess humans' reception of a robot in human-robot interaction protocols. As an example of this approach, we report an adaptation of a classical paradigm of experimental cognitive psychology to a naturalistic human-robot interaction scenario. We show the feasibility of such an approach with a validation pilot study, which demonstrated that our design yielded a pattern of data similar to what has previously been observed in cognitive psychology experiments. Our approach allows specific mechanisms of human cognition elicited during human-robot interaction to be addressed and, in the longer term, will support the design of robots that are well attuned to the workings of the human brain.
ABSTRACT
The Social Cognition in Human-Robot Interaction (S4HRI) research line at the Istituto Italiano di Tecnologia (IIT) applies methods from experimental psychology and cognitive neuroscience to human-robot interaction studies. With this approach, we maintain excellent experimental control without losing ecological validity and generalizability, and can thus provide reliable results informing the design of robots that best evoke mechanisms of social cognition in the human interaction partner.
ABSTRACT
This workshop focuses on research in HRI using objective measures from social and cognitive neuroscience to provide guidelines for the design of robots well-tailored to the workings of the human brain. The aim is to present results from experimental studies in which human behavior and brain activity are measured during interactive protocols with robots. Discussion will focus on means to improve replicability and generalizability of experimental results in HRI.
ABSTRACT
Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by the participant on 80% of the trials (joint condition); in the other condition, it looked at the opposite object 80% of the time (disjoint condition). Based on the literature on human–human social cognition, we took the speed with which participants looked back at the robot as a measure of facilitated reorienting and robot preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the robot’s gaze-following behavior had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that participants preferred the iCub that followed their gaze over the one showing disjoint attention behavior, and rated it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which provides clear hints for the design of behavioral characteristics of humanoid robots in more naturalistic settings.
Willemse, C., Marchesi, S., & Wykowska, A. (2018). Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability. Frontiers in Psychology, 9:70. doi: 10.3389/fpsyg.2018.00070
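The gaze-contingent manipulation described in this abstract, where the avatar looks at the participant’s chosen object on 80% of trials in the joint condition and at the opposite object on 80% of trials in the disjoint condition, can be summarized by a simple decision rule. The sketch below is a hypothetical illustration of that rule only; the function name and the per-trial random draw are assumptions, not the study’s actual implementation.

```python
import random

# Probability that the avatar's gaze matches the participant's choice,
# taken from the abstract: 80% in the joint condition, 20% in the disjoint one.
FOLLOW_PROBABILITY = {"joint": 0.8, "disjoint": 0.2}

def robot_gaze_target(chosen_object: str, other_object: str,
                      condition: str, rng: random.Random) -> str:
    """Decide which object the iCub avatar looks at on a given trial."""
    follow = rng.random() < FOLLOW_PROBABILITY[condition]
    return chosen_object if follow else other_object

if __name__ == "__main__":
    rng = random.Random(42)
    # Example trial: the participant looked at the object on the left.
    gaze = robot_gaze_target("left_object", "right_object", "joint", rng)
    print(gaze)  # matches the participant's choice on roughly 80% of joint trials
```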
ABSTRACT
The iCub, an open-source humanoid robot child, is the result of a successful initiative supporting research in embodied artificial intelligence.
Natale, L., Bartolozzi, C., Pucci, D., Wykowska, A., & Metta, G. (2017). The not-yet-finished story of building a robot child. Science Robotics, Vol. 2, Issue 13, eaaq1026. DOI: 10.1126/scirobotics.aaq1026
ABSTRACT
Mutual gaze is a key element of human development and constitutes an important factor in human interactions. In this study, we examined, through analysis of subjective reports, the influence of online eye contact of a humanoid robot on humans’ reception of the robot. To this end, we manipulated the robot’s gaze, i.e., mutual (social) gaze versus neutral (non-social) gaze, throughout an experiment involving letter identification. Our results suggest that people are sensitive to the mutual gaze of an artificial agent, that they feel more engaged with the robot when mutual gaze is established, and that eye contact supports attributing human-like characteristics to the robot. These findings are relevant both to human-robot interaction (HRI) research, for enhancing the social behavior of robots, and to cognitive neuroscience, for studying mechanisms of social cognition in relatively realistic social interactive scenarios.
Kompatsiari, K., Tikhanoff, V., Ciardo, F., Metta, G., & Wykowska, A. (2017). The Importance of Mutual Gaze in Human-Robot Interaction. In: Kheddar, A. et al. (eds) Social Robotics. ICSR 2017. Lecture Notes in Computer Science, vol 10652, Springer, 443-452. DOI: 10.1007/978-3-319-70022-9_44
ABSTRACT
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
Wiese, E., Metta, G., & Wykowska, A. (2017). Robots as Intentional Agents: Using neuroscientific methods to make robots appear more social. Frontiers in Psychology, 8:1663, DOI: 10.3389/fpsyg.2017.01663