Movement Matters – A Turing Test for Robot Interaction
Dr. Pablo Lanillos

Affiliation: Spanish National Research Council
Website: https://neuro-ai-robotics.github.io/
Short Bio: Pablo Lanillos leads the Neuro AI and Robotics group at the Spanish National Research Council and holds a co-affiliation with the Donders Institute for Brain, Cognition and Behaviour in the Netherlands. His team develops neuroscience-inspired artificial intelligence algorithms to achieve human-like perception and action in robots. His leitmotif is to transform our understanding of human cognition into the technologies of the future. He completed his doctoral studies in Computer Engineering at the Complutense University of Madrid, received a Marie Skłodowska-Curie fellowship at the Technical University of Munich, and gained tenure as an Assistant Professor at Radboud University Nijmegen.
Talk Title: “I, Robot”: An Embodied Turing Test
Abstract: We decided to build a robotic self to understand the principles of embodied intelligence. Our starting point was investigating how the brain perceives and controls the body, and our endpoint is achieving physical intelligence. By isolating the computational components that lie behind self-construction with a body, we replicated different classical human experiments in robots, such as the rubber-hand illusion and mirror self-recognition. But the key to the ultimate embodied Turing test is interaction: a test where the “I” participates as a main ingredient for goal-directed actions and problem-solving.
Prof. Agnieszka Wykowska

Affiliation: Italian Institute of Technology (IIT)
Website: https://www.iit.it/people/agnieszka-wykowska
Short Bio: Agnieszka Wykowska is a Principal Investigator at the Italian Institute of Technology (IIT) and leads the Social Cognition in Human-Robot Interaction Lab. Her research focuses on understanding human cognitive mechanisms in interaction with artificial agents, using neuroscience methods to investigate how people perceive and attribute intentionality to robots. She obtained her Ph.D. in Psychology from Ludwig Maximilian University of Munich and has held research positions at leading European institutions in cognitive science and robotics.
Talk Title: Joint action with a humanoid: how a robot’s behavioural human-likeness affects human cognitive mechanisms
Abstract: In daily life we do not act in a social vacuum, but often perform tasks together with others, a social activity termed “joint action”. In cognitive neuroscience, joint actions are defined as “any form of social interaction whereby two or more individuals coordinate their actions in space and time to bring about a change in the environment” (Sebanz et al., 2006). Various cognitive mechanisms are involved when we engage in joint actions with others, for example, shared representations, sensorimotor coordination, and goal sharing. In my talk, I will present results from three different studies in which participants were engaged in a joint action with the humanoid robot iCub. The studies showed how the human-likeness of iCub’s behaviour affected various socio-cognitive mechanisms in the human, namely, their behavioural variability (signalling coordination); their sense of joint agency with the robot; and ultimately, their attribution of humanness to the robot (measured in a nonverbal Turing test). Together, these results demonstrate that the human brain is highly sensitive to even subtle human-like behaviours of a robot. Therefore, to achieve intuitive and human-like interaction, robots need to exhibit human-like behaviour at various levels.
Prof. Minha Lee

Affiliation: Eindhoven University of Technology (TU/e)
Website: https://www.tue.nl/en
Short Bio: Minha Lee is an Assistant Professor in the Department of Industrial Design at Eindhoven University of Technology (TU/e), where she investigates ethical and emotional aspects of human interaction with artificial agents. Her work explores how design can shape human perceptions of AI and robots, with a focus on moral emotions, transparency, and empathy in social technologies. She combines experimental methods with design research to examine the relational and philosophical dimensions of human-AI encounters. Prof. Lee holds a Ph.D. in Philosophy and has an interdisciplinary background spanning design, cognitive science, and ethics.
Talk Title: Movement matters? Information overload in HRI
Abstract: We attribute “minds” to robots not only through what they say, but also through how they move. Verbal reasoning and transparency cues may signal competence, yet perceptions of warmth and trust often hinge on non-verbal behaviours such as gaze, timing, and motion. Our study of human–robot moral debates found that while added transparency information increased competence ratings, shifts in perceived intentionality and trust were more strongly influenced by embodied cues. This raises a tension: movement is vital for conveying intelligence, but the combination of verbal and non-verbal behaviours can overwhelm or bias users, reducing clarity rather than enhancing it. Striking a balance between cognitive and affective signals across modalities, rather than simply providing more information, is a growing challenge.
