Poster Abstracts:
Oliver Maith, Erik Syniawa & Fred Hamker
Chemnitz University of Technology
A Neurocomputational Model of Basal Ganglia-Intralaminar Nuclei Interactions: Implications for Attentional Orienting and Sense of Agency
The sense of agency, the feeling of control over our actions and their outcomes, is a fundamental aspect of human experience. While it is traditionally associated with higher-order cognitive processes, it would be naïve to posit that the sense of agency rests purely on top-down cortical mechanisms. Subcortically, the basal ganglia and the intralaminar thalamic nuclei, in particular the centromedian-parafascicular complex (CM-Pf), form intricate loops with the cortex that are crucial for attentional orienting. However, the exact computational mechanisms underlying these interactions remain unclear.
To elucidate these mechanisms, we present a neurocomputational model of a cortico-basal ganglia-thalamic network that replicates Minamimoto and Kimura’s (2002) findings on attentional orienting. As in their empirical results, our model shows that CM-Pf activity patterns discriminate between predictable and unpredictable sensory events (see the sketch below). The attentional shifts mediated by the CM-Pf also provide a mechanistic explanation for how we distinguish our self-generated (predictable) actions from environment-induced (unpredictable) events. This would be an important mechanism in the formation of associations between our actions and their outcomes.
Keywords: Basal ganglia, CM-Pf complex, bottom-up attention, prediction error, neurocomputational model, sense of agency
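A toy sketch of the discrimination implied above (our illustration, not the published model; all numbers are placeholders): a CM-Pf-like unit whose response scales with surprise, i.e., with the negative log-probability that a learned predictor assigned to the event that actually occurred.

    import numpy as np

    def cmpf_response(predicted_p, event_occurred, gain=1.0):
        # Firing-rate proxy that grows with surprise (unpredictability)
        p = predicted_p if event_occurred else 1.0 - predicted_p
        return gain * -np.log(p + 1e-9)

    # A reliably cued (predictable) event elicits a weak response ...
    print(cmpf_response(predicted_p=0.95, event_occurred=True))  # ~0.05
    # ... while an uncued (unpredictable) event elicits a strong one.
    print(cmpf_response(predicted_p=0.05, event_occurred=True))  # ~3.0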
Erik Syniawa & Fred Hamker
Chemnitz University of Technology
Neurocomputational Modeling of Sensorimotor Integration via Cortico-Basal Ganglia-Thalamic Loops
This study presents a novel neurocomputational model that simulates the development of sensorimotor coordination through the interaction of cortico-basal ganglia-thalamic loops, with a focus on the centromedian nuclei of the intralaminar thalamus. The model implements a motor babbling mechanism to mimic infant-like exploratory movements, facilitating the formation of associations between actions and outcomes (see the sketch below). By incorporating recurrent loops between the cortex, basal ganglia, and thalamus, we demonstrate how the basal ganglia contribute not only to action selection but also to the continuous specification and modulation of movement parameters during execution. This approach supports a modern view of the basal ganglia as playing a more dynamic role in movement (Park et al., 2020). Furthermore, our model elucidates how embodiment and sensorimotor integration can occur at the subcortical level, providing insights into the neural mechanisms underlying the development of motor skills.
Keywords: Basal ganglia, intralaminar thalamus, sensorimotor integration, motor learning, embodiment
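A minimal sketch of the motor-babbling idea (our illustration; the actual model uses full cortico-basal ganglia-thalamic loops, and the linear "body" below is a placeholder): random motor commands are executed, and a Hebbian associator links each command to the sensory outcome it produces.

    import numpy as np

    rng = np.random.default_rng(1)
    n_motor, n_sensory, lr = 8, 8, 0.1
    M = rng.normal(size=(n_sensory, n_motor))   # stand-in "body": fixed forward mapping
    W = np.zeros((n_sensory, n_motor))          # learned action-outcome associations

    def environment(cmd):
        # Toy forward kinematics: sensory outcome of a motor command
        return np.tanh(M @ cmd)

    for _ in range(500):                        # babbling phase
        a = rng.uniform(-1, 1, n_motor)         # random exploratory command
        s = environment(a)                      # observed sensory outcome
        W += lr * np.outer(s, a)                # Hebbian update on co-activity

    # After babbling, W @ a approximates the outcome of a novel command a.
    a = rng.uniform(-1, 1, n_motor)
    print(np.corrcoef(W @ a, environment(a))[0, 1])  # high positive correlation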
Nils Wendel Heinrich
Lübeck University
Does Anticipating Control Loss Prompt Shifts in Action Selection with the Purpose of Maintaining Agency?
Agents can anticipate situations in which their experience of agency may be diminished. The exclusivity criterion of the postdictive account of the Sense of Agency specifically refers to perceiving no potential factors other than the agent’s own action control that might have caused an action. Correctly projecting environmental dynamics is a key ability for anticipating external factors that might lead to the exclusivity criterion not being met. In our Dodge Asteroids experimental environment, participants steer a spaceship through a corridor filled with obstacles. They are tasked with reaching the end of the corridor without crashing while their gaze is tracked. Participants encounter drift sections, indicated by a red bar, which cause a horizontal shift in the spaceship’s movement. We will explore whether participants anticipate drift situations, as indicated by fixations allocated within drift sections before the spaceship enters them. Additionally, we investigate whether participants select actions for which it is easier to differentiate between horizontal movement due to drift and their own steering, thereby increasing the likelihood of meeting the exclusivity criterion. Our findings may deepen our understanding of the interplay between situational awareness, the Sense of Agency, and anticipatory action control.
Josua Spisak
Hamburg University
End-to-End Human-to-Robot Imitation
The capabilities of humanoid robots are continuously improving, with new skills ranging from dancing to pouring being added all the time. While these abilities can be highly impressive, the teaching methods often remain inefficient. To enhance the process of teaching robots, we propose leveraging a mechanism widely observed in the natural world: learning from demonstrations. This poster presents a diffusion architecture that allows a robot to imitate a human demonstrator.
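The abstract does not spell out the architecture; as a generic illustration only, the core of any diffusion policy is the iterative denoising loop below (a DDPM-style sketch with a dummy noise predictor standing in for the trained, demonstration-conditioned network; dimensions and schedule are placeholders).

    import numpy as np

    rng = np.random.default_rng(2)
    T = 50                                      # diffusion steps
    betas = np.linspace(1e-4, 0.02, T)          # noise schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    def denoiser(x_t, t):
        # Placeholder for the trained network that predicts the added noise,
        # conditioned (in the real system) on the human demonstration.
        return np.zeros_like(x_t)

    x = rng.normal(size=(10, 7))                # 10 waypoints x 7 joint angles
    for t in reversed(range(T)):                # standard DDPM sampling loop
        eps = denoiser(x, t)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.normal(size=x.shape)  # sampling noise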
Sara Mohammadi
Bielefeld University
Emergence of ownership and sense of agency is associated with changes in body schema and peripersonal space during active tool-use training across the lifespan
Christoph Schneider
Justus-Liebig-Universität Gießen
Bayesian inference as the basis of sense of agency
The sense of agency, which is the attribution of an action and its outcome to oneself versus outside causes like other agents, is crucial for interacting with the world. It relies on predictions about the sensory consequences of our own actions based on motor representations, so-called reafference or forward-model mechanisms. These mechanisms help the organism to distinguish self-generated sensory signals from those sensory signals that are triggered by external stimulation. Changes to our bodies and within our environments require constant adaptation of behavior, even without awareness. However, it is poorly understood how a stable sense of agency is generated and maintained within such an ever-changing environment.
In this study, we propose a Bayesian observer model that quantifies sensory observations of oneself and of another agent as likelihoods, explaining the formation of the sense of agency within the logic of Bayesian inference. More specifically, we relate sensorimotor performance in goal-directed reaching movements to the model’s likelihood parameters for self versus other (see the sketch below).
To this end, participants (n=25) performed goal-directed movements to hit a puck towards a target within a virtual air hockey game, in which their actions could be perturbed by another, simulated agent. In different phases of the experiment, participants were asked to hit specific targets, to predict the action outcomes of themselves or the other agent, or to provide agency judgments along with confidence ratings of those judgments.
Preliminary results show that the sense of agency depends on the observed action outcomes: in the absence of perturbations by the other agent, participants considered themselves to be the agent of the observed action with high confidence. Both self-agency and confidence ratings diminish with increasing perturbation magnitude. The Bayesian observer model correctly predicts 82.5% (SD 4.8%) of participants’ agency judgments, supporting Bayesian inference as a viable theoretical framework for explaining the formation of a stable sense of agency in a highly dynamic and ever-changing world.
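A minimal sketch of the observer’s core computation (our illustration; the study’s likelihood parameterization may differ, and all values below are placeholders): the likelihood of the observed outcome under a precise "self" prediction is weighed against a broader "other" model.

    import numpy as np

    def gauss(x, mu, sigma):
        # Gaussian likelihood density
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def p_self(outcome, predicted, sigma_self, sigma_other, prior_self=0.5):
        # Posterior probability that the observed outcome was self-generated
        like_self = gauss(outcome, predicted, sigma_self)   # reafferent prediction
        like_other = gauss(outcome, 0.0, sigma_other)       # broad "other" model
        post = like_self * prior_self
        return post / (post + like_other * (1.0 - prior_self))

    # Small deviation from the predicted trajectory -> confident self-attribution;
    # a large perturbation by the other agent -> self-agency drops.
    print(p_self(outcome=0.1, predicted=0.0, sigma_self=0.5, sigma_other=2.0))
    print(p_self(outcome=3.0, predicted=0.0, sigma_self=0.5, sigma_other=2.0))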
Amir Jahanian Najafabadi, Christoph Kayser
Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
Modulating Body and Space Representations During Tool-use Training in Virtual Reality
Previous studies, including our prior research (Jahanian Najafabadi et al., 2023a), have highlighted the significant role of visual feedback in tool embodiment. While an increased sense of agency has been observed as a result of practice effects during virtual tool-use training, this did not extend to ownership. Furthermore, visual feedback has been shown to strongly influence practice effects, contingent on performance level and the integration of the virtual tool into the arm’s representation. To build on these findings, we replicated and extended this research to better understand the extent to which visual feedback contributes to the plasticity of body and space representations. We also explored the association between these changes and the emergence of ownership and agency in young adults. Sixty-eight right-handed participants (aged 18-35) underwent virtual tool-use training, learning to control a virtual mechanical grabber in four training blocks across two days. Participants grasped and moved virtual objects toward themselves in a series of trials (30, 60, 90, 120), conducted at a distance of 120 cm from their body. Before and after tool use, we assessed tactile distance judgment, ownership, agency, and virtual distance perception. The results of this study raise questions about the degree and robustness of the plasticity of body and space representations influenced by visual feedback and address a key methodological gap in the literature.
Keywords: Tool-use, Visual Feedback, Body & Space Representations, Virtual Reality
Valentin Forch
Technische Universität Chemnitz
A neuro-computational model of the rubber hand illusion
The representation of a unified body schema is one of the central topics of research into the self. While it seems clear that the brain integrates multisensory information, and there is ample evidence for the nervous system performing close-to-optimal statistical inference, the mechanistic basis of the processes giving rise to the percept of one’s own body is less well understood. In our work, we developed a biologically grounded model for body schema learning, which entails the integration and alignment of different modalities. We apply our model to the rubber hand illusion setting, where a fake rubber hand is perceived as part of one’s own body. Our computational model uses visual retinocentric and proprioceptive inputs to compute a head-centered representation of the limb. Learning occurs in an unsupervised manner, driven by local Hebbian learning rules that enable the network to adjust synaptic weights based on the correlations between sensory inputs. Through simulations, we demonstrate that our model can replicate key aspects of the rubber hand illusion, such as the spatial displacement of the perceived limb position toward the visual stimulus. These results suggest that the brain can dynamically learn and update body representations based on multisensory input without supervision.
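A heavily simplified sketch of this learning scheme (our toy, one-dimensional illustration; the published model is richer, and all parameters are placeholders): visual and proprioceptive population codes drive a head-centered layer, local Hebbian updates with winner-take-all competition align the maps, and a displaced visual input then pulls the perceived hand position toward the rubber hand.

    import numpy as np

    rng = np.random.default_rng(3)
    n, lr = 20, 0.05
    W_v = rng.uniform(0, 0.1, (n, n))           # visual (retinocentric) weights
    W_p = rng.uniform(0, 0.1, (n, n))           # proprioceptive weights

    def pop_code(x, width=0.1):
        # Gaussian population code for a position in [0, 1]
        return np.exp(-(x - np.linspace(0, 1, n)) ** 2 / (2 * width ** 2))

    for _ in range(2000):                       # unsupervised learning phase
        hand = rng.uniform()                    # vision and proprioception agree
        v = p = pop_code(hand)
        h = W_v @ v + W_p @ p                   # head-centered limb representation
        h = np.where(h == h.max(), h, 0.0)      # winner-take-all competition
        W_v += lr * np.outer(h, v)              # local Hebbian updates ...
        W_p += lr * np.outer(h, p)
        W_v /= np.linalg.norm(W_v, axis=1, keepdims=True)  # ... kept bounded
        W_p /= np.linalg.norm(W_p, axis=1, keepdims=True)

    # Rubber-hand condition: displaced visual input, weighted as more reliable.
    h = 1.5 * (W_v @ pop_code(0.7)) + W_p @ pop_code(0.5)
    win = h.argmax()
    # Decode the winning unit's preferred position from its proprioceptive tuning:
    print(np.linspace(0, 1, n)[W_p[win].argmax()])  # percept pulled toward vision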
Johanna Theuer
Universität Tübingen
Modeling event-predictive crossmodal anticipations
Our minds segment our continuous sensorimotor experience into events in order to structure and interpret it, which includes anticipating upcoming event boundaries and subsequent events. Sensory information from several modalities is integrated and supports projecting sensory observations into the future as event anticipations. This is shown by the anticipatory Cross-Modal Congruency Effect (aCCE): when a finger is stimulated and a distractor light is shown while reaching for an object, before the hand makes contact, incongruent stimulation leads to longer verbal response times in identifying the stimulated finger than congruent stimulation. Visual and tactile information may be integrated and the position of the hand projected into the future, to the boundary from one event (‘reaching’) to the next (‘grasping’). This effect has previously been shown in Virtual Reality studies and is now investigated in the real world. Besides an experimental study, we develop a computational model focusing on the inference processes underlying this effect. The model is based on event schemata and Bayesian inference, i.e., the updating of prior to posterior distributions by integrating sensory evidence with prior information from the generative model (see the sketch below). It shows how event anticipations may develop given limited cognitive resources and may give insight into the dynamics of resource-efficient event-predictive cognition beyond the aCCE.
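The prior-to-posterior updating the model relies on can be stated compactly; a minimal sketch (our illustration with made-up numbers), with belief distributed over two event schemata:

    import numpy as np

    prior = np.array([0.8, 0.2])   # belief over schemata: ['reaching', 'grasping']

    def update(prior, likelihood):
        # Bayesian update: posterior proportional to likelihood times prior
        post = likelihood * prior
        return post / post.sum()

    # Tactile evidence consistent with contact favours the 'grasping' schema
    likelihood = np.array([0.1, 0.9])
    print(update(prior, likelihood))   # belief shifts toward 'grasping'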
Johannes Heidersberger
Technische Universität Wien
Learning to collaborate in a precision task with interpersonal haptic interaction
Understanding human collaboration behavior in tasks with physical interaction is essential for developing approaches for physical human-robot collaboration (HRC). This study investigates learning behavior in a haptic human-human collaboration task with continuous high precision requirements. We examine the extent to which learning of collaboration behavior is partner-specific, how collaborators learn to collaborate by adapting to their partner and by increasing their predictability, and the influence of partner differences on the collaboration. To study these research questions, we employ a collaborative hot wire task, a high-precision manipulation task with interpersonal haptic interaction, in which an object is moved along a predefined path. Results show that although participants perform worse during collaboration compared to solo execution, they learn partner-specific collaboration knowledge over repeated collaboration. Additionally, with repeated collaboration the motion and force profiles become less variable, which increases predictability and consequently facilitates task performance improvement. Adapting to their partner by increasing the similarity of their movements allows the pair to further improve their collaborative performance. Furthermore, individual performance improvement through collaboration depends on the relative proficiency of the individual collaborators. Participants show greater improvement when paired with a partner whose solo performance exceeds their own than when paired with a lower-performing partner.
Firuza Rahimova
Humboldt-Universität zu Berlin
Modelling the Three-Dots Task of Agency in a Pepper Simulation
In the Three-Dots Task of the Sense of Agency (Wen & Haggard, 2020), participants identify which of three dots they are in control of, while the other two move randomly. For Pepper to perform this task, a simulation environment was developed in PyBullet, in which the movement of the virtual Pepper’s right hand corresponds to one of the dots on a virtual screen.
Random exploration data is collected from Pepper in this environment in the One-Dot-Full-Control condition; the joint positions of the right arm, motor commands, and visual information at time t are fed into a CNN to learn a forward model that generates visual predictions for time t+1. During the test phase, we expect the prediction error, i.e., the difference between observed and predicted dot positions, to be minimal for the dot under control (similar to Lang et al., 2018).
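A minimal sketch of the intended test-phase computation (our illustration; the trained CNN is stood in for by a toy forward model, and all values are placeholders): control is attributed to the dot whose observed motion best matches the forward model’s prediction.

    import numpy as np

    def agency_judgment(forward_model, state, command, observed_dots):
        # Attribute control to the dot with the smallest prediction error
        predicted = forward_model(state, command)       # predicted next position
        errors = [np.linalg.norm(predicted - d) for d in observed_dots]
        return int(np.argmin(errors))                   # index of the 'own' dot

    # Toy check with a perfect linear forward model controlling dot 0
    fm = lambda s, c: s + c
    state, command = np.array([0.0, 0.0]), np.array([0.1, 0.0])
    dots = [state + command, np.array([0.3, -0.2]), np.array([-0.4, 0.5])]
    print(agency_judgment(fm, state, command, dots))    # -> 0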
In the next phase of this project, we plan to add a parametric bias unit to the network (similar to Idei et al., 2016) to model lower or higher prior precision within the predictive-coding framework, which might underlie the agency disturbances in schizophrenia patients (Sterzer et al., 2016), who also perform worse than healthy controls in the Three-Dots Task (Oi et al., 2023, preprint).
Markus R. Tünte
Cardio-Ocular Coupling in Infancy
Recent empirical results in adults have demonstrated that eye movements are coupled with the cardiac cycle. Using an active sampling paradigm, Galvez-Pol et al. (2020) demonstrated that more saccades were generated during systole, the first part of the cardiac cycle, while more fixations and blinks were generated during diastole, the second part of the cardiac cycle. A coupling of eye movements and the cardiac cycle might help align cardiac and visual input, which in turn might be beneficial for reacting appropriately in chaotic environments. However, we do not know whether a coupling between the cardiac cycle and eye movements is already present early in life. Here, we present a secondary analysis of data from a recent study investigating interoceptive sensitivity in 3-, 9-, and 18-month-old infants (Tünte et al., 2023). We investigated i) whether there is a unimodal peak of eye movements (blinks, saccades, fixations) across the cardiac cycle, ii) whether more eye movements are generated during systole or diastole, and iii) whether fixations are longer if initiated in systole or in diastole. Our confirmatory analysis finds no evidence for i) a unimodal peak of eye movements across the cardiac cycle. However, we find that ii) more fixations and saccades are generated during systole, with no difference for blinks. Last, in an exploratory analysis we find that iii) fixations are longer if initiated in diastole. Our results are the first to demonstrate cardio-ocular coupling early in human life and provide an important empirical basis for exploring how the cardiac cycle impacts early visual development.
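A minimal sketch of the kind of analysis involved (our illustration, not the study’s pipeline; the systole cut-off below is an arbitrary placeholder): each eye event is assigned a phase within its R-R interval, and events are then binned into systole versus diastole.

    import numpy as np

    def cardiac_phase(event_times, r_peaks):
        # Phase (0..1) of each eye event within its R-R interval;
        # early phases fall roughly in systole, later phases in diastole.
        idx = np.searchsorted(r_peaks, event_times) - 1
        rr = np.diff(r_peaks)
        return (event_times - r_peaks[idx]) / rr[idx]

    # Hypothetical R-peak times (s) and saccade onsets
    r_peaks = np.array([0.0, 0.8, 1.6, 2.4])
    saccades = np.array([0.2, 1.0, 1.9])
    phases = cardiac_phase(saccades, r_peaks)
    print((phases < 0.4).sum(), "saccades in the (approximate) systole window")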
Gianluigi Giannini
We are not only passively immersed in a sensory world; we are also active agents that directly produce stimulation. Understanding what is special about the sensory consequences one produces oneself can give valuable insight into the action-perception cycle. Sensory attenuation is the phenomenon whereby self-produced stimulation is perceived as less intense than externally generated stimulation. Studying this phenomenon, however, requires controlling a plethora of factors that could otherwise interfere with its interpretation, such as differences in stimulus properties, spatial attention, or temporal predictability. We therefore developed a novel Virtual Reality (VR) setup that allows several of these confounding factors to be controlled. Furthermore, we modulated the expectation of receiving a stimulation across self-production and passive perception through a simple probabilistic learning task, allowing us to test to what extent the electrophysiological correlates of sensory attenuation are affected by stimulus expectation. We obtained electroencephalography (EEG) recordings from 26 participants. Results indicate that early (P100), mid-latency (P200), and later negative contralateral potentials were significantly attenuated for self-generated sensations, independently of stimulus expectation. Moreover, one component at around 200 ms post-stimulus at frontal sites was enhanced for self-produced stimuli. The P300 was influenced by stimulus expectation, independently of whether the stimulation was actively produced or passively attended. Together, our results indicate that VR opens up new possibilities to study sensory attenuation in more ecological yet well-controlled paradigms, and that sensory attenuation is not significantly modulated by stimulus predictability.
