Tübingen
SNS2025

Abstracts

Language at a glance: How do our brains order syntactic and semantic computations when no temporal order is imposed from the input?

Liina Pylkkänen, New York University

Language can be expressed and received through various modalities, such as sound, sight, and touch. Each modality has its own temporal dynamics, including different degrees of serialism and parallelism. In neurobiological research on language, we are limited by the fact that for each study, we must choose some specific modality, making it difficult to assess general properties of language versus modality-specific dynamics. Comparing modalities directly is slow and methodologically limited. In our current work, we approach this question from a different angle and ask: How does the language system organize itself when the input lacks temporal sequencing, allowing the brain to order computations in whatever way is natural? We study this using rapid parallel visual presentation (RPVP), where full sentences are flashed all at once, like notifications on a device. Using magnetoencephalography (MEG), we examine how syntax and semantics are computed from such parallel input. Initial results show that grammatical sentences, compared to unstructured input, increase neural activity in left fronto-temporal cortex beginning ~200 ms after onset. By varying linguistic properties, we aim to reveal the spatiotemporal organization of language computations when input imposes no sequence—providing a measure of how input maps to knowledge without predictive temporal context.

Timescale-invariant processing of durational information in speech

Mara Wolter, University of Tübingen

Accurate perception of speech relies on precise extraction and interpretation of durational information. In German, for instance, a 10 ms difference in vowel duration changes word meaning (short /a/, ‘satt’ - ‘full’ vs. long /aː/, ‘Saat’ - ‘seed’). Temporal information, however, must be interpreted in context. First, the duration ranges for categorizing short vs. long depend on a vowel’s spectral content. Second, duration is perceived relative to the surrounding speech rate - for example, when a voice message is played at double speed, all vowels are shortened, yet understood correctly.

We used magnetoencephalography (MEG) to investigate how vowel duration and contextual information are integrated. Behaviorally, both vowel spectral content and contextual speech rate affected categorization of absolute vowel durations as ‘short’ or ‘long’. Neurally, we find larger vowel offset responses for shorter vowels in auditory regions, and sustained representation of contextual speech rate even in speech pauses. Crucially, the effects of vowel duration and rate overlapped both in space (within auditory and inferior frontal regions) and time, with maximal overlap ~230 ms after vowel offset.

Together, our results show that durational information is encoded in response magnitudes in auditory speech cortices, with contextualized representations dominating cortical processing of durational information in speech.

Primate ACC encodes natural vocal turn-taking in a "cocktail party"

Arthur Lefevre, University of Lyon

The Cocktail Party Effect (CPE) is among the most pervasive real-world challenges for communication. How the brain resolves it remains poorly understood, owing to the scarcity of neurobiological studies examining the CPE in the naturalistic environments where the phenomenon was first described. Instead, research has relied on controlled psychoacoustic paradigms, which may overlook brain computations that emerge only in natural contexts.

We hypothesized that the anterior cingulate cortex (ACC), a region involved in vocalization production and perception as well as in social monitoring, encodes the CPE and is thus critical to natural communication. To test this, we studied interacting pairs of marmosets behaving freely in complex social environments while wirelessly recording neurons in their ACC.

We found that, in addition to encoding vocalizations at the single-unit level, ACC population activity could be used to decode vocalization type. Furthermore, analyses revealed that ACC neurons exhibit mechanisms integral to resolving the CPE, including parsing conversational partners from other callers in the environment and encoding turn-taking dynamics.

These results highlight the need to reconsider the role of the ACC in acoustic communication. We suggest that the ACC regulates dynamic vocal interactions in complex socio-acoustic environments by encoding call types and caller identity and by resolving the CPE.

Neural variability in the medial prefrontal cortex contributes to efficient adaptive behavior

Etienne Koechlin, Ecole Normale Supérieure de Paris

Abstract coming soon

The retinal connectome from Eyewire II: Towards understanding cell types and circuits with a large-scale EM dataset

Simone Ebert and Jan Lause, Tübingen

Abstract coming soon

The impact of motor behaviour and internal states on subcortical visual processing

Sylvia Schröder, University of Sussex

The early visual system is thought to efficiently encode visual input, e.g., by adapting to recent stimulus history. But does it also adapt to an animal’s ongoing behaviour like locomotion, which alters the expected stimulus statistics?

Using two-photon calcium imaging, we recorded activity from neurons in the superior colliculus (SC) and from retinal axons in awake mice. Locomotion modulated activity in roughly half of each population. Notably, its influence depended on the stimulus: for motion direction tuning, locomotion induced only linear changes in gain and offset, whereas for motion speed tuning, it systematically shifted neuronal preferences towards higher speeds.
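The distinction drawn above — a linear gain-and-offset change that leaves the preferred direction intact, versus a genuine shift of the preferred speed — can be illustrated with a toy tuning-curve sketch. This is a hypothetical illustration, not the authors' analysis; all curve shapes and parameter values (von Mises direction tuning, log-Gaussian speed tuning, the gain, offset, and preferred-speed values) are assumptions chosen only to make the contrast concrete.

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 361)   # motion directions (rad)
speeds = np.logspace(-1, 2, 301)          # motion speeds (arbitrary units)

def direction_tuning(theta, pref=0.0, kappa=2.0):
    """Von Mises direction tuning curve (arbitrary response units)."""
    return np.exp(kappa * (np.cos(theta - pref) - 1))

def speed_tuning(speed, pref=4.0, sigma=0.5):
    """Log-Gaussian speed tuning curve (arbitrary response units)."""
    return np.exp(-(np.log(speed) - np.log(pref)) ** 2 / (2 * sigma**2))

# Stationary-state responses
r_dir_still = direction_tuning(theta)
r_spd_still = speed_tuning(speeds, pref=4.0)

# Locomotion, scheme 1: multiplicative gain plus additive offset on the
# direction tuning curve (hypothetical values) -- the curve scales and
# lifts, but its peak stays at the same direction.
gain, offset = 1.8, 0.2
r_dir_run = gain * r_dir_still + offset

# Locomotion, scheme 2: the speed tuning preference itself shifts toward
# higher speeds (hypothetical shift from 4 to 10).
r_spd_run = speed_tuning(speeds, pref=10.0)

# Preferred direction is unchanged by gain/offset modulation...
assert theta[np.argmax(r_dir_run)] == theta[np.argmax(r_dir_still)]
# ...whereas the preferred speed moves to a higher value.
assert speeds[np.argmax(r_spd_run)] > speeds[np.argmax(r_spd_still)]
```

The point of the sketch is that the two modulation schemes are distinguishable from the location of the tuning-curve peak alone: gain and offset changes rescale responses without moving the peak, while a preference shift moves it.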

In separate experiments, we examined the temporal dynamics of neural responses to brief visual stimuli using high-density electrophysiology. We observed widespread visual adaptation in the spiking activity of SC neurons and retinal axons projecting to the SC. Locomotion, on average, strengthened visual adaptation in both retinal axons and SC neurons, and accelerated adaptation in SC neurons.

These results demonstrate that behavioural state profoundly shapes visual processing at very early stages of the visual pathway. During locomotion, neurons appear better adapted to encode fast motion and rapid changes in visual stimuli, suggesting an optimization of early visual encoding for behaviourally relevant contexts.

Where next after a decade of ANN models of primate vision? From object categorisation to LLM embeddings and on to self-supervised gaze prediction.

Tim Kietzmann, University of Osnabrück

The human visual system extracts structured representations of scenes through a dynamic, active sampling process. While prior computational models, particularly those trained on object recognition or scene caption embeddings, exhibit promising alignment with cortical responses, they fall short of explaining how such structure emerges developmentally and without excessive supervision. In this project, we introduce a self-supervised learning framework inspired by the mechanisms of natural scene exploration in the brain. The resulting Glimpse Prediction Networks (GPNs) predict future visual input embeddings in a saccade-contingent manner, thus leveraging the sequential and anticipatory nature of active vision as a training signal. Two results stand out: First, GPN models surpass state-of-the-art alignment with fMRI visual cortical responses (as estimated on NSD). Second, their ability to align with the neural data arises from their learned, internal embedding space that enables them to transform an input embedding into a subsequent sensory prediction. These findings offer a compelling model of ventral stream representations, suggesting that representations of natural scenes may be organized around the demands of predicting future input during active vision.

Predictive computations in the human brain

Floris de Lange, Radboud University Nijmegen

We perceive the world by rapidly transforming streams of meaningless sensory signals into meaningful tokens, such as hearing the word ‘brain’ or seeing an apple. How can our brain do this so quickly, efficiently, and robustly? The key that may unlock this ability is prediction: The brain is constantly forming predictions of its input, which are compared with incoming information to update predictions, in a virtuous cycle. In my talk, I will discuss how the brain may implement prediction, drawing on behavioral and neuroimaging evidence. I will also discuss recent work using neuroscience-inspired AI algorithms (Artificial Neural Networks, ANNs), as models of neural information processing to understand predictive processing in more natural worlds.

Roles for visual cortex in high-capacity, one-shot learning

Marlene Cohen, University of Chicago

I will discuss recent work from my lab aimed at uncovering the neural basis of one-shot, high-capacity image recognition memory. Using a recognition task in which monkeys view natural images revealed piece by piece, we compared population responses in visual area V4 on trials with successful versus unsuccessful recognition memory. This design allowed us to separate the dynamics of encoding and retrieval while animals judged whether each image was novel or familiar. We evaluated several proposed coding schemes, including magnitude coding, repetition suppression, sparse coding, and trial-by-trial population response consistency. The strongest predictors of memory performance were response consistency and accelerated response dynamics during familiar trials, signatures of neural population activity typically associated with the hippocampus. These results suggest that mid-level visual cortex reflects memory-related computations earlier in the processing hierarchy than previously appreciated. I will also discuss our preliminary hypotheses about a neural mechanism that could support memory encoding and retrieval in high-capacity tasks, and show early evidence that this process is disrupted in a monkey model of Alzheimer's disease.

Building an idea from its parts

Christopher Summerfield, Oxford University

Humans can assemble knowledge into new configurations. How they do this is a mystery, but it probably involves both hippocampus and neocortex. I will talk about behaviour, modelling and imaging work that tackles this question. I will describe one project in which people learn to stitch together transitive series (ordinal rankings of objects) that have been learned separately. I will also discuss new work in which people combine symbolic knowledge to solve new problems. These projects suggest distinct roles for hippocampus and neocortex in this function.

Modular and distributed computations of decisions and actions

Valerio Mante, University of Zürich

Abstract coming soon

Sensorimotor transformation of number in the primate parietal cortex

Laura Seidler, University of Tübingen

The ability to transform perceived numerical values into corresponding numbers of actions is central to numerical cognition, yet the underlying neural mechanisms remain unclear. We examined this process in rhesus macaques trained to perform a manual counting task, in which visual numerical cues instructed the production of a matching number of hand movements. Single-neuron recordings from the ventral intraparietal area (VIP), a region implicated in numerosity processing, revealed tuning to the number of intended actions during motor planning. VIP neurons exhibited both sustained and transient response profiles, consistent with static and dynamic coding schemes that support sensorimotor transformation. Population decoding analyses confirmed that VIP activity encoded intended action number and captured systematic overestimation and underestimation errors in behavior. These findings demonstrate that VIP neurons bridge numerical perception and motor planning, providing a neural substrate for converting abstract numerical input into goal-directed action.

Dynamical constraints on neural population activity

Emily Oby, Queen's University

The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain’s computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that these activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.

Neuroendocrine and circadian effects of light in humans: Mechanisms and translation

Manuel Spitschan, Max Planck Institute for Biological Cybernetics, Tübingen

Abstract coming soon

Engrams as substrates for long-term information storage

Tomás Ryan, Trinity College Dublin

Abstract coming soon

Infants sleep to form lasting memory

Marion Inostroza, University of Tübingen

Infants learn rapidly, yet memories from early life are typically not accessible later, a phenomenon known as infantile amnesia. This phenomenon has long been attributed to the immaturity of neural memory systems. Our research challenges this view. We show that early-life experiences can persist into adulthood, but only when followed by sleep, suggesting that sleep plays a crucial role in stabilizing memory during development. In rodent models, we found that memories formed in infancy are not stored in hippocampal circuits, but in the medial prefrontal cortex. This discovery reveals a previously unrecognized role for prefrontal networks in long-term memory storage during early development. Importantly, some of these early memories remain latent, not yet expressed at the behavioral level, but can later enhance cognitive performance, demonstrating their lasting functional relevance. Our recent findings show that several mechanisms underlying sleep-dependent memory consolidation are already present in infancy. Slow oscillation–spindle coupling, a hallmark of adult non-REM sleep, appears early, while other features, such as precise spindle–ripple coupling, are not yet fully developed. Despite this immaturity, memory formation still occurs, supported by partially mature but effective oscillatory dynamics.

Together, these findings suggest that the infant brain is not simply immature, but functions as a specialized system for abstraction and long-term integration of experience, with sleep acting as a critical condition for lasting memory formation.

(c) 2025 SNS organizers.

Imprint: FG Siegel, MEG Center, Universitätsklinikum Tübingen, Otfried-Müller-Straße 47, 72076 Tübingen, Deutschland.