Sunday, July 6
 

10:41 CEST

FO1: Hearing Music: A Shared Geometry Governs the Trade-off Between Reliability and Complexity in the Neural Code
Sunday July 6, 2025 10:41 - 11:10 CEST

Pauline G. Mouawad*1, Shievanie Sabesan1, Alinka E. Greasley2, Nicholas A. Lesica1
1The Ear Institute, University College London, London, UK
2School of Music, University of Leeds, Leeds, UK


*Email: p.mouawad@ucl.ac.uk





Introduction
Music is central to human culture, shaping social bonds and emotional well-being. Its unique ability to connect sensory processing with reward, emotion, and statistical learning makes it an ideal tool for studying auditory perception [1]. Previous studies have explored neural responses to speech and to simple musical sounds [2, 3], but the neural coding of complex music remains unexplored. We addressed this gap by analyzing multi-unit activity (MUA) recorded from the inferior colliculus (IC) of normal-hearing (NH) and hearing-impaired (HI) gerbils in response to a range of music types at multiple sound levels. The music types included individual stems (vocals, drums, bass, and other) as well as mixtures in which the stems were combined.
Methods
Using coherence analysis, we assessed how reliably music is encoded in the IC across repeated presentations of stimuli and the degree to which individual stems are distorted when presented in a mixture. To explore neural activity patterns at the network level, we implemented a manifold analysis using PCA. This identified the signal manifold, the subspace where reliable musical information is embedded. To model neural transformations underlying music encoding, we developed a deep neural network (DNN) capable of generating MUA from sound, providing a framework for interpreting how the IC processes music. Finally, to assess the impact of hearing loss, we conducted a comparative analysis of NH and HI responses at equal sound and sensation levels.
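The manifold step can be sketched in a few lines. The example below is a minimal illustration, not the study's pipeline: it fabricates multi-trial "MUA", uses split-half correlation as a simple proxy for the coherence analysis, and takes the PCA dimensionality of the trial-averaged response as a stand-in for the signal manifold. All array shapes, noise levels, and the 90% variance threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for recorded multi-unit activity (MUA):
# 20 repeated trials x 500 time bins x 64 recording channels.
n_trials, n_bins, n_units = 20, 500, 64
signal = rng.standard_normal((n_bins, n_units)) @ np.diag(np.linspace(2, 0.1, n_units))
trials = signal[None] + 0.5 * rng.standard_normal((n_trials, n_bins, n_units))

# Split-half reliability: correlate the mean responses of two disjoint
# halves of the trials, per unit (a simple proxy for coherence analysis).
half_a = trials[::2].mean(axis=0)
half_b = trials[1::2].mean(axis=0)
reliability = np.array([np.corrcoef(half_a[:, u], half_b[:, u])[0, 1]
                        for u in range(n_units)])

# "Signal manifold" via PCA on the trial-averaged response: the subspace
# capturing most of the stimulus-locked (repeatable) variance.
mean_resp = trials.mean(axis=0)
mean_resp -= mean_resp.mean(axis=0)
_, s, _ = np.linalg.svd(mean_resp, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
dimensionality = int(np.searchsorted(np.cumsum(var_explained), 0.90) + 1)

print(f"mean split-half reliability: {reliability.mean():.2f}")
print(f"signal-manifold dimensionality (90% variance): {dimensionality}")
```

In this toy setting, increasing the noise term plays the role of increasing musical complexity: more principal components are needed to reach the variance criterion while the split-half reliability of each unit drops.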
Results
We identified strong nonlinear interactions between stems, affecting both the reliability and geometry of neural coding. The reliability of the responses and the dimensionality of the signal manifold varied widely across music types: as musical complexity increased, the dimensionality of the signal manifold increased while the reliability decreased. The leading modes in the signal manifold were reliable and shared across all music types, but as musical complexity increased, new neural modes emerged that were increasingly unreliable (Figure 1). Our DNN successfully synthesized MUA from music with high fidelity. After hearing loss, neural coding was strongly distorted at equal sound level, but these distortions were largely corrected at equal sensation level.
Discussion

Music processing in the early auditory pathway involves nonlinear interactions that shape the neural representation in complex ways. The signal manifold contains a fixed set of leading modes that are invariant across music types. As music becomes more complex, the manifold is not reconfigured; instead, new, less reliable modes are added. These new modes reflect a fundamental trade-off between fidelity and complexity in the neural code. The fact that suitable amplification restores near-normal neural coding suggests that mild-to-moderate hearing loss primarily affects audibility rather than the brainstem’s capacity to process music.
Figure 1. Complexity and Reliability in the Latent Space
Acknowledgements
Funding for this work was provided by the UK Medical Research Council through grant MR/W019787/1.
References
1. Patrik N Juslin and Daniel Västfjäll. “Emotional responses to music: The need to consider underlying mechanisms”. In: Behavioral and Brain Sciences 31.5 (2008). https://doi.org/10.1017/S0140525X08005293.
2. Vani G Rajendran et al. “Midbrain adaptation may set the stage for the perception of musical
beat”. In: Proceedings of the Royal Society B: Biological Sciences 284.1866 (2017), p. 20171455.
https://doi.org/10.1098/rspb.2017.1455.
3. Shievanie Sabesan et al. “Large-scale electrophysiology and deep learning reveal
distorted neural signal dynamics after hearing loss”. In: Elife 12 (2023), e85108.
https://doi.org/10.7554/eLife.85108.


Auditorium - Plenary Room

14:01 CEST

FO2: Global brain dynamics modulates local scale-free neuronal activity
Sunday July 6, 2025 14:01 - 14:30 CEST

Giovanni Rabuffo*1,2, Pietro Bozzo1, Marco Pompili1, Damien Depannemeacker1, Bach Nguyen2, Tomoki Fukai2, Pierpaolo Sorrentino1, Leonardo Dalla Porta3

1 Institut de Neurosciences des Systèmes (INS), Aix Marseille University, Marseille, France
2Okinawa Institute of Science and Technology (OIST), Okinawa, Japan
3Institute of Biomedical Investigations August Pi i Sunyer (IDIBAPS), Systems Neuroscience, Barcelona, Spain

*Email: giovanni.rabuffo@univ-amu.fr

Introduction

The brain's ability to balance stability and flexibility is thought to emerge from operating near a critical state [1]. In this work we address two major gaps in the “brain criticality hypothesis”:
First, local (between neurons) and global (between brain regions) criticality are often investigated independently, and a unifying framework is lacking.
Second, local neuronal populations do not maintain a strictly critical state but rather fluctuate around it [2]. The mechanisms underlying these fluctuations remain unclear.
To bridge these gaps, we introduce a connectome-based model that allows for a simultaneous assessment of local and global criticality (Fig.1). We demonstrate that long-range structural connectivity shapes global critical dynamics and drives the fluctuations of each brain region around a local critical state.
Methods
Decoupled brain regions are described by a mean-field model [3] that exhibits avalanche-like dynamics under stochastic input (Fig.1, Blue). Brain regions are connected via the Allen Mouse Connectome [4], and simulations are performed for different values of the global coupling parameter [5]. Simulated data consist of fast LFP and slow BOLD signals (Fig.1, Red). The model results are validated against empirical datasets (Fig.1, Gray), including a mouse fMRI dataset [6] and LFP recordings from the Allen Neuropixels dataset [7]. To quantify the fluctuations around criticality, we identified neuronal avalanches as deviations of the local LFP signals below a fixed threshold (Fig.1, Blue) and measured their sizes (area under the curve) and durations (time to return within threshold). The magnitude of the fluctuations around criticality is assessed by analyzing the variance of the range of avalanche sizes across 2s-long epochs.
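The avalanche-detection step described above reduces to finding contiguous excursions of the LFP below a threshold. The sketch below is a minimal stand-in: a toy random-walk "LFP" replaces the simulated signals, and the one-standard-deviation threshold is an illustrative assumption, not the study's value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy LFP trace (a slow random walk, arbitrary units) standing in for the
# simulated/recorded signals; units and threshold are illustrative only.
lfp = rng.standard_normal(10_000).cumsum() * 0.01
lfp -= lfp.mean()
threshold = -lfp.std()  # fixed threshold below which avalanches are defined

# An avalanche = a contiguous excursion of the LFP below threshold.
below = lfp < threshold
edges = np.diff(below.astype(int))
starts = np.flatnonzero(edges == 1) + 1   # excursion onsets
ends = np.flatnonzero(edges == -1) + 1    # first sample back above threshold
if below[0]:
    starts = np.r_[0, starts]
if below[-1]:
    ends = np.r_[ends, lfp.size]

# Size = area between the signal and the threshold; duration = number of
# samples until the signal returns within threshold.
sizes = np.array([(threshold - lfp[s:e]).sum() for s, e in zip(starts, ends)])
durations = ends - starts

print(f"{len(sizes)} avalanches detected")
```

The variance of the range of `sizes` across short epochs would then quantify how strongly a region fluctuates around criticality, as in the analysis above.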
Results
For low global coupling, individual brain regions maintain local criticality (Fig.1, Blue) but remain globally desynchronized. Increasing the coupling induces spontaneous long-range synchronization, paralleled by local fluctuations around criticality (Fig.1, Red). Notably, the working point where the simulations match the experiments corresponds to the regime with the largest range of avalanche sizes and durations (Fig.1, Gray). Strongly connected regions exhibit greater fluctuations around criticality, a testable prediction of the model. To verify this, we examined Allen Mouse Brain Atlas ROIs with LFP data and found a significant correlation between empirical critical fluctuations and regional structural connectivity properties (Fig.1, Green).
Discussion
Our results, comparing brain simulations and empirical datasets across scales, support the brain criticality hypothesis and suggest that criticality is not a static regime of a local neuronal population but is dynamically up- and down-regulated by large-scale interactions.



Figure 1. (Blue) The local neural mass model displays critical-like avalanche dynamics. (Red) Coupling brain regions via the empirical Allen structural connectivity, we simulate fast LFP and slow BOLD global dynamics. (Gray) Simulated LFP displays global critical activity, and simulated BOLD data match fMRI experiments. (Green) The fluctuations around criticality correlate with structural in-strength.
Acknowledgements
We thank the Institut de Neurosciences des Systèmes (INS), Marseille, France, and the Okinawa Institute of Science and Technology, Japan for their generous support and sponsorship of this research. Their contributions have been instrumental in advancing our understanding of brain criticality and its implications.

References
[1] O’Byrne, J., & Jerbi, K. (2022) https://doi.org/10.1016/j.tins.2022.08.007
[2] Fontenele, A. J., et al. (2019) https://doi.org/10.1103/physrevlett.122.208101
[3] Buendía, V., et al., (2021) https://doi.org/10.1103/physrevresearch.3.023224
[4] Oh SW, et al. (2014) https://doi.org/10.1038/nature13186
[5] Melozzi F, et al. (2017) https://doi.org/10.1523/ENEURO.0111-17.2017
[6] Grandjean, J., et al. (2023). https://doi.org/10.1038/s41593-023-01286-8
[7] https://allensdk.readthedocs.io/en/latest/visual_coding_neuropixels.html
Auditorium - Plenary Room
 
Monday, July 7
 

10:41 CEST

FO3: Single-cell optogenetic perturbations reveal stimulus-dependent network interactions
Monday July 7, 2025 10:41 - 11:10 CEST

Deyue Kong*1, Joe Barreto2, Greg Bond2, Matthias Kaschube1, Benjamin Scholl2

1Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
2University of Colorado Anschutz Medical Campus, Department of Physiology and Biophysics, Aurora, Colorado, USA

*Email: kong@fias.uni-frankfurt.de


Introduction
Cortical computations arise through neuronal interactions and their dynamic reconfiguration in response to changing sensory contexts. Cortical interactions are proposed to engage distinct operational regimes that either amplify or suppress particular neuronal networks. A recent study in mouse primary visual cortex (V1) found competitive, suppressive interactions between nearby, similarly-tuned neurons, with the exception of highly-correlated neuronal pairs, which showed facilitatory coupling [1]. It remains unclear whether such feature competition generalizes to cortical circuits with topographic organization, where neighboring neurons within columns exhibit similar tuning to visual features, and distal excitatory axons preferentially target similarly-tuned columns.
Methods
We investigated interactions between excitatory neurons in ferret V1 and how these network interactions depend on stimulus strength (contrast). We recorded the responses of layer 2/3 neurons to drifting gratings of eight directions at two contrast levels using 2-photon calcium imaging, while activating individual excitatory neurons with precise 2-photon optogenetics. We statistically quantified the effect of target photostimulation on neural activity (inferred spike rate) during visual stimulation using a Poisson generalized linear model (GLM). We then used our model to estimate a target’s influence on the surrounding neurons’ activity and their stimulus coding properties.
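The GLM step can be illustrated with a toy Poisson regression. Everything below is a simplified, hypothetical stand-in for the actual design: one recorded neuron, one stimulus regressor, and one binary regressor marking photostimulation of the target cell, fit by Newton-Raphson; the regressors and coefficient values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical trial-level design matrix: intercept, visual drive
# (e.g., contrast), and a binary photostimulation indicator.
n_trials = 2000
stim = rng.uniform(0, 1, n_trials)      # visual drive
photo = rng.integers(0, 2, n_trials)    # 1 = target neuron photostimulated
X = np.column_stack([np.ones(n_trials), stim, photo])

true_beta = np.array([0.2, 1.0, -0.4])  # negative weight = net suppression
y = rng.poisson(np.exp(X @ true_beta))  # simulated spike counts

# Fit the Poisson GLM (log link) by Newton-Raphson on the log-likelihood:
# gradient X^T (y - mu), Hessian -X^T diag(mu) X.
beta = np.zeros(3)
for _ in range(50):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

print("estimated influence of photostimulation:", round(beta[2], 2))
```

The sign and magnitude of the photostimulation coefficient play the role of the "influence" estimate in the analysis: negative values indicate suppression of the neighboring neuron by the stimulated target.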
Results
Our analyses revealed interactions that depended on cortical distance, stimulus properties, and functional similarity between neuron pairs. The influence of photostimulated neurons depended strongly on cortical distance but overall exhibited net suppression. Suppression was weakest between nearby neurons (<100 µm) but was found across large cortical distances. Distance-dependent suppression was reduced when visual stimuli were low contrast. Examining functionally similar neurons, we found that noise correlations between neuron pairs were most predictive of measured interactions, showing a strong shift from amplification to competition: at low contrast, we observed local amplification between noise-correlated excitatory neurons, but increasing contrast led to a predominantly suppressive influence across all distances.
Discussion
Our data support predictions from theoretical models, such as stabilized supralinear networks (SSN), in which networks amplify weak feed-forward input, but sublinearly integrate strong inputs [2,3]. Furthermore, decoding analyses suggest that the contrast-dependent shift from facilitation to suppression correlates with improved decoding accuracy of direction. These findings demonstrate that stimulus contrast dynamically modulates recurrent interactions between excitatory neurons in ferret V1, likely by differentially engaging inhibitory neurons. Such dynamic modulation supports optimal encoding of sensory information within columnar cortices.




Acknowledgements

References
[1] Chettih SN, Harvey CD. Single-neuron perturbations reveal feature-specific competition in V1. Nature (2019). doi:10.1038/s41586-019-0997-6
[2] Rubin DB, Van Hooser SD, Miller KD. The stabilized supralinear network: a unifying circuit motif underlying multi-input integration in sensory cortex. Neuron (2015). doi:10.1016/j.neuron.2014.12.026. PMID: 25611511; PMCID: PMC4344127.
[3] Heeger DJ, Zemlianova KO. A recurrent circuit implements normalization, simulating the dynamics of V1 activity. PNAS (2020). doi:10.1073/pnas.2005417117. PMID: 32843341; PMCID: PMC7486719.
Auditorium - Plenary Room

14:01 CEST

FO4: Automated identification of disease mechanisms in hiPSC-derived neuronal networks using simulation-based inference
Monday July 7, 2025 14:01 - 14:30 CEST

Nina Doorn*1, Michel J.A.M. van Putten1,2, Monica Frega3

1Department of Clinical Neurophysiology, University of Twente, Enschede, The Netherlands

2Department of Neurology and Clinical Neurophysiology, Medisch Spectrum Twente, The Netherlands

3Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genova, Italy


*Email: n.doorn-1@utwente.nl


Introduction
Human induced pluripotent stem cell (hiPSC)-derived neuronal networks on multi-electrode arrays (MEAs) are a powerful tool to study neurological disorders in vitro [1]. The electrical activity patterns of these networks differ between healthy and patient-derived neurons, reflecting underlying pathology (Fig. 1A). However, elucidating the underlying molecular mechanisms is challenging and requires extensive, costly, and hypothesis-driven additional experiments. Biophysical models can link observable network activity to underlying molecular mechanisms by estimating model parameters that reproduce the experimental observations. However, parameter estimation in such models is difficult due to stochasticity, non-linearity, and parameter degeneracy.

Methods
Here, we address this challenge using simulation-based inference (SBI), a machine-learning approach that allows efficient statistical inference of biophysical model parameters using only simulations [2]. We apply SBI to our previously validated biophysical model of hiPSC-derived neuronal networks on MEAs [3], which includes Hodgkin-Huxley-type neurons and detailed synaptic models (Fig. 1B). To train SBI, we simulated 300,000 network configurations, varying key parameters governing synaptic and intrinsic neuronal properties (Fig. 1C). We used a neural density estimator to infer posterior distributions of these model parameters given experimental MEA recordings from healthy, pharmacologically treated, and patient-derived networks (Fig. 1D).
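The logic of simulation-based inference can be sketched with a deliberately simple stand-in. The study trains a neural density estimator on simulations; the example below substitutes rejection ABC, which shares the same structure (prior → simulate → compare to the observation → posterior) without the neural network. The one-parameter simulator, its summary statistics, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy simulator standing in for the biophysical network model: one
# parameter ("synaptic strength") maps to two summary statistics of the
# activity (e.g., mean firing rate and burst rate), with simulator noise.
def simulate(theta):
    rate = 2.0 * theta + rng.normal(0, 0.1)
    bursts = theta**2 + rng.normal(0, 0.1)
    return np.array([rate, bursts])

# "Observed" summary statistics from a hypothetical MEA recording.
theta_true = 0.8
x_obs = np.array([2.0 * theta_true, theta_true**2])

# Rejection ABC, a conceptual stand-in for the neural density estimator:
# draw parameters from the prior, keep draws whose simulations land close
# to the observation, and treat the kept draws as posterior samples.
n_sims = 30_000  # far fewer than the study's 300,000 simulations
thetas = rng.uniform(0, 2, n_sims)
sims = np.array([simulate(t) for t in thetas])
dist = np.linalg.norm(sims - x_obs, axis=1)
posterior = thetas[dist < np.quantile(dist, 0.01)]

print(f"posterior mean {posterior.mean():.2f} +/- {posterior.std():.2f}")
```

A neural density estimator (as used in the study) replaces the accept/reject step with a learned conditional density, which scales this idea to many parameters and reuses all simulations rather than discarding most of them.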

Results
SBI accurately inferred ground-truth parameters in synthetic data and successfully identified known disease mechanisms in patient-derived neuronal networks. In networks from patients with the genetic epilepsies Dravet syndrome and GEFS+, SBI predicted reduced sodium and potassium conductances and increased synaptic depression, which was experimentally verified. In CACNA1A-haploinsufficient networks, SBI correctly identified impaired connectivity. Additionally, SBI detected drug-induced changes, such as prolonged synaptic depression following Dynasore treatment.
Discussion
SBI enables automated and probabilistic inference of biophysical parameters, offering advantages over traditional parameter estimation methods, which can be time-consuming, lack uncertainty quantification, or cannot deal with parameter degeneracy. Our results show how SBI can be used with biophysical models to identify possible disease mechanisms from patient-derived neuronal data. Our proposed analysis pipeline enables researchers to extract crucial mechanistic information from MEA measurements in a systematic, cost-effective, and rapid manner, paving the way for targeted experiments and novel insights into disease.






Figure 1. A) The activity of in vitro neuronal networks cultured from hiPSCs of healthy controls and patients is measured using MEAs. B) The computational model, with biophysical parameters in blue. C) A neural density estimator is trained on model simulations; afterward, experimental data is passed through the estimator to approximate D) the posterior distributions. Adapted from [4].
Acknowledgements
This work was supported by the Netherlands Organisation for Health Research and Development ZonMW; BRAINMODEL PSIDER program 10250022110003 (to M.F.). We thank Eline van Hugte, Marina Hommersom, and Nael Nadif Kasri for providing MEA recordings from patient-derived and genome-edited in vitro neuronal networks.
References
1. https://doi.org/10.1016/j.stemcr.2021.07.001
2. https://doi.org/10.7554/eLife.56261
3. https://doi.org/10.1016/j.stemcr.2024.09.001
4. https://doi.org/10.1101/2024.05.23.595522


Auditorium - Plenary Room
 