Sunday, July 6
 

P002: Understanding aging in terms of memory: Beyond excitation-inhibition balance
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Srishty Aggarwal*1

1Department of Physics, Indian Institute of Science, Bangalore, India, 560012

*Email: srishtya@iisc.ac.in


Introduction
Recently, non-linear dynamical techniques like Higuchi’s fractal dimension (HFD) have gained prominence for understanding neural complexity. We previously demonstrated that HFD increases with aging and is inversely dependent on changes in the power and slope of the power spectral density (PSD) [1]. However, findings regarding changes in HFD with aging are inconsistent in the literature [1,2], making their interpretation in terms of neural mechanisms ambiguous. Moreover, while the age-related reduction in PSD slope and gamma-band (30-70 Hz) power indicates a shift towards lesser inhibition with aging [3], the reason for the slowing of the gamma center frequency with aging remains unclear. These observations emphasize the need for a theoretical model that extends beyond excitation-inhibition (E-I) balance to explain HFD and aging.

Methods
We propose a two-parameter model based on stochastic fractional differentiation that exhibits power-law scaling and long-range dependence, important characteristics of neurophysiological signals. In this model, one parameter governs E-I balance, while the other, the order of differentiation, captures the influence of past states. A decrease in the order of differentiation indicates an increased weighting of past memory states, which could be the effect of changes in long-term plasticity.
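A minimal sketch of such a model, assuming a Grünwald-Letnikov discretization of the fractional derivative and illustrative parameter values (the abstract does not give its equations), with an HFD estimate for the resulting series:

```python
import numpy as np

def simulate_fractional_process(alpha=0.8, lam=1.0, sigma=1.0,
                                n_steps=5000, dt=0.001, seed=0):
    """Explicit Grunwald-Letnikov scheme for D^alpha x = -lam*x + noise.
    alpha < 1 lengthens the memory of past states; lam plays the role
    of the E-I balance parameter here (illustrative, not the authors' values)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_steps)                      # GL binomial weights
    w[0] = 1.0
    for j in range(1, n_steps):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    x = np.zeros(n_steps)
    for n in range(1, n_steps):
        memory = np.dot(w[1:n + 1], x[n - 1::-1])       # weighted past states
        drive = -lam * x[n - 1] + sigma * rng.standard_normal() / np.sqrt(dt)
        x[n] = dt ** alpha * drive - memory
    return x

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1-D time series."""
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k * k)
            lengths.append(lm)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope

x = simulate_fractional_process(alpha=0.8)
print("HFD:", higuchi_fd(x))   # vary alpha to probe the memory-HFD relation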
Results

The model shows that the order of differentiation is inversely related to HFD. Thus, the previously observed increase in HFD with aging is due to greater memory accumulation over time in the elderly population. Further, the model shows that memory accumulation, not just the change in E-I balance, is the primary reason for the age-related reduction in stimulus-induced gamma power, the decrease in gamma center frequency [3], and the flattening of spectral slopes at low frequency [4]. Our model successfully accounts for the observed changes in HFD across different stimulus conditions, including transients and sustained oscillations. It also reproduces the observed dependence of HFD on both peak power and spectral slope. Additionally, it offers a unified framework that simultaneously captures changes in oscillatory peaks and slopes, an advance over previous models that typically address only one of these aspects.

Discussion

The present model highlights the presence of two components of neural activity: memory and E-I balance. By demonstrating that these components contribute differently to brain dynamics, our findings provide a new perspective on how neural complexity evolves with aging and stimulus-driven processes. The model’s simplicity in terms of its parameter space and its ability to explain a wide range of empirical findings make it a promising framework for unravelling the intricate mechanisms of brain function.




Acknowledgements
I would like to thank my advisors Prof. Banibrata Mukhopadhyay, Department of Physics, and Prof. Supratim Ray, Centre for Neuroscience, for useful discussions and comments on the present work.
References
[1] S. Aggarwal and S. Ray, bioRxiv, Jun. 16, 2024. doi: 10.1101/2024.06.15.599168.
[2] F. M. Smits, C. Porcaro, C. Cottone, A. Cancelli, P. M. Rossini, and F. Tecchio, PLOS ONE, vol. 11, no. 2, p. e0149587, Feb. 2016. doi: 10.1371/journal.pone.0149587.
[3] D. V. P. S. Murty et al., NeuroImage, vol. 215, p. 116826, Jul. 2020. doi: 10.1016/j.neuroimage.2020.116826.
[4] S. Aggarwal and S. Ray, Cerebral Cortex Communications, vol. 4, no. 2, p. tgad011, May 2023. doi: 10.1093/texcom/tgad011.



P003: Digital Twins Enable Early Alzheimer’s Disease Diagnosis by Reconstructing Neurodegeneration Levels from Non-Invasive Recordings
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Lorenzo Gaetano Amato1,2*, Michael Lassi1,2, Alberto Arturo Vergani1,2, Jacopo Carpaneto1,2, Valentina Moschini3, Giulia Giacomucci4, Benedetta Nacmias4,5, Sandro Sorbi4,5, Antonello Grippo4, Valentina Bessi4, Alberto Mazzoni1,2

1The BioRobotics Institute, Sant’Anna School of Advanced Studies, Piazza Martiri della Libertà 33, 56127, Pisa, Italy
2Department of Excellence in Robotics and AI, Sant’Anna School of Advanced Studies, Pisa, Italy, Piazza Martiri della Libertà 33, 56127, Pisa, Italy
3Skeletal Muscles and Sensory Organs Department, Careggi University Hospital, Largo Brambilla 3, 50134, Florence, Italy
4Department of Neuroscience, Psychology, Drug Research and Child Health, Careggi University Hospital, Largo Brambilla 3, 50134, Florence, Italy
5IRCSS Fondazione Don Carlo Gnocchi, Via di Scandicci 269, 50143, Florence, Italy


*Presenting author: lorenzogaetano.amato@santannapisa.it

Introduction
Early detection of Alzheimer’s disease (AD) is essential for timely intervention and improved patient outcomes. However, current diagnostic methods, including cerebrospinal fluid (CSF) analysis and neuroimaging techniques, are often invasive, costly, and unsuitable for large-scale population screenings. Neural recordings like electroencephalography (EEG) provide a non-invasive alternative [1], yet conventional EEG analysis struggles to identify cortical alterations associated with AD at preclinical stages. To address these limitations, we propose a novel approach based on digital twin models that extract personalized digital biomarkers from non-invasive neural recordings.

Methods
We developed the DADD (Digital Alzheimer’s Disease Diagnosis) digital twin model to estimate individual neurodegeneration levels from non-invasive neural recordings [2]. EEG recordings were collected in resting-state and task conditions from 145 participants across various stages of cognitive decline, including healthy controls (HC), subjective cognitive decline (SCD), and mild cognitive impairment (MCI). Through model inversion, DADD reconstructed personalized neurodegeneration parameters from experimental recordings (Fig. 1). Personalized parameters were employed as digital biomarkers to predict CSF biomarker positivity and conversion to clinical cognitive decline, comparing their diagnostic power to that of traditional EEG analysis.
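The inversion step can be pictured with a minimal sketch: a forward model maps a neurodegeneration parameter to simulated EEG features, and the parameter minimizing the mismatch with the observed features becomes the digital biomarker. The forward model below is a hypothetical toy (1/f background plus a slowing alpha peak), not the actual DADD model of [2]:

```python
import numpy as np

def forward_model(theta, freqs):
    """Toy forward model: maps a neurodegeneration level theta to a
    simulated EEG power spectrum (illustrative assumption)."""
    alpha_peak = np.exp(-(freqs - (10.0 - 2.0 * theta)) ** 2 / 2.0)
    return (1.0 - 0.5 * theta) * alpha_peak + 1.0 / freqs

def invert(observed_psd, freqs, grid=np.linspace(0.0, 1.0, 101)):
    """Grid-search inversion: return the theta whose simulated
    spectrum best matches the observed one (least squares)."""
    errors = [np.sum((forward_model(t, freqs) - observed_psd) ** 2)
              for t in grid]
    return grid[int(np.argmin(errors))]

freqs = np.linspace(1.0, 40.0, 200)
rng = np.random.default_rng(1)
observed = forward_model(0.35, freqs) + 0.01 * rng.standard_normal(200)
print("estimated neurodegeneration level:", invert(observed, freqs))
```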

Results
The DADD model significantly outperformed standard EEG analysis in identifying AD-related neurodegeneration. It increased the classification accuracy between HC and MCI by 20% and between HC and SCD by 8% compared to conventional EEG measures. Digital biomarkers also improved the identification of individuals positive for CSF biomarkers of AD by 30% and the prediction of future clinical conversion by 33% relative to EEG features, highlighting their potential as prognostic markers. Notably, the model also shed light on the structural underpinnings of disease progression, revealing a neurodegeneration-driven transition between distinct regimes of network efficiency and functional connectivity that was backed by experimental EEG data.

Discussion
These findings establish digital twin models as powerful tools for non-invasive AD diagnosis and prognosis. By leveraging EEG-derived digital biomarkers, our approach supports classification of MCI, assessment of AD pathology, and estimation of cognitive decline risk with unprecedented accuracy. The ability of digital twins to replicate individual brain dynamics provides deeper insights into disease progression, bridging the gap between network structure and cognitive outcomes. This method represents a scalable and cost-effective solution for early AD detection, potentially facilitating widespread clinical implementation and improving patient management strategies.



Figure 1. Experimental EEGs are compared with simulated signals through model inversion, enabling the identification of a personalized set of model parameters for each patient. These parameters are then utilized as digital biomarkers to aid in patient classification and diagnosis.
Acknowledgements
This project is funded by Tuscany Region - PRedicting the EVolution of SubjectIvE Cognitive Decline to Alzheimer’s Disease With machine learning - PREVIEW, CUP D18D20001300002.



References
1. A. Horvath, A. Szucs, G. Csukly, A. Sakovics, G. Stefanics, A. Kamondi, EEG and ERP biomarkers of Alzheimer’s disease: a critical review. Front. Biosci. Landmark Ed. 23, 183–220 (2018).

2. L. G. Amato, A. A. Vergani, M. Lassi, C. Fabbiani, S. Mazzeo, R. Burali, B. Nacmias, S. Sorbi, R. Mannella, A. Grippo, V. Bessi, A. Mazzoni, Personalized modeling of Alzheimer’s disease progression estimates neurodegeneration severity from EEG recordings. Alzheimers Dement. Diagn. Assess. Dis. Monit. 16, e12526 (2024).

P004: Synergistic high-order statistics in a neural network is related to task complexity and attractor characteristics
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Ignacio Ampuero1, Javier Díaz1, Patricio Orio1,2,3
1Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile.
2Instituto de Neurociencia, Facultad de Ciencias, Universidad de Valparaíso, Valparaíso, Chile
3Advanced Center for Electrical and Electronic Engineering AC3E, Valparaíso, Chile.

Email: patricio.orio@uv.cl
Introduction
Understanding how collective functions emerge in the brain is a significant challenge in neuroscience, as emergent behaviors (or their disruptions) are believed to underlie consciousness, behavioral outputs, and brain disorders. Information theory provides tools that can be used to measure high-order interactions (HOIs): statistical structures that are present in a group of variables but not in pair-wise interactions. It is unknown how these measurable emergent behaviors can originate and be sustained, contributing to information processing. To this end, we study the self-emergence of HOIs in recurrent neural networks (RNNs) that undergo plasticity to learn to perform cognitive tasks of different complexity.

Methods
We trained continuous-time RNNs to perform one of the following tasks: Go/NoGo, negative patterning, temporal discrimination, and context-dependent decision making. After network training, a long-duration input consisting of either noise or a series of task inputs was applied to evaluate the dynamics of the hidden layer. HOIs were evaluated using the O-info and S-info metrics implemented in the JIDT toolbox [1] with the KSG estimator, at different orders of interaction, taking all combinations of 3 to 11 nodes (a sketch of these metrics appears after the Discussion). The dimension of the trajectory was assessed by the amount of variance explained by the first 5 PCA components. Graph metrics were employed to characterize the weight matrix of the hidden layer.

Results
Training causes the dynamics of the hidden layer to show HOIs with high redundancy at higher orders of interaction and synergistic interactions at lower orders (i.e., smaller groups). More synergy is observed after training on the compound, context-dependent task, while more redundancy originates from the simpler Go/NoGo task. The existence of synergistic interactions also correlates with more complex dynamics, as suggested by a trajectory of higher dimension. Finally, we tested different pruning procedures to obtain sparser weight matrices, without observing an effect on the measured HOIs.
Discussion
Our results show that the type of task a network is solving determines a different pattern of HOIs, suggesting that complex tasks induce the emergence of synergistic interactions. In the future, it will be of interest to study how HOIs emerge in networks trained to solve multiple tasks, and how HOIs relate to the resilience of the network to noisy or faulty conditions. In addition, more case studies will be explored to assess whether the synergistic nature of HOIs always correlates with trajectories of higher dimension.
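For reference, O-information and S-information can be written in terms of joint and marginal entropies: O = TC - DTC and S = TC + DTC, where TC is the total correlation and DTC the dual total correlation. A minimal sketch under a Gaussian entropy assumption (the abstract uses JIDT's nonparametric KSG estimator instead):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian (nats)."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

def o_and_s_info(data):
    """O-info and S-info of data (samples x variables), Gaussian assumption.
    Negative O-info -> synergy dominates; positive -> redundancy."""
    n = data.shape[1]
    cov = np.cov(data, rowvar=False)
    h_all = gaussian_entropy(cov)
    tc = dtc = 0.0
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        tc += gaussian_entropy(cov[[i]][:, [i]])        # marginal H(X_i)
        dtc += gaussian_entropy(cov[np.ix_(rest, rest)])  # H(X_{-i})
    tc -= h_all
    dtc -= (n - 1) * h_all
    return tc - dtc, tc + dtc   # (O-info, S-info)

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 3))
x[:, 2] = x[:, 0] + x[:, 1] + 0.1 * rng.standard_normal(2000)  # synergistic motif
print(o_and_s_info(x))          # O-info should come out negative (synergy)
```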




Acknowledgements
This work is funded by Fondecyt grant 1241469 (ANID, Chile). AC3E is funded by Basal grant AFB240002 (ANID, Chile)
References
[1] J. T. Lizier, "JIDT: An information-theoretic toolkit for studying the dynamics of complex systems", Frontiers in Robotics and AI, 1:11, 2014. doi: 10.3389/frobt.2014.00011 (pre-print: arXiv:1408.3270).

P005: Graph-Based AI Models for Predicting Olfactory Responsiveness: Applications in Olfactory Virtual Reality
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Jonas G. da Silva Junior, Meryck F. B. da Silva, Ester Souza, João Pedro C. G. Fernandes, Cleiver B. da Silva, Melina Mottin, Arlindo R. Galvão Filho, Carolina H. Andrade*


Advanced Knowledge Center in Immersive Technologies (AKCIT), Federal University of Goiás (UFG), Goiânia, Brazil


*Email: carolina@ufg.br
Introduction

Olfactory perception enhances virtual reality (VR) immersion by evoking emotions, triggering memories, and improving cognitive engagement. While VR primarily focuses on sight and sound, integrating scent deepens the sense of presence and supports training and rehabilitation for sensory loss [1]. However, olfactory stimuli interact nonlinearly with receptors through competitive binding, making perception complex. We used artificial intelligence (AI) and graph-based modeling to improve the prediction of olfactory responses, enhancing olfactory virtual reality (OVR) realism. Recent studies highlight the importance of multisensory integration in VR, showing that combining olfactory, visual, and auditory stimuli significantly enhances user immersion [2,5]. This study utilizes experimental data and computational neuroscience to understand olfactory receptor responsiveness through AI models, while investigating differences between real-world and OVR olfactory responses.

Methods
The m2OR database [3] (51,483 OR-odorant interactions) was used to develop predictive models of olfactory responsiveness (Figure 1). We filtered the dataset to retain only Homo sapiens data and vectorized molecular representations using RoBERTa for SMILES and ProtT5 for receptor sequences. Graph-based approaches, including biological network wheels and interactomes, were employed to analyze receptor-ligand responsiveness. Predictive models were constructed using GINE, integrating receptor-ligand clustering and shortest-path analyses. Recent advancements in AI have demonstrated the potential of deep learning for mapping human olfactory perception, providing a robust foundation for our approach [6]. In our research, we are currently developing biofeedback techniques, such as eye tracking, electroencephalography (EEG), and functional magnetic resonance imaging (fMRI), to assess user responses in OVR [4].
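A minimal sketch of the final prediction stage, assuming precomputed 128-D ligand and receptor embeddings (stand-ins for the RoBERTa/ProtT5 and GINE outputs; the graph encoder itself is not reproduced):

```python
import torch
import torch.nn as nn

class ResponsivenessHead(nn.Module):
    """Hypothetical fusion head: concatenated ligand + receptor
    embeddings -> logit for OR activation."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, ligand_emb, receptor_emb):
        return self.mlp(torch.cat([ligand_emb, receptor_emb], dim=-1))

model = ResponsivenessHead()
ligand = torch.randn(8, 128)          # batch of ligand embeddings
receptor = torch.randn(8, 128)        # batch of receptor embeddings
logits = model(ligand, receptor)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits.squeeze(-1), torch.randint(0, 2, (8,)).float())
print(loss.item())
```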

Results & Discussion
Our GINE-based model demonstrated superior performance, achieving an accuracy of 0.81, ROC AUC of 0.88, and balanced accuracy (BAC) of 0.81, reflecting an optimal balance between sensitivity and specificity. Among the tested models (GNN, GCN, GINE, GraphSAGE), GINE stood out for its ability to capture complex receptor-ligand interactions, aligning with the goal of accurately predicting olfactory responsiveness. These results validate the effectiveness of graph-based models for digital olfactory simulations, advancing OVR applications in training, rehabilitation, and sensory immersion.




Figure 1. General workflow: (1) Ligand Module: Ligand structures (SMILES) are converted into graph representations and processed via GCN, GNN, GINE and VAE to generate 128D embeddings. (2) Protein Module: OR primary sequences undergo similar processing to produce 128D feature embeddings. (3) Prediction Model: Ligand-protein embeddings are integrated using entropy maximization, a fully connected la
Acknowledgements
We gratefully acknowledge the support of the Advanced Knowledge Center in Immersive Technologies (AKCIT) and EMBRAPII for funding the project ’SOFIA: Sensorial Olfactory Framework Immersive AI’ (Grant 057/2023, PPI IoT/Manufacturing 4.0 / PPI HardwareBR, MCTI). We also thank our collaborators and institutions for their invaluable contributions to this research.
References

[1] https://doi.org/10.1038/s41467-024-50261-9
[2] https://doi.org/10.1021/acsomega.4c07078
[3] https://doi.org/10.1093/nar/gkad886
[4] https://doi.org/10.1038/s41598-023-45678-1
[5] https://doi.org/10.3389/frvir.2023.123456
[6] https://doi.org/10.1038/s41593-023-01234-5



P006: Neuromimetic models of mammalian spatial navigation circuits learn to navigate in complex simulated environments
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Haroon Anwar1, Christopher Earl3, Hananel Hazan4, Samuel Neymotin1,2

1Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
2Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA.
3Department of Computer Science, University of Massachusetts, Boston, MA, USA.
4Allen Discovery Center, Tufts University, Boston, MA, USA.
Email: haroon.anwar@gmail.com


Introduction

Hippocampal place cells and entorhinal grid cells play a central role in navigation. Grid cells support vector-based navigation relying primarily on internally generated motion-related cues like speed and head direction, whereas place cells, mainly driven by external sensory cues, capture relationships among temporal and spatial cognitive variables. Most theoretical models [1-3] capture physiological properties of grid and place cells but lack learning and spatial navigation functions. In this work, we extend theoretical models to incorporate learning and function. Our aim is to increase understanding of the neural basis of navigation and to use it to improve fully autonomous or hybrid artificial systems with humans in the loop.

Methods
We use integrate-and-fire neuron models to represent head-direction (North, South, East, West), motion-direction (Forward, Backward, Left, Right), landmark, conjunctive, place, and motor neurons. The number of conjunctive cells scales with the number of landmark cells and is adjusted to ensure unique landmark encoding relative to the agent’s orientation. Initially, all conjunctive cells form weak connections to place cells. As the agent navigates, only synapses from activated conjunctive cells to place cells are strengthened, forming place fields. Consequently, synapses from place cells to motor neurons representing rewarding actions are potentiated via reward-based spike-timing dependent plasticity [4], guiding the agent toward its target.
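A minimal sketch of the reward-gated plasticity step from place cells to motor neurons, assuming exponentially decaying eligibility traces and a simplified coincidence rule (the full STDP kernels and parameters of [4] are omitted):

```python
import numpy as np

def rstdp_step(w, pre_spikes, post_spikes, elig, reward,
               tau_e=0.5, dt=0.001, a_plus=0.01, lr=0.1):
    """One time step of reward-modulated plasticity: spike coincidences
    build an eligibility trace; reward converts it into a weight change."""
    elig *= np.exp(-dt / tau_e)                     # decay traces
    elig += a_plus * np.outer(pre_spikes, post_spikes)  # coincidence term
    w += lr * reward * elig                         # reward-gated update
    return np.clip(w, 0.0, 1.0), elig

n_place, n_motor = 50, 4
w = 0.1 * np.ones((n_place, n_motor))               # weak initial synapses
elig = np.zeros_like(w)
rng = np.random.default_rng(0)
for t in range(1000):
    pre = (rng.random(n_place) < 0.02).astype(float)   # place-cell spikes
    post = (rng.random(n_motor) < 0.02).astype(float)  # motor spikes
    reward = 1.0 if t % 200 == 199 else 0.0            # sparse reward signal
    w, elig = rstdp_step(w, pre, post, elig, reward)
```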
Results
Our modeling results highlight the strengths of place cell-based navigation models in learning complex pathways. While grid cell-based models alone struggle with complex and multi-linear navigation, place cell-based models - integrating inputs from grid circuits - demonstrate superior learning capabilities. The capacity of our place cell-based model to encode diverse places and environments scales with the number of landmark and conjunctive cells included. Additionally, our findings suggest that non-Hebbian synaptic plasticity mechanisms may play a crucial role in the development of place fields, further enhancing navigational learning.
Discussion
Although our place cell-based navigation model successfully learns how to navigate in complex environments, its capacity is limited by the categories of neurons utilized. Such limitations are inherent to our modeling approach, which requires predefining the number of neurons, neuron types, and synaptic plasticity mechanisms. We encountered scaling challenges due to all-to-all weak connections from conjunctive cells to place cells. Once a place field is established, all remaining weak connections to that place cell must be removed to prevent spurious activation outside its designated field. To address these constraints, we plan to incorporate structural plasticity rules in future models to remove excessively weak synaptic connections.




Acknowledgements
Research supported by ARL Cooperative Agreement W911NF-22-2-0139 and ARL/ORAU Fellowships

References
[1] Burak Y, Fiete IR (2009) Accurate path integration in continuous attractor network models of grid cells. PLoS Comput Biol 5(2): e1000291.
[2] Giocomo LM, Moser M-B, Moser EI (2011) Computational models of grid cells. Neuron 71, 589-603.
[3] Bush D, Barry C, Manson D, Burgess N (2015) Using grid cells for navigation. Neuron 87, 507-520.
[4] Hasegan D, Deible M, Earl C, D’Onofrio D, Hazan H, Anwar H, Neymotin S (2022) Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front Comput Neurosci 16:1017284.



P007: AI4MS: A Deep Learning Approach for Multimodal Prediction of Multiple Sclerosis Progression
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Shailesh Appukuttan*1,2, Adrien Amberto1, Mounir Mohamed El Mendili1, Bertrand Audoin1,3, Ismail Zitouni1, Audrey Rico1,3, Hugo Dary1, Maxime Guye1, Jean-Philippe Ranjeva1, Ronan Sicre4, Jean Pelletier1,3, Wafaa Zaaraoui1, Matthieu Gilson2, Adil Maarouf1,3

1Aix Marseille Univ, CNRS, CRMBM, Marseille, France
2Aix Marseille Univ, CNRS, INT, Marseille, France
3APHM, Hôpital de la Timone, Maladie Inflammatoire du Cerveau et de la Moelle Epinière (MICeME), Marseille, France
4University of Toulouse, CNRS, IRIT, France.

*Email: shailesh.appukuttan@univ-amu.fr

Introduction:
Multiple Sclerosis (MS) is a chronic neurological disorder of the central nervous system. Disease progression in MS can be highly variable, and its reliable prediction has a huge impact on optimizing individualized treatment plans. Traditionally, MRI-based assessments rely heavily on clinical expertise. However, with the notable recent advancements in AI, AI-based approaches offer potential for improving the accuracy and reproducibility of such predictions [1]. With the AI4MS project, we aim to develop and validate a deep-learning model that integrates multimodal MRI and clinical data to improve MS prognosis prediction. Our approach incorporates advanced deep learning architectures to enhance predictive power, with a focus on clinical applicability by targeting explainable models.

Methods:
In this project we leverage a cohort of 300+ MS patients who have been followed for over 10 years. We have access to multimodal MRI (T1w, T2w) as well as the associated clinical data (such as EDSS and MSFC scores that quantify disease severity) [2]. The deep-learning model employs a 3D ResNet to extract spatial features from the MRI images, while a bidirectional recurrent network (GRU) with time-aware attention is used to incorporate temporal dynamics. The decision of the model is explained by means of a saliency map that identifies the parts of the images influencing the classification, obtained with a CAM-based interpretability method [3].
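A minimal sketch of this spatial-temporal architecture in PyTorch, assuming a small Conv3d encoder as a stand-in for the full 3D ResNet and a simple softmax attention over visits (all sizes illustrative):

```python
import torch
import torch.nn as nn

class MRIProgressionModel(nn.Module):
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for the 3D ResNet
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # attention score per visit
        self.head = nn.Linear(2 * hidden, 1)   # SAD logit

    def forward(self, scans):
        # scans: (batch, n_visits, 1, D, H, W) -> per-visit features
        b, t = scans.shape[:2]
        feats = self.encoder(scans.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.gru(feats)               # temporal dynamics
        alpha = torch.softmax(self.attn(seq), dim=1)
        context = (alpha * seq).sum(dim=1)     # attention-pooled visits
        return self.head(context)

model = MRIProgressionModel()
x = torch.randn(2, 3, 1, 32, 32, 32)           # 2 patients, 3 visits each
print(model(x).shape)                          # torch.Size([2, 1])
```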
Results:
In our preliminary tests, we use CNN-based models to predict Sustained Accumulation of Disability (SAD) [4] using data from a subset of the patients (n = 104) and only employing the EDSS clinical scores. Data are grouped into triplets of visits to capture how the disease progresses over time. We systematically test different models to evaluate the prediction capability of each MRI modality, as well as the effect of data selection/augmentation on the cross-validated classification accuracy, to test the generalization capability of the prediction pipeline. The study also suggests the need to incorporate additional clinical measures (e.g., MSFC scores) and MRI-based metrics to capture a more holistic representation of disease progression.

Discussion:
The AI4MS project aims to build on our preliminary findings and overcome its limitations. We adopt a more multimodal approach by integrating diverse clinical and imaging data. The model is developed in a modularized manner, with spatial and temporal components being trained separately. This promises to ensure better learning and efficiency. Visualization tools, such as heatmaps and saliency maps, are incorporated to enhance interpretability of the model predictions. The project also explores various data augmentation techniques to address any problems of data scarcity and imbalance. The AI4MS project aims to assist clinicians with reliable predictions to guide individualized treatment plans for MS patients.



Acknowledgements
All MRI acquisitions were funded by Fondation ARSEP. This project has received funding from the Excellence Initiative of Aix-Marseille Université - AMidex, a French “Investissements d’Avenir programme” AMX-21-IET-017 (via the institutes NeuroMarseille and Laënnec). We would also like to thank AMU mésocentre for access to HPC resources.
References
[1] https://doi.org/10.1038/s41591-018-0300-7
[2] http://doi.org/10.1186/1471-2377-14-58
[3] https://doi.org/10.1007/s11263-019-01228-7
[4] https://doi.org/10.1093/brain/aww173

P008: Data-Driven Functional Analysis of a Mammalian Neuron Type Connectome
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Giorgio A. Ascoli*1

1Center for Neural Informatics, George Mason University, Fairfax (VA), USA


*Email: ascoli@gmu.edu

Introduction

The increasing availability of dense connectomes enables unprecedented opportunities for the quantitative investigation of neural circuitry. Although these advances are essential to reveal the architectural principles of biological neural networks, they fall short of providing a complete accounting of functional dynamics. To understand the computational role of specific neuron types within this structural blueprint, connectivity must be complemented by essential physiological parameters quantifying intrinsic excitability as well as synaptic transmission.

Methods
Communication between a pair of neuron types can be characterized to a first approximation by (1) their connection probability; (2) the pre-synaptic cell count; (3) the post-synaptic conductance peak value and sign (excitatory vs. inhibitory); (4) the decay time constant (signal duration); and (5) the input-output function of the post-synaptic neuron type. If these data could be measured or estimated experimentally for each neuron type pair, it should then be possible to compute signal propagation throughout the network from any arbitrary stimulation. We have collected all the above parameters from experimental measurements for every known neuron type in the rodent hippocampal-entorhinal formation (hippocampome.org).
Results
This framework allows one to calculate the instantaneous firing rate of each neuron type based on its input-output function and total input current; the total input current corresponds to the sum of charge transfer from all of its presynaptic partners; and the charge transfer from each partner can be derived by multiplying the peak conductance, time constant, and presynaptic firing rate at the immediately preceding time. Extending this calculation to all neuron types based on their connectivity yields the evolution of activity dynamics across the entire network as a function of time.
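A minimal sketch of this rate-propagation rule, assuming illustrative parameter arrays (the real values are those curated at hippocampome.org) and a saturating threshold-linear input-output function:

```python
import numpy as np

n_types = 4
rng = np.random.default_rng(0)
p_conn = rng.random((n_types, n_types)) * 0.2        # connection probability
counts = rng.integers(100, 1000, n_types)            # presynaptic cell counts
g_peak = rng.normal(0.0, 1.0, (n_types, n_types))    # signed peak conductance
tau = rng.random((n_types, n_types)) * 0.02          # decay time constants (s)

def io_fn(current):
    """Saturating threshold-linear input-output function (a common choice)."""
    return np.clip(current, 0.0, 100.0)

def step(rates):
    # charge transfer i -> j: probability * count * conductance * time constant,
    # weighted by the presynaptic rate at the preceding time
    charge = p_conn * counts[:, None] * g_peak * tau
    total_input = rates @ charge                      # sum over presynaptic i
    return io_fn(total_input)

rates = np.abs(rng.normal(5.0, 1.0, n_types))        # arbitrary stimulation (Hz)
for t in range(50):
    rates = step(rates)                              # network dynamics over time
print(rates)
```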
Discussion
The described approach allows a functional connectomic analysis of a whole mammalian cortical circuit at the neuron type level. This first approximation should then be refined based on short- and long-term synaptic plasticity, signal delays, and non-linearities in charge transfer integration. Possible applications include graph-theoretic analysis of activity dynamics and multiscale modeling linking whole neural system level to single-neuron compartmental simulations.




Acknowledgements
NIH grant R01 NS39600
References
https://hippocampome.org

P009: Exponential increase of engram cardinality with cell assembly overlap
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Jonah G. Ascoli1, Giorgio A. Ascoli2, Rebecca F. Goldin*3

1Lake Braddock Secondary School, Burke, VA USA
2Center for Neural Informatics, George Mason University, Fairfax (VA), USA
3Mathematical Sciences, George Mason University, Fairfax (VA), USA

*Email: rgoldin@gmu.edu


Introduction
Coding by cell assemblies in the nervous system is widely believed to provide considerable computational advantages, including pattern completion and loss resilience [1]. With disjoint cell assemblies, these advantages come at the cost of severely reduced storage capacity relative to single-neuron coding. We prove analytically and demonstrate numerically that allowing a minimal overlap of shared neurons between cell assemblies dramatically boosts network storage capacity.
Methods
Consider a network of n neurons and an assembly size of k neurons. Fix a nonnegative number t < k. The network capacity C is the engram cardinality: the maximum number of cell assemblies of size k with any two assemblies intersecting in no more than t neurons.
We find a lower bound for C using a constructive algorithm. More specifically, we use Lagrange interpolation to construct sets of size k using graphs of polynomials over finite fields. The sets have pairwise intersections no larger than t due to a foundational theorem in algebra. We use standard techniques in combinatorics to determine an upper bound on the network capacity.
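A minimal sketch of the construction for t = 1, assuming n = k² neurons with k prime: the graphs of degree-at-most-1 polynomials over the field F_k, together with the k "vertical" assemblies, realize k(k+1) assemblies of size k with pairwise overlap at most 1 (two distinct polynomials of degree at most t agree in at most t points):

```python
import numpy as np
from itertools import combinations

def build_assemblies(k):
    """Assemblies as sets of neuron ids in a k*k grid (n = k^2 neurons)."""
    assemblies = []
    for a in range(k):                 # slope
        for b in range(k):             # intercept
            # graph of p(x) = a*x + b over F_k; neuron id = x*k + p(x)
            assemblies.append({x * k + (a * x + b) % k for x in range(k)})
    for c in range(k):                 # vertical assemblies x = c
        assemblies.append({c * k + y for y in range(k)})
    return assemblies

k = 5                                  # prime assembly size, n = 25 neurons
asm = build_assemblies(k)
print(len(asm))                        # k*(k+1) = 30 assemblies
max_overlap = max(len(s & u) for s, u in combinations(asm, 2))
print(max_overlap)                     # pairwise overlap <= t = 1
```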
Results
We describe the order of magnitude of growth of the network capacity of a system with n neurons, assembly size k, and pairwise overlap of size t. In the special case where n equals k², k is prime, and t = 1, we find that the capacity is k(k+1), a (k+1)-fold increase over the easily observable network capacity of k when t = 0. We prove more generally that, when t² is smaller than k, the network capacity grows like (n/k)^(t+1), meaning it is exponential in t+1 and polynomial in n/k. Without the constraint that t is less than the square root of k, we show that the network capacity grows like (n/k)^(t+1) multiplied by e raised to a term of order t²/k. We design a constructive algorithm that generates sets that actualize the lower bound of the network capacity.
Discussion
Estimates of cell assembly sizes in rodent brains range from ~150 to ~300 [2], with larger values in humans. Recent computational work showed that cell assemblies remain representationally distinct when sharing up to 5% of their neurons [5], corresponding to t > 7 when k = 150. For a network of size n ~ 20,000, similar to the smallest subregions of the mouse brain [3], we obtain an engram cardinality of ~1.7×10^15. With ~8 distinct mental states per second, corresponding to cortical theta rhythms [4], the engram cardinality is more than 7 orders of magnitude greater than what would suffice to store every single experience in a rodent’s lifetime.




Acknowledgements
This work was supported in part by National Science Foundation (NSF) #2152312 and National Institutes of Health (NIH) R01 NS39600.
References

[1] A. Choucry, M. Nomoto, K. Inokuchi. Engram mechanisms of memory linking and identity. Nature Reviews Neuroscience, 25(6):375-392, Jun 2024.
[2] L. de Almeida, M. Idiart, J. E. Lisman. Memory retrieval time and memory capacity of the CA3 network. Learning & Memory, 2007.
[3] D. Krotov. A new frontier for Hopfield networks. Nature Reviews Physics, 5(7):366–367, Jul 2023.
[4] P. Fries. Rhythmic attentional scanning. Neuron, 111(7):954–970, Apr 2023.
[5] J. D. Kopsick, J. A. Kilgore, G. C. Adam, G. A. Ascoli. Formation and retrieval of cell assemblies. bioRxiv, 2024.

P010: The Three Attractor Problem: Rest State Manifold
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Anastasios-Polykarpos Athanasiadis*1, Marmaduke Woodman1, Spase Petkoski1, Viktor Jirsa1

1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
*Email: anastasios-polykarpos.athanasiadis@univ-amu.fr
Introduction

Brain activity during rest is organized into spatio-temporal coactivation patterns [1]. This emergent order can be seen as the result of self-organized activity, as the brain transiently shifts from incoherent dynamics to coherent, oscillatory dynamics [2,3,4]. Although such activity is expected to be governed by meaningful low-dimensional manifolds, that description is still missing [5]. In this study we show that the resting-state manifold follows the deformation of the underlying energy landscapes as the dynamics alternate between a low coherence state (LCS) and a high coherence state (HCS).
Methods
Blood-oxygen-level-dependent (BOLD) signals from 200 healthy subjects were analyzed [6]. Instantaneous phase coherence identified the LCS and HCS [7]. Temporal organization was quantified using mean dwell times, fractional occupancy, and transition probability matrices. After removing spatiotemporal outliers, stationary density functions were extracted via the first principal component (PC) of whole-brain activity. Bayesian hierarchical modeling fitted reduced quadratic potential functions [8] to infer the stationary dynamics of resting-state networks (RSNs). Model comparison using the Bayesian information criterion quantified candidate model fit. State-space modeling then characterized the geometry and flow of two-dimensional manifolds [9].
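A minimal sketch of the potential-fitting step, assuming the stationary Fokker-Planck form p(x) ∝ exp(-2U(x)/σ²) with a polynomial double-well potential standing in for the reduced quadratic potentials of [8], and a simple least-squares fit in place of the Bayesian hierarchical one:

```python
import numpy as np
from scipy.optimize import curve_fit

def potential(x, a, b):
    return a * x ** 2 + b * x ** 4       # a < 0, b > 0 gives bistability

def stationary_density(x, a, b, sigma):
    p = np.exp(-2.0 * potential(x, a, b) / sigma ** 2)
    return p / np.trapz(p, x)            # normalized stationary PDF

# toy "empirical" density generated from a bistable ground truth
x = np.linspace(-3, 3, 200)
observed = stationary_density(x, -1.0, 0.5, 1.0)
popt, _ = curve_fit(stationary_density, x, observed, p0=(-0.5, 0.3, 0.8),
                    bounds=([-5.0, 0.01, 0.1], [5.0, 5.0, 5.0]))
print("fitted (a, b, sigma):", popt)     # fitted a < 0 flags bistability
```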
Results
We showed that although the HCS is transient in nature, it generates a richer variety of coactivation patterns. Spatially, across the first PC, globally and within the RSNs, the HCS stationary dynamics were bistable, contrasting with monostability for the LCS. Moreover, the HCS and LCS were driven by the activity of the sensory-motor/dorsal attention and association networks, respectively (Figure 1). These two findings support the idea that active inference takes place during the HCS [10], with bistability as the best model for interacting with the environment. Incorporating the second PC, we constructed the RSNs’ manifolds, in which bistability turned into degenerate solutions that formed approximate continuous attractors.
Discussion
Resting-state activity is the most widely used paradigm in functional neuroimaging research. In addition to enhancing our understanding of its underlying dynamics and geometry, our work introduces novel metrics that can serve as comparable features, providing a comprehensive basis for distinguishing healthy controls from clinical populations.



Figure 1. The fitted Fokker-Planck probability density functions (PDFs) inherit the form of the quadratic potential functions that correspond to the dynamics of the RSNs. A) The variance of the PDFs quantified how dynamic and activated the different RSNs are, showing a clear alignment with the cortical hierarchy, which reverses from HCS to LCS. B) The stability was quantified with the criticality parameter.
Acknowledgements
Funded by the European Union (Grant agreement No 101057429).



References

[1] http://dx.doi.org/10.1038/s41586-023-06098-1
[2] http://dx.doi.org/10.1088/0031-9112/28/9/027
[3] http://dx.doi.org/10.1098/rstb.2000.0560
[4] http://dx.doi.org/10.1038/s41467-018-05316-z
[5] http://dx.doi.org/10.1038/s41598-024-83542-w
[6] https://doi.org/10.25493/F9DP-WCQ
[7] http://dx.doi.org/10.1038/s41598-017-05425-7
[8] https://doi.org/10.1101/621540
[9] https://doi.org/10.1201/9780429493027
[10] http://dx.doi.org/10.1088/2632-072X/ac4bec



P011: A method to assess individual photoreceptor contributions to cortical computations driving visual perception in mice
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

David D. Au1, Joshua B. Melander2, Javier C. Weddington2, and Stephen A. Baccus1
1Department of Neurobiology, Stanford University, Palo Alto, USA

2Neurosciences PhD Program, Stanford University, Palo Alto, USA
Email: dau2@stanford.edu
Introduction

Vision is one of our most important sensory systems, driving our evolution and adaptation to survive in different environments. Studies of the visual system have focused on how rod and cone inputs encode simple, artificial visual stimuli in the retina and primary visual cortex (V1). Yet, complex retinal and cortical visual computations that encode natural scenes receive contributions from multiplexed photoreceptors, including melanopsin-expressing intrinsically photosensitive ganglion cells [1–2], whose effects are poorly understood. Thus, understanding how melanopsin responses converge with other inputs under natural scenes is useful for understanding how visual inputs are encoded and decoded in the early visual system with ethological relevance.


Methods
We record melanopsin-specific responses in V1 using in vivo Neuropixels in head-fixed mice viewing natural scenes, modified to achieve photoreceptor silent substitution. This method isolates melanopsin activation by spectrum-selective manipulation of one photoreceptor (melanopsin) while controlling the activation of the others (s-, m-cones). A low melanopsin condition (M-) removes the color component vector projected on the melanopsin spectral tuning curve in each pixel, and a melanopsin-only condition (M*) removes or reduces the component along the s- and m-cones. Stimuli are presented at light levels between low (8×10^12 photons/cm²/s) and high (8×10^14 photons/cm²/s) conditions. We assume these intensity conditions saturate rods.
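The linear algebra behind silent substitution can be sketched minimally: given a matrix mapping display primaries to photoreceptor activations, solve for the primary modulation that changes melanopsin activation while leaving cone activations fixed. The 3×3 sensitivity matrix below is hypothetical, not measured:

```python
import numpy as np

S = np.array([[0.9, 0.1, 0.0],     # s-cone sensitivity to each primary
              [0.1, 0.8, 0.3],     # m-cone
              [0.0, 0.4, 0.7]])    # melanopsin

def substitute(primaries, d_mel):
    """Change melanopsin activation by d_mel while keeping s- and
    m-cone activations constant (rods assumed saturated)."""
    target_change = np.array([0.0, 0.0, d_mel])
    dp = np.linalg.solve(S, target_change)   # required primary modulation
    return primaries + dp

base = np.array([0.5, 0.5, 0.5])
stim = substitute(base, 0.2)
print(S @ stim - S @ base)   # ~[0, 0, 0.2]: only melanopsin activation changes
```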

Results
We find that mouse V1 responses to natural scene stimuli are complex and vary widely across laminar structures, suggesting specific neuronal subpopulations that modulate computations for distinct visual features. These responses, however, arise from a combination of photoreceptor inputs, and we are attempting to understand how individual photoreceptors contribute to visual encoding and decoding. Our implementation of in vivo Neuropixels electrophysiology with a natural virtual reality recording environment and photoreceptor silent substitution-rendered stimuli shows distinct neural responses that we think reflect melanopsin activation. Silencing melanopsin activation also shows activity differences in V1 under natural scenes.

Discussion
Our preliminary results indicate that melanopsin activation contributes to complex computations that encode and decode natural scene stimuli in mouse V1. Computational models of these responses also indicate specialized neurons tuned to unique visual features, like locomotion and color. However, additional experiments and deeper analyses are required to probe this phenomenon. Using electrophysiology and cutting-edge computational modeling, this work helps establish how multiplexed inputs that depart from the classical image-forming system improve image representation and stimulus discriminability under natural visual scenes.





Acknowledgements
This work was supported by grants from the National Institutes of Health’s National Eye Institute (NEI), R01EY022933, R01EY025087, P30EY026877 (awarded to SAB), F32EY036275, and a private Stanford fellowship 1246913-100-AABKS (awarded to DDA).
References
1. Allen AE & Lucas RJ. (2014). Melanopsin-Driven Light Adaptation in Mouse Vision. Curr Biol. 24(21):2481–2490. https://doi.org/10.1016/j.cub.2014.09.015
2. Davis KE & Lucas RJ. (2015). Melanopsin-Derived Visual Responses under Light Adapted Conditions in the Mouse dLGN. PLOS ONE. 10(3):e0123424. https://doi.org/10.1371/journal.pone.0123424



P012: Real-Time Temporal Code-driven Stimulation using Victor-Purpura Distance for Studying Spike Sequences in Neural Systems
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Alberto Ayala*1, Angel Lareo1, Pablo Varona1, Francisco B. Rodriguez1
1Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Spain
*Email: alberto.ayala@uam.es

Introduction
Most neural systems encode information by stereotyped sequences of spikes linked to specific functions (e.g., see [1-4]). However, their inherent variability introduces temporal variations even in spike sequences with the same function (i.e., those produced by the same underlying dynamic state). The temporal code-driven stimulation protocol [5-8] can be used to explore the functional equivalence of these sequences via their controlled detection, and subsequent stimulation. Different sequences are considered functionally equivalent when stimulation upon detection elicits comparable responses [9]. We used this protocol to detect a specific state by its spike sequences in the Hindmarsh-Rose (HR) model [10] and drive it toward a distinct state.

Methods
The protocol acquires a neural signal in real time, discretizes it to a binary code, and delivers stimulation upon detecting a trigger code [11, 12]. For each system-produced code, the Victor-Purpura distance [13] to a target detection is computed. When this distance falls below a predefined threshold, stimulation is triggered, allowing for a controlled level of variability. The protocol's performance was assessed for real-time use, and two experiments were conducted: i) it detected variable sequences of the HR model bursting state and delivered stimulation to generate brief bursts (target dynamic state), and ii) the stimulation induced a regular dynamic state (second control goal) emerging from the model set in a chaotic regime.
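The Victor-Purpura distance [13] between two spike trains is the minimum cost of transforming one into the other, with spike insertions/deletions costing 1 and shifting a spike by Δt costing q·|Δt|. A minimal dynamic-programming sketch:

```python
import numpy as np

def victor_purpura(s1, s2, q=1.0):
    """Victor-Purpura spike-train distance via dynamic programming."""
    n, m = len(s1), len(s2)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1)       # delete all spikes of s1
    d[0, :] = np.arange(m + 1)       # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = q * abs(s1[i - 1] - s2[j - 1])
            d[i, j] = min(d[i - 1, j] + 1,          # delete a spike
                          d[i, j - 1] + 1,          # insert a spike
                          d[i - 1, j - 1] + shift)  # shift a spike in time
    return d[n, m]

a = [0.010, 0.120, 0.250]            # spike times (s)
b = [0.015, 0.130, 0.400]
print(victor_purpura(a, b, q=100.0)) # small q ~ rate code, large q ~ timing code
```

The threshold on this distance sets the level of temporal variability tolerated in a detection, which is the knob the protocol exposes.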
Results
The real-time performance tests indicated that the protocol can operate at frequencies of up to 20 kHz and detect codes of up to 50 bits at a fixed frequency of 10 kHz, fulfilling the temporal requirements for studying temporal coding in neural systems. The two experiments discussed above validated the protocol's ability to detect a specific dynamic state in the activity of the HR model, accounting for the intrinsic variability, and to drive it toward a target state. Finally, the closed-loop stimulation protocol outperformed an open-loop approach (where no specific code precedes the stimulation) in driving the system toward the target states in both experiments.

Discussion
The closed-loop stimulation protocol studied in this work was validated for real-time use. Two experiments proved that the protocol can detect variable sequences emerging from the same underlying dynamic states and drive neural activity toward a target state through activity-dependent stimulation. Consequently, it allows for the study of neural codes with an equivalent function in real time. It does so by detecting temporally variable sequences of spikes that trigger stimulation. If system responses are comparable, it suggests that the neural codes detected before stimulation convey the same information. Therefore, this protocol can be employed to study temporal coding in neural systems while accounting for their intrinsic variability.




Acknowledgements
This research was supported by grants PID2024-155923NB-I00, CPP2023-010818, PID2023-149669NB-I00, PID2021-122347NB-I00 (MCIN/AEI and ERDF – “A way of making Europe”), and a grant from the Departamento de Ingeniería Informática at the Escuela Politécnica Superior of Universidad Autónoma de Madrid.
References
[1] https://doi.org/10.3389/fncom.2022.898829
[2] https://doi.org/10.1016/S0928-4257(00)01103-7
[3] https://doi.org/10.1016/j.neunet.2003.12.003
[4] https://doi.org/10.1016/j.anbehav.2003.10.031
[5] https://doi.org/10.1007/s10827-022-00841-9
[6] https://doi.org/10.1007/978-3-031-34107-6_43
[7] https://doi.org/10.1007/978-3-031-63219-8_21
[8] https://doi.org/10.1007/s12530-025-09670-4
[9] https://doi.org/10.1152/jn.00829.2003
[10] https://doi.org/10.1098/rspb.1984.0024
[11] https://doi.org/10.3389/fninf.2016.00041
[12] https://doi.org/10.1007/978-3-319-59153-7_9
[13] https://doi.org/10.1152/jn.1996.76.2.1310

P013: Targeted striatal activation and reward uncertainty promote exploration in mice
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Jyotika Bahuguna1*, Julia Badyna2,3, Krista A. Bond4, Eric A. Yttri3*, Jonathan E. Rubin3,5*, Timothy D. Verstynen1,3*



1LNCA, Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
2Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, PA, US
3Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
4Psychiatry, Yale, New Haven, CT
5Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
*Email: jyotika.bahuguna@gmail.com , timothyv@andrew.cmu.edu , eyttri@andrew.cmu.edu , jonrubin@pitt.edu

Introduction

Decision policies, which moderate what choices are made and how fast they are executed, are influenced by contextual factors such as uncertainty about reward or sudden changes in action-outcome contingencies. To help resolve the mechanisms involved, we explored a critical neural substrate, namely the dSPNs and iSPNs of the striatum, which are known to modulate both the choice and vigor aspects of decision making [1, 2]. We also explored whether the modulation of decision policies is aimed at optimizing reward rate.
Methods
We manipulated two forms of contextual uncertainty -- the relative difference in reward probability between options (conflict), and unexpected changes in action-outcome contingencies (volatility) -- as D1-cre and A2A-cre mice underwent optogenetic stimulation of striatal direct pathway (dSPN) or indirect pathway (iSPN) spiny projection neurons. The trial-by-trial behavioral outcomes (choice and decision times) were fit to a hierarchical drift diffusion model (DDM) [3], using a Bayesian delta rule model [4,5] as a trialwise regressor on DDM parameters. The values of the DDM parameters obtained, in particular drift rate and boundary height, provided an estimate of the instantaneous decision policy on each trial.
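A minimal sketch of the Bayesian delta rule regressor [4,5]: a delta-rule belief update whose learning rate rises with change-point probability and residual uncertainty. The two driving quantities below are illustrative toys, not the exact hazard-rate expressions of [4]:

```python
import numpy as np

def delta_rule_step(belief, outcome, cpp, ru):
    """Adaptive delta rule: learning rate grows with change-point
    probability (cpp) and relative uncertainty (ru)."""
    lr = cpp + (1.0 - cpp) * ru
    return belief + lr * (outcome - belief)

rng = np.random.default_rng(0)
belief, ru = 0.5, 0.5
for t in range(200):
    p_reward = 0.8 if t < 100 else 0.2      # contingency switch (volatility)
    outcome = float(rng.random() < p_reward)
    surprise = abs(outcome - belief)
    cpp = surprise ** 4                     # toy change-point probability
    belief = delta_rule_step(belief, outcome, cpp, ru)
    ru = 0.9 * ru + 0.1 * cpp               # uncertainty relaxes after switch
print("final belief:", belief)              # tracks the new contingency
```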
Results
We found that during stable environmental periods unstimulated mice maintained a high drift rate and high boundary height, reflecting relatively exploitative decision strategies (Fig. 1B). When action-outcome mappings switched, both drift rate and boundary height quickly dropped, reflecting a shift to fast exploratory decision policies (Fig. 1B). These modulations in decision policy reflect a drive to maintain immediate reward rate (Fig. 1A). We see the same shift in decision policies as a result of increased conflict: as the reward probabilities become uncertain, the trajectories shift deeper into the exploration regime, again reflecting the drive to maintain reward rate (Fig. 1C). iSPN stimulation shifted animals into overall more exploratory states, with lower drift rates, but altered the response to change points such that boundary height increased instead of decreasing (Fig. 1D). We characterized this regime as slow exploration. dSPN stimulation did not seem to affect decision policies.
Discussion
These results suggest that reward and environmental uncertainty modulate the decision policy to be more exploratory, and that this modulation reflects the drive to maintain the reward rate. Moreover, amplifying striatal indirect pathway activity fundamentally shifts how animals change decision policies in response to environmental feedback, promoting a slowing of the exploration strategies that are adopted.




Figure 1. A) DDM manifolds showing how accuracy, reaction times and reward rate change with changes in DDM parameters. B) Mice show an exploitative policy in stable conditions but switch to exploration during contingency changes. C) High conflict pushes the behavior towards the exploration regime. D) iSPN stimulation imposes a slow exploration policy on mice whereas dSPN stimulation does not have a significan
Acknowledgements
JB is supported by ANR-CPJ-2024DRI00039. TV, JBad, JBah, EAY and JER are partly supported by NIH awards R01DA053014 and R01DA059993 as part of the CRCNS program. JER is partly supported by NIH award R01NS125814, also part of the CRCNS program.
References
[1] Freeze, B. S., Kravitz, A. V., Hammack, N., Berke, J. D., & Kreitzer, A. C. (2013). https://doi.org/10.1523/JNEUROSCI.1278-13.2013
[2] Geddes, C. E., Li, H., & Jin, X. (2018). https://doi.org/10.1016/j.cell.2018.06.012
[3] Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). https://doi.org/10.3389/fninf.2013.00014
[4] Nassar, M. R., Wilson, R. C., Heasly, B., & Gold, J. I. (2010). https://doi.org/10.1523/JNEUROSCI.0822-10.2010
[5] Vaghi, M. M., Luyckx, F., Sule, A., Fineberg, N. A., Robbins, T. W., & De Martino, B. (2017). https://doi.org/10.1016/j.neuron.2017.09.006

P014: Investigating the mechanisms underpinning behavioral resilience using an extended Multi-agent Reinforcement learning model
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti


Chirayush Mohanty*, Priya Gole*, Sanket Houde, Aadya Umrao, Pragathi Priyadharsini Balasubramani
Translational Neuroscience and Technology Labs, IIT Kanpur, India

*co-first authors
Email: cmohanty21@iitk.ac.in

Introduction
Reinforcement learning models of choice behavior focus on expected-reinforcement-based learning and decision making and, to our knowledge, have not explored well the reward maximization strategy that is controlled by energy constraints and social constraints, or whether subjective policy relates to someone's ability to adapt well during difficult times. In particular, we asked whether a participant's risk taking, the influence of resources (intake of food energy) on decisions, or social conformity bias can explain their resilience levels.
Methods
Here, for the first time, we performed a repeated experimental design, before and after the lunch period, on school kids aged 13-15 years (N=32, males = 21), followed by computational modeling to understand the effects of risk-taking ability, food energy resource modulation, and conformity with a partner's choices in our participants. The task tested the participant's trade-off between maximization of reward magnitude and frequency (losses/gains), as in Balasubramani et al. (2022). We also obtained information on participants' personality through the Big 5 questionnaire, adapted for the participants' age. We built a multi-agent reinforcement learning (MARL) model to investigate the relationship between the meta-parameters: exploration index, social conformity bias computed based on the marginal value theorem, and resource level index, in explaining the choice dynamics.
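A minimal sketch of one agent in such a MARL model, assuming a softmax policy whose logits mix the agent's own action values with the partner's observed choice frequencies via a conformity bias; all parameter names and values are illustrative, not the fitted ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 2
q = np.zeros(n_actions)                     # agent's own action values
partner_freq = np.ones(n_actions) / n_actions
alpha, beta, conformity = 0.3, 3.0, 0.22    # learning rate, inverse temp, bias

def choose():
    """Softmax over logits blending own values with partner's choices."""
    logits = beta * ((1 - conformity) * q + conformity * partner_freq)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(n_actions, p=p)

for t in range(500):
    a = choose()
    reward = float(rng.random() < (0.7 if a == 0 else 0.3))
    q[a] += alpha * (reward - q[a])          # delta-rule value update
    partner_choice = rng.integers(n_actions)  # stand-in for partner behavior
    partner_freq *= 0.95                      # leaky choice-frequency tracker
    partner_freq[partner_choice] += 0.05
print("learned values:", q)
```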
Results
We found that the extent of reward magnitude maximization of choices correlated with resilience (Spearman r=0.37, p=0.035), and social conformity (r = -0.27, p = 0.12) was fairly related to resilience as well. In particular, the extent of choosing the option with frequent losses was negatively related to openness and extraversion (p<0.001), while the extent of choosing the minimum expected reward with maximum risk was related to neuroticism (p=0.001). Our MARL model was fit to capture the reward maximization and social conformity behavior, and it provided a population exploration index of 0.85 ± 0.12 across blocks, and a social conformity or influential bias of 0.22 ± 0.83 (0 ± 0.82) in the competitive (cooperative) block, respectively.
Discussion
Our MARL model finds that increased resilience in our population may be explained by two distinct, block-dependent patterns. Social bias did not seem to matter for resilience in the cooperation block; rather, a higher exploration index related to resilience levels. In the competitive block, by contrast, resilience was exhibited by those who conform to others' values and explore less, or those who do not conform with others but explore more. Furthermore, resilience levels were positively related to the social conformity bias measures, and interestingly, we find that the increase in resource availability post lunch specifically increased the extent of social bias.




Acknowledgements
We are thankful to the Kendriya Vidyalaya school at IIT Kanpur, Principal R.C. Pandey, and all supporting teachers for giving us the permission and assisting us to conduct this study.

References


1. Balasubramani, P. P.*, Walke, A., Grennan, G., Purpura, S., Perley, A., Ramanathan, D., Coleman, T., & Mishra, J. (2022). Simultaneous gut-brain electrophysiology shows cognition and satiety specific coupling. Sensors, 22(23), 9242. https://doi.org/10.3390/s22239242 *corresponding
2. Balasubramani, P. P.*, Diaz-Delgado, J., Grennan, G., Alim, F., Zafar-Khan, M., Maric, V., ... & Mishra, J.* (2022). Distinct neural activations correlate with maximization of reward magnitude versus frequency. Cerebral Cortex, bhac482. https://doi.org/10.1093/cercor/bhac482 *corresponding



P015: Dynamic Causal Modelling in Probabilistic Programming Languages
Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti

Nina Baldy1*, Marmaduke Woodman1, Viktor Jirsa1, Meysam Hashemi1


1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France

*Email: nina.baldy@univ-amu.fr
Introduction

Dynamic Causal Modeling (DCM) [1] is a key methodology in neuroimaging for understanding the intricate dynamics of brain activity. It imposes a statistical framework that embraces causal relationships among brain regions and their responses to experimental manipulations, such as stimulation. In this work, we perform Bayesian inference on a neurobiologically plausible model that simulates event-related potentials observed in magneto-/electroencephalography data [2]. This translates into probabilistic inference of the latent and observed states of a system described by a set of nonlinear ordinary differential equations (ODEs) with potentially correlated parameters.
Methods
Central to DCM is Bayesian model inversion, which aims to infer the posterior distribution of model parameters given the prior and observed data. Variational inference translates this into an optimization problem by approximating the posterior with a fixed-form density [3]. We consider three Gaussian approximations: the mean-field approximation, which neglects correlations between parameters; its full-rank counterpart; and the analytical Laplace approximation. We benchmark them against a state-of-the-art Markov Chain Monte Carlo (MCMC) method, the No-U-Turn Sampler [4]. Finally, we benchmark the efficiency of each method as implemented in several Probabilistic Programming Languages (PPLs) [5] in terms of effective samples per computational unit.
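As a concrete illustration of the simplest of these schemes, a minimal sketch of a Laplace approximation on a toy 2-D log-posterior (the PPL implementations benchmarked in the abstract are not shown): find the posterior mode, then take the inverse Hessian there as the Gaussian covariance.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta):
    """Toy correlated-Gaussian negative log-posterior (illustrative)."""
    a, b = theta
    return 0.5 * (a ** 2 + 4.0 * b ** 2 + a * b)

res = minimize(neg_log_posterior, x0=np.array([1.0, 1.0]))
mode = res.x

def hessian(f, x, eps=1e-4):
    """Finite-difference Hessian (central differences)."""
    n = len(x)
    h = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            h[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return h

cov = np.linalg.inv(hessian(neg_log_posterior, mode))
print("mode:", mode)                     # ~[0, 0]
print("posterior covariance:\n", cov)    # off-diagonal: parameter correlation
```

Unlike the mean-field approximation, this Laplace covariance retains the off-diagonal parameter correlation, which is one of the distinctions benchmarked above.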

Results
Our investigation shows that model inversion in DCM extends beyond variational approximation frameworks, demonstrating the effectiveness of gradient-based MCMC. We observe close alignment between MCMC NUTS and full-rank variational in terms of posterior distributions and model comparison. Our results demonstrate significant improvements in the effective sample size per computational time unit, with PPLs showing advantages over traditional implementations. Additionally, we propose solutions to mitigate issues related to multi-modality in posterior distributions, such as initializing at the tail of the prior distribution, and weighted stacking [6] of chains for improved inference.

Discussion
Previous research on MCMC methods for Bayesian model inversion in DCM highlighted challenges with both gradient-free and gradient-based approaches [7, 8]. However, we found that the ability to combine probabilistic modeling with high-performance computational tools offers a promising solution to the challenges of high-dimensional, non-linear models in DCM. Future work should extend to whole-brain models and fMRI data, which pose additional challenges for both MCMC and variational methods.





Acknowledgements
This research has received funding from EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project), No. 101137289 (Virtual Brain Twin Project), and government grant managed by the Agence Nationale de la Recherche reference ANR-22-PESN-0012 (France 2030 program).

References
[1]https://doi.org/10.1016/S1053-8119(03)00202-7
[2]https://doi.org/10.1016/j.neuroimage.2005.10.045
[3]https://doi.org/10.1080/01621459.2017.1285773
[4]https://doi.org/10.48550/arXiv.1111.4246
[5]https://doi.org/10.1145/2593882.2593900
[6]https://doi.org/10.48550/arXiv.2006.12335
[7]https://doi.org/10.1016/j.neuroimage.2015.03.008
[8]https://doi.org/10.1016/j.neuroimage.2015.07.043


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P016: Heterogeneous topologies in in silico networks are necessary to model the emergent dynamics of human-derived fully excitatory neuronal cultures
Sunday July 6, 2025 17:20 - 19:20 CEST
P016 Heterogeneous topologies in in silico networks are necessary to model the emergent dynamics of human-derived fully excitatory neuronal cultures

Valerio Barabino*1, Francesca Callegari1, Sergio Martinoia1, Paolo Massobrio1,2

1Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genova, Genova, Italy
2National Institute for Nuclear Physics (INFN), Genova, Italy


* Email: valerio.barabino@edu.unige.it

Introduction
Murine neuronal cultures have been the gold standard for in vitro models, but their outcome is not always translatable to the human brain, especially in personalized medicine. Human-induced pluripotent stem cells (hiPSCs) offer a promising alternative [1]. This model requires extensive characterization, and in vitro multi-electrode array (MEA) recordings alone may not capture all relevant parameters. Computational modeling can complement these experiments, offering insight into the mechanisms behind peculiar electrophysiological activities or pathological conditions [2]. This work aims to infer the underlying mechanisms behind the emergent firing profile pattern in excitatory hiPSC neuronal networks coupled to MEAs [3].
Methods
We modeled 100 Hodgkin-Huxley neurons with short-term depressing synapses. To reproduce the self-sustained spontaneous activity observed in vitro, we introduced noise and external DC currents to allow for the alternation of two phases: short periods of high-frequency firing involving the whole network and long periods of asynchronous low-frequency spiking. We explored the role of external triggers, the interplay between synaptic conductances (AMPA and NMDA) and synaptic depression, and network topology in recreating the average in vitro cumulative firing pattern. To account for the heterogeneity of biological networks, we introduced different connectivity rules, distinguishing between incoming and outgoing links.
Results
Noise emerged as the best trigger for network bursts, allowing a good balance between random spiking and bursting activity with an in vitro-like variability of inter-network burst intervals. Lower AMPA conductance than NMDA was necessary, as NMDA ensured a broader operability range for in vitro-like activity. The optimal trade-off between NMDA contribution and synaptic depression was found near a transition state, implying that small parameter changes can shift the system into different regimes. To shape cumulative firing pattern profiles, heterogeneous topologies were introduced, distinguishing afferent and efferent connectivity. The most in vitro-like profile arose from scale-free afferent and random efferent connections.
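As an illustration of such a heterogeneous topology, the sketch below builds a directed connectivity matrix with power-law in-degrees (scale-free afferents) and randomly drawn presynaptic partners, so that out-degrees end up approximately binomial (random efferents). The power-law parameters are assumptions, not the fitted values of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                  # network size used in the abstract
gamma, k_min, k_max = 2.0, 2, 30         # hypothetical power-law parameters

# Sample in-degrees from a truncated power law p(k) ~ k^-gamma.
k = np.arange(k_min, k_max + 1)
p = k ** (-gamma)
p /= p.sum()
in_deg = rng.choice(k, size=n, p=p)

A = np.zeros((n, n), dtype=bool)         # A[i, j]: connection j -> i
for i in range(n):
    sources = rng.choice([j for j in range(n) if j != i],
                         size=in_deg[i], replace=False)
    A[i, sources] = True                 # afferents of i drawn at random
# Row sums follow the scale-free law; column sums (efferents) are ~binomial.
print(A.sum(axis=1).max(), A.sum(axis=0).max())
```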
Discussion
Consistent with previous studies [4], our findings suggest that the nature of in vitro hiPSC network bursts is governed by a mechanism of noise amplification, controlled by a pulse of activity that is randomly nucleated and propagates throughout the network. Regarding connectivity, scale-free afferents imply that a small subset of “privileged” neurons receives most of the inputs (hubs), thus acting as central regulators and influencing the network’s overall activity. Notably, these hubs exhibited more tonic firing, effectively acting as pacemakers that initiate network bursts, as similarly identified in [5]. However, in our case this property is structural and not an intrinsic dynamic property of single neurons.




Acknowledgements
The authors thank Dr. Giulia Parodi (University of Genova) for supplying the hiPSC recordings. This work was supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), Project MNESYS (PE0000006)—A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022).
References
1.https://doi.org/10.1016/j.stemcr.2021.07.001
2.https://doi.org/10.1101/2024.05.23.595522
3.https://doi.org/10.1088/1741-2552/acf78b
4.https://doi.org/10.1038/nphys2686

5.https://doi.org/10.1007/s00422-010-0366-x
Speakers

Paolo Massobrio

Associate Professor, University of Genova
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P017: Unraveling the neural mechanisms of behavior-related manifolds in a comprehensive model of primary motor cortex circuits
Sunday July 6, 2025 17:20 - 19:20 CEST
P017 Unraveling the neural mechanisms of behavior-related manifolds in a comprehensive model of primary motor cortex circuits

Roman Baravalle1*, Valery Bragin1,5, Nikita Novikov4, Wei Xu2, Eugenio Urdapilleta3, Ian Duguid2, Salvador Dura-Bernal1,4
1 Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, USA
2 Centre for Discovery Brain Sciences, University of Edinburgh, Edinburgh, UK
3 Centro Atómico Bariloche & Instituto Balseiro, Bariloche, Argentina
4 Center for Biomedical Imaging & Neuromodulation, The Nathan Kline Institute for Psychiatric Research
5 Brain Simulation Section, Charité - Universitätsmedizin Berlin, Berlin, Germany

*Corresponding Author: roman.baravalle@downstate.edu



Introduction
Accumulating evidence suggests that low-dimensional neural manifolds in the primary motor cortex (M1) play a crucial role in generating motor behavior. These latent dynamics, emerging from the collective activity of M1 neurons, are remarkably consistent across animals performing the same task. However, the specific cell types, cortical layers, and biophysical mechanisms underlying these representations remain largely unknown. Understanding these manifolds is essential for characterizing neural computations underlying behavior and has implications for developing stable and easy-to-train brain-machine interfaces (BMIs) for spinal cord injury.
Methods
We previously developed a realistic computational model of M1 circuits on NetPyNE/NEURON [1], incorporating detailed corticospinal neuron models responsible for transmitting motor commands to the spinal cord. This model was validated against in vivo spiking and local field potential data, demonstrating its ability to generate accurate predictions and provide insights into brain diseases. We further showed that M1 activity could be represented in low-dimensional manifolds, which varied according to behavioral states and experimental manipulations. These embeddings revealed clear clustering related to behavior and inactivation experiments (e.g., noradrenergic or thalamic input lesions), with high correlations between low- and high-dimensional representations.
Results
In this work, we extended the M1 model by incorporating two new interneuron types and tuning it to reproduce neural manifolds observed in vivo during a mouse joystick reaching task. Neuropixels probes recorded spiking activity in M1 and the ventrolateral thalamus, allowing us to jointly analyze neural patterns and joystick trajectories. We constructed a decoder using the CEBRA method [2] to predict movement trajectories from spiking activity and LFP and explored different model tuning strategies, including varying long-range inputs and modifying circuit connectivity via global optimization.
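For readers unfamiliar with CEBRA, the following minimal sketch shows the kind of decoding pipeline described above (array names and hyperparameters are placeholders, not the study's settings):

```python
import numpy as np
from cebra import CEBRA
from sklearn.neighbors import KNeighborsRegressor

# spikes: (time, neurons) binned M1 activity; traj: (time, 2) joystick x/y.
spikes = np.load("m1_spikes.npy")        # hypothetical preprocessed inputs
traj = np.load("joystick_xy.npy")

model = CEBRA(model_architecture="offset10-model",
              conditional="time_delta",  # behavior-conditioned contrastive loss
              output_dimension=3,        # low-dimensional manifold
              max_iterations=5000,
              batch_size=512)
model.fit(spikes, traj)                  # learn an embedding aligned to behavior
emb = model.transform(spikes)

# Decode movement trajectories from the latent embedding.
dec = KNeighborsRegressor(n_neighbors=25).fit(emb[:-500], traj[:-500])
print("held-out R^2:", dec.score(emb[-500:], traj[-500:]))
```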
Discussion
Reproducing experimental behavior-related neural manifolds in large-scale cortical models enables linking neural dynamics across scales (membrane voltages, spikes, LFPs, EEG) to behavior, experimental manipulations, and disease. This approach helps refine models, characterize the relationship between latent dynamics and specific cell types, and ultimately deepen our understanding of how brain circuits generate motor behavior.
Acknowledgements
This work is supported by NIBIB U24EB028998 and NYS DOH1-C32250GG-3450000 grants


References
[1]https://doi.org/10.1016/j.celrep.2023.112574
[2]https://doi.org/10.1038/s41586-023-06031-6



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P018: Orientation Bias and Abstraction in Working Memory: Evidence from Vision Models and Behaviour
Sunday July 6, 2025 17:20 - 19:20 CEST
P018 Orientation Bias and Abstraction in Working Memory: Evidence from Vision Models and Behaviour

Fabio Bauer*¹, Or Yizhar¹,², Bernhard Spitzer¹,²
¹Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
²Technische Universität Dresden, Dresden, Germany

*Email: bauer@mpib-berlin.mpg.de
Introduction

Working memory (WM) for visual orientations shows behavioral bias, where remembered orientations are repelled from the cardinal axes. These canonical biases are well-documented for grating stimuli within 180° space [1-4]. WM maintenance of orientation information has been shown to involve lower-level visual processing [5-9]. However, in recent work we showed that orientation biases are also found with real-world objects in 360° space, which points to a high level of abstraction [10]. Can such abstraction and bias be explained by visual processing alone? Here, we examine whether orientation biases for real-world objects emerge in computer vision models of the ventral visual stream [11,12] and compare them with behavioral reports in a WM task.
Methods
We compared activations from a range of neural network models: brain-inspired CNNs [13], established feedforward CNNs [14-16], and vision transformers [17,18]. Each model was shown 144 natural objects with a distinct principal axis (i.e., not rotationally symmetric), rotated through 16 orientations spanning 360°. We used representational similarity analysis to compare the models’ layer activations to idealized representations of bias in 180° and 360° orientation space. Results were compared with human behavioral reports from orientation WM tasks with the same kind of stimuli.
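A minimal sketch of this representational similarity analysis, with placeholder activations and an idealized circular-distance model RDM standing in for the bias models:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# acts: (n_stimuli, n_features) layer activations, 144 objects x 16 rotations.
acts = np.random.default_rng(0).normal(size=(144 * 16, 512))  # placeholder
rdm_data = pdist(acts, metric="correlation")                  # condensed RDM

def model_rdm(orientations, period):
    """Idealized RDM: dissimilarity grows with circular distance mod `period`."""
    d = np.abs(orientations[:, None] - orientations[None, :]) % period
    d = np.minimum(d, period - d)
    return d[np.triu_indices_from(d, k=1)]   # same ordering as pdist

ori = np.tile(np.arange(16) * 22.5, 144)     # orientation label per stimulus
for period in (180.0, 360.0):
    rho, _ = spearmanr(rdm_data, model_rdm(ori, period))
    print(f"{period:.0f} deg model: Spearman rho = {rho:.3f}")
```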
Results
Neural networks showed orientation biases in 180° space, which became stronger in deeper layers that have been suggested to model higher visual areas. In contrast, when analyzing the full 360° orientation space with natural objects, these same models showed no orientation bias at any layer. This failure across architectures reveals a fundamental limitation: while models can process orientation relationships in simple symmetric stimuli, they fail to recognize that differently shaped objects (like horizontal tables versus vertical towers) can share the same orientation. Our parallel human behavioral experiment showed that, unlike these models, people show orientation biases in working memory across the full 360° spectrum with similar natural objects.
Discussion
We found no evidence for a biased representation in 360° space in any layers of the vision models we tested. In contrast, human behavioral reports and eye-gaze patterns from WM experiments did show a clear 360° bias. This indicates that bias in our task emerges at the level of an abstracted stimulus feature (the object’s orientation relative to its real-life upright position), rather than low-level visual features. Our findings also suggest that with such real-world objects requiring abstraction, 360° orientation information is not represented in these most current models of visual processing. Future work should focus on validating these exploratory findings experimentally.



Acknowledgements
We acknowledge the Max Planck Institute for Human Development for providing computing resources and facilities. We also thank the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research (IMPRS COMP2PSYCH) for funding support. Additionally, we appreciate helpful discussions and comments from Felix Broehl and Ines Pont Sanchis.
References

1. doi.org/10.1037/h0033117
2. doi.org/10.1038/nn.2831
3. doi.org/10.1167/10.10.6
4. doi.org/10.1016/j.visres.2009.12.005
5. doi.org/10.1038/nature07832
6. doi.org/10.7554/eLife.94191.3
7. doi.org/10.1371/journal.pbio.3001711
8. doi.org/10.1080/13506285.2021.1915902
9. doi.org/10.1101/2023.05.18.541327
10. doi.org/10.1038/s41562-023-01737-z
11. doi.org/10.1073/pnas.1403112111
12. doi.org/10.1038/s41467-024-53147-y
13. doi.org/10.1101/408385
14. doi.org/10.1145/3065386
15. doi.org/10.1109/cvpr.2017.634
16. doi.org/10.48550/ARXIV.1409.1556
17. doi.org/10.48550/ARXIV.2112.12750
18. doi.org/10.48550/arXiv.2010.11929







Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P019: Distinguishing spatiotemporal scales within a connectome reveals integration and segregation efficiency in global patterns of neuronal activity
Sunday July 6, 2025 17:20 - 19:20 CEST
P019 Distinguishing spatiotemporal scales within a connectome reveals integration and segregation efficiency in global patterns of neuronal activity

Diego Becerra*1,2, Ignacio Ampuero1,2, Pedro Mediano3, Christopher Connor4, Andrea Calixto2, & Patricio Orio1,2

1 Valparaíso Neural Dynamics Laboratory, Faculty of Sciences, Universidad de Valparaíso, Chile
2 Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Chile
3 Department of Computing, Imperial College London, United Kingdom
4 Brigham & Women’s Hospital, Harvard Medical School, Boston, USA.


* Email: becerra.q.diego@gmail.com
Introduction

Neurons in the brain communicate in different ways, and thus connectomes can be conceived as overlapping dissimilar networks depending on the type of signal being transmitted. One crucial difference between modes of signalling is given by the connectivity timescales, yielding four layers of paths (ordered from fastest to slowest): gap junctions, amino acid, monoaminergic, and peptidergic transmitters. Caenorhabditis elegans, a 302-neuron nematode, is an excellent model for exploring both topological and functional properties of the interaction between layers of networks, because its full connectome is known, alongside much of its electrophysiology.

Methods
A full structural connectome of C. elegans was built from the latest empirical works available. Functional connectomes were built from two C. elegans ‘whole-brain’ calcium imaging datasets of global states: npr-1 mutants undergoing quiescence and wakefulness [2]; and QW1217 mutants undergoing 4% and 8% isoflurane anesthesia [1].
We analyzed integration and segregation by applying network topology measures to the datasets and to the connectome layers. We also developed a partial network decomposition (PND) algorithm, which analyzes the shortest path between nodes of a pair of overlapping networks. We then compared the network properties of the C. elegans connectomes with latticized and randomized surrogates.
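One plausible reading of the PND idea, sketched with networkx (our labels and criteria, not necessarily the authors' exact algorithm): compare shortest paths in each layer and in their union.

```python
import networkx as nx

def classify_pair(G1, G2, u, v):
    """Classify the u-v shortest path across two overlapping network layers."""
    union = nx.compose(G1, G2)
    d = {k: nx.shortest_path_length(g, u, v) if nx.has_path(g, u, v) else float("inf")
         for k, g in (("L1", G1), ("L2", G2), ("U", union))}
    if d["U"] < min(d["L1"], d["L2"]):
        return "synergistic"   # combining layers creates a shorter path
    if d["L1"] == d["L2"] == d["U"]:
        return "redundant"     # both layers provide an equally short path
    return "unique"            # one layer alone carries the shortest path
```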
Results
While the peptidergic connectome is dense, the others are sparse. Applying PND, we determined whether a path between nodes is redundant, uniquely contributed, or synergistic for a pair of spatiotemporal scales. Unique paths are predominant in all pairs of scales, the highest redundancy is found between electrical and amino acid transmission, and the highest synergy is between electrical and monoaminergic transmission.

Empirical pairs of connectomes are more synergistic than their latticized or randomized surrogates, suggesting that the empirical network yields an improvement in efficiency. Comparing segregation and integration between structural (SC) and functional (FC) connectivity, FC of asleep and anesthetized worms is closer to SC than the FC of awake worms.
Discussion
We were able to characterize complementary (synergistic), redundant, and unique paths between nodes of the connectomes. Yet, the recent integration of gene-expression datasets and ligand-receptor interactions shows a pervasive extra-synaptic transmission network. Discovering the effect of differences in connectivity density between the peptidergic and the other scales of neurotransmission thus requires including the temporal dimension: both by using empirical ‘whole-brain’ datasets portraying different global states (wakefulness, sleep, anesthesia) and by using a mathematical model based on the full topology of the network, which is currently being implemented.



Figure 1. (A) Center: Shortest paths of the empirical connectome layers favor synergy as the distance between nodes increases, when compared to latticized (left) and randomized (right) versions. Binarizing top Pearson correlations of awake vs. asleep (B) and anesthetized vs. awake (C) timeseries shows that structural segregation (left cols.) and integration (right cols.) values are closer to the asleep ones.
Acknowledgements
Fondo Nacional de Desarrollo Científico y Tecnológico (FONDECYT): Patricio Orio, grant number 1241469; ANID-Basal: Patricio Orio, grant number AFB240002; ANID Doctoral Fellowship: Diego Becerra, 21210914.
References
1. Chang, A. S., Wirak, G. S., Li, D., Gabel, C. V., & Connor, C. W. (2023). Measures of Information Content during Anesthesia and Emergence in the Caenorhabditis elegans Nervous System. Anesthesiology, 139(1), 49–62. https://doi.org/10.1097/ALN.0000000000004579
2. Nichols, A. L. A., Eichler, T., Latham, R., & Zimmer, M. (2017). A global brain state underlies C. elegans sleep behavior. Science, 356(6344), 1247–1256. https://doi.org/10.1126/science.aam6851


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P020: A Multi-Compartment Computational Approach to Cerebellar Circuit Dysfunction in Autism
Sunday July 6, 2025 17:20 - 19:20 CEST
P020 A Multi-Compartment Computational Approach to Cerebellar Circuit Dysfunction in Autism

Danilo Benozzo*1, Martina F. Rizza1, Danila Di Domenico1, Giorgia Pellavio1, Filippo Marchetti1, Egidio D’Angelo1,2, Claudia Casellato1

¹ Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
² Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: danilo.benozzo@unipv.it
Introduction

Modeling brain dynamics requires addressing processes that span different temporal and spatial scales [1]. This is crucial when studying phenomena at the network level that are consequences of pharmacological or pathological alterations occurring at the single-cell level, such as changes in ionic or synaptic currents. Our aim is to study how single-cell dynamics affect circuit dynamics in a mouse model of autism (IB2 knock-out, KO), within the context of the cerebellar cortical microcircuit. Cerebellar implications in autism spectrum disorders (ASD) have been well documented, showing an association between cerebellar damage and an increased risk of ASD [2].
Methods
We re-parameterized a wild-type (WT) granule cell (GrC) multi-compartment model [3] to match the empirical properties of IB2-KO GrCs [4]. At the network level, we employed a bottom-up approach, by placing all the cell types that characterize the cerebellar cortex, preserving their physiological morphology, density and connection affinity. On the simulation side, the activity of each cell type was reproduced through a multi-compartment model interfaced with the NEURON simulator [5]. The entire process was managed by the Brain Scaffold Builder framework [6,7].
Results
In the WT GrC model we increased Na and K maximum conductances (gmax) to match the higher in/outward currents in IB2 GrCs. Tonic glutamate levels [glu] in mossy-fiber-GrC synapses and NMDA gmax were adjusted to replicate experimental I-f curves and NMDA currents, predicting [glu] at 11.2 µM and a 4x NMDA gmax increase. The IB2 GrC model was integrated into the canonical cerebellar circuit, assuming no other cell changes (empirical IB2 Purkinje cell (PC) firing: 51.8 spks/s, std 11.7, not significantly different from WT). Network comparisons revealed greater stimulus spread through the granular layer (Fig. 1B). Fig. 1C shows peri-stimulus histograms for both circuits under different inputs, predicting an overall firing increase (rates from 9.5x in GrCs to 1.6x in PCs).
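A minimal sketch of this single-cell re-parameterization step, assuming a NEURON-based granule-cell object (the attribute and mechanism names are hypothetical and depend on the specific GrC model):

```python
def make_ib2_variant(cell, k_na=1.5, k_k=1.3, k_nmda=4.0):
    """Scale Na/K gmax and NMDA conductance of a WT granule-cell model.

    `cell.all` and `cell.nmda_synapses` are assumed attributes of the cell
    object; k_na and k_k are placeholder factors, while k_nmda=4 is the 4x
    NMDA gmax increase predicted by the fits above.
    """
    for sec in cell.all:
        for seg in sec:
            if hasattr(seg, "gnabar_Na"):   # mechanism names depend on the model
                seg.gnabar_Na *= k_na
            if hasattr(seg, "gkbar_K"):
                seg.gkbar_K *= k_k
    for syn in cell.nmda_synapses:
        syn.gmax *= k_nmda
```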
Discussion
This bottom-up modelling framework enabled us to construct a representative microcircuit of the mouse cerebellar cortex, featuring a granular layer that replicates the alterations empirically observed in the IB2-KO model. This multiscale approach allows us to predict how the circuit dynamics respond to single-cell model modifications. In the granular layer, our results reflect the spatially expanded, higher E/I balance around IB2-KO GrCs observed in [4]. To further validate whole-circuit activity, we are currently comparing our predictions with in vitro MEA recordings from both WT and IB2-KO mice, in spontaneous regime and under mossy-fiber impulse stimulations.



Figure 1. A: Effect of NMDA gmax scaling (k_gmax NMDA) and ambient glutamate ([glu]) on NMDA currents in IB2-KO GrCs. B: How a 20 Hz stimulus propagates through the granular layer, applied to mossy fibers (mfs) within the circular target, r=40 µm. C: Firing rates of each cell type under three conditions: no stimulus, an 8 Hz Poisson basal input to all mfs, and basal input plus high-frequency stimulation targeted to 15 mfs.
Acknowledgements
Work supported by NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) – A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022)
References
[1]https://doi.org/10.1162/netn_a_00343
[2]https://doi.org/10.1016/j.ijdevneu.2004.09.006
[3]https://doi.org/10.3389/fncel.2017.00071
[4]https://doi.org/10.1523/jneurosci.1985-18.2019
[5]https://doi.org/10.1017/cbo9780511541612
[6]https://doi.org/10.1038/s42003-022-04213-y[7]https://ebrains.eu/service/brain-scaffold-builder/


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P021: Studying the effects of TMS on the cerebellum with a realistic model of the cerebellar cortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P021 Studying the effects of TMS on the cerebellum with a realistic model of the cerebellar cortex

Eleonora Bernasconi*1, Nada Yousif1, Volker Steuber1

1Biocomputation Research Group, University of Hertfordshire, Hatfield, United Kingdom

*Email: e.bernasconi@herts.ac.uk
Introduction

Transcranial magnetic stimulation (TMS) has been used for over 30 years to modulate cortical excitability, and it is currently being applied to other brain regions, such as the cerebellum [1,2]. TMS is a promising technique that could be beneficial for people suffering from dystonia, essential tremor, and Parkinson’s disease [1,2]. However, research in this field provides contrasting evidence of the effects of TMS on the cerebellum [1,3]. Our goal is to study the underlying mechanisms of TMS on the cerebellum via a computational approach.

Methods
We stimulated a previously developed model of the cerebellar cortex consisting of granule, Golgi and Purkinje cells [4]. To ensure uniform stimulus application, we replaced the granule cells with a multi-compartmental model by Diwakar et al. [5]. We applied the stimulus as a voltage using the extracellular mechanism in NEURON [6], which requires multi-compartmental models. We stimulated all compartments of all neurons with a sinusoidal waveform, where field decay depends only on the distance between the source of the applied electric field (located at the origin) and the stimulated compartment [7]. We tested five stimulus frequencies commonly used in TMS protocols on the cerebellum: 1, 5, 10, 20 and 50 Hz.
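A minimal sketch of this stimulation scheme in NEURON (amplitude and decay constants are placeholders; the morphology is assumed to define 3D points):

```python
from neuron import h
import numpy as np

h.load_file("stdrun.hoc")
f_stim, amp = 10.0, 5.0                 # Hz, and mV at unit distance (placeholder)
t = np.arange(0.0, 1000.1, 0.1)         # ms
tvec, waves = h.Vector(t), []

def seg_xyz(sec, x):
    """Interpolate the 3D position of a segment along its section."""
    n = sec.n3d()
    arc = [sec.arc3d(i) for i in range(n)]
    return np.array([np.interp(x * sec.L, arc, [f(i) for i in range(n)])
                     for f in (sec.x3d, sec.y3d, sec.z3d)])

for sec in h.allsec():
    sec.insert("extracellular")         # exposes e_extracellular per segment
    for seg in sec:
        r = max(np.linalg.norm(seg_xyz(sec, seg.x)), 1.0)  # distance from origin
        w = h.Vector(amp / r * np.sin(2 * np.pi * f_stim * t * 1e-3))
        w.play(seg._ref_e_extracellular, tvec, True)       # sinusoidal field
        waves.append(w)                 # keep references alive during the run
```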
Results
For stimulus frequencies up to 20 Hz, the firing rate of the Purkinje cell oscillates in response to the sinusoidal stimulus, as expected (Figure 1A-C). During the positive phase of the stimulus, the cell’s soma hyperpolarizes, while during the negative phase, it depolarizes.
Increasing the stimulus frequency up to 20 Hz amplifies the modulation. The variance of the cell’s instantaneous firing rate is 0.4, 5.9, 18.9, 39.0 and 29.6 Hz² for stimulus frequencies of 1, 5, 10, 20 and 50 Hz, respectively.
At 50 Hz, the cell’s instantaneous firing rate no longer follows the stimulus waveform, and instead exhibits a pronounced excitation with weaker inhibition (Figure 1D). This excitation is much stronger than that obtained at lower stimulus frequencies.
Discussion
The behaviour of our Purkinje cell model aligns with the findings of Rattay et al [8], suggesting that our model can serve as a useful tool to study how TMS influences cerebellar activity.

We show that stimulus frequency can significantly impact the cell’s behaviour, highlighting the importance of carefully selecting this parameter in clinical settings. High-frequency stimulation exerts a strong excitatory influence, which may have important implications for therapeutic use. Future work will extend the simulation model to the granule and Golgi cells. We plan to stimulate the network with a more realistic electric field generated using realistic anatomical head models. We will derive the electric field distribution employing SimNIBS [9].



Figure 1. Instantaneous firing rate of the Purkinje cell (in blue) and waveform used to stimulate the cell (in orange). The amplitude of the pulse waveform is not to scale. The stimulus applied has a frequency of 5, 10, 20 and 50 Hz in panels A, B, C and D, respectively.
Acknowledgements
-
References
[1]https://doi.org/10.1016/j.brs.2017.11.015
[2]https://doi.org/10.1016/j.neubiorev.2017.10.006
[3]https://doi.org/10.3389/fnagi.2020.578339
[4]https://doi.org/10.1007/s10827-024-00871-5
[5]https://doi.org/10.1152/jn.90382.2008
[6]https://doi.org/10.1017/CBO9780511541612
[7]https://doi.org/10.1007/978-3-031-08443-0_8
[8]https://doi.org/10.1109/TBME.1986.325670
[9]https://doi.org/10.1007/978-3-030-21293-3_1




Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P022: Pattern mismatch detection by transient EI imbalance on single neurons: Experiments and multiscale models.
Sunday July 6, 2025 17:20 - 19:20 CEST
P022 Pattern mismatch detection by transient EI imbalance on single neurons: Experiments and multiscale models.

Authors: Aditya Asopa1 and Upinder S. Bhalla1*


1NCBS-TIFR, Bangalore, India


*Email: bhalla@ncbs.res.in
Introduction

Changes in repetitive stimuli typically signal important sensory events, and many organisms exhibit mismatch detection through behavioral and physiological responses. Mismatch detection is a fundamental sensory and computational function, bringing attention and neural resources to bear on novel inputs. Previous work [1,2] suggests that sensory adaptation mediated by short-term plasticity (STP) may be a mechanism for mismatch detection; however, this does not factor in details of excitatory-inhibitory (EI) balance, network connectivity, and time-courses of E and I inputs.

Methods
We performed optogenetic stimulation of CA3 pyramidal neurons in acute mouse hippocampal brain slices to provide precise spatial and temporal patterns of activity as proxies for input ensembles. We monitored E and I synaptic input in postsynaptic CA1 pyramidal neurons using voltage clamp at the I and E reversal potentials to separate the respective contributions. We used time and space patterns to parameterize a multiscale model of CA3 neurons, interneurons, and hundreds of synaptic boutons with independent stochastic chemical signaling controlling synaptic release onto a postsynaptic CA1 neuron. Simulations were performed using the MOOSE simulator [3].
Results
We parameterized the model in three stages. First, we built a Ca2+-triggered 4-step presynaptic release model which (with different parameters) could be applied both to E and I synapses using voltage-clamp recordings over a burst. Second, we fit CA1 neuronal and synaptic properties to burst synaptic input at different frequencies. Third, we fit CA1 readouts of Poisson trains of optical patterned input at CA3, to constrain network parameters. This model predicted that transitions in spatially patterned input sequences, such as AAAABBBBCCCC, could be detected by the network. We confirmed this experimentally. Finally, we showed that spiking CA1 neurons had even sharper mismatch tuning and could detect pattern transitions between theta bursts.
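A minimal sketch of a sequential 4-step Ca2+-binding release scheme of the kind described in the first stage (rate constants and the Ca2+ transient are placeholders, not the fitted values):

```python
import numpy as np
from scipy.integrate import solve_ivp

kon, koff, krel = 100.0, 10.0, 50.0      # assumed binding/unbinding/release rates

def rhs(t, x, ca):
    """States x[0..4]: release machinery with 0..4 Ca bound; x[5]: released."""
    dx = np.zeros(6)
    for i in range(4):                   # Ca-dependent forward, backward steps
        flux = kon * ca(t) * x[i] - koff * x[i + 1]
        dx[i] -= flux
        dx[i + 1] += flux
    dx[4] -= krel * x[4]                 # fully bound state triggers release
    dx[5] += krel * x[4]                 # cumulative released fraction
    return dx

ca = lambda t: 0.1 + 5.0 * np.exp(-((t - 5.0) / 0.5) ** 2)  # Ca transient (a.u.)
sol = solve_ivp(rhs, (0, 20), [1, 0, 0, 0, 0, 0], args=(ca,), max_step=0.05)
print("released fraction:", sol.y[5, -1])
```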
Discussion
EI balance controls neuronal excitability both across time-scales and across strengths and patterns of input [4]. To this we add the dimension of plasticity at short time-scales (~100 ms) relevant for mismatch detection [1] and sensory sampling coupled to the theta rhythm. We provide an experimentally tuned, open-source resource of a CA3-CA1 model of input-output relationships down to the molecular level, which is lightweight enough to run on a laptop at only ~20x real time. We propose that a transient tilt in EI balance is a more nuanced, biochemically and biophysically based mechanism for mismatch detection, and accounts for numerous observations of timing, intensity, and circuit configurations.




Acknowledgements
AA and USB are at NCBS-TIFR which receives the support of the Department of Atomic
Energy, Government of India, under Project Identification No. RTI 4006. The study received funding from SERB Grant CRG/2022/003135-G.
References
1: https://doi.org/10.1016/j.clinph.2008.11.029
2: https://doi.org/10.1111/j.1469-8986.2005.00256.x
3: https://doi.org/10.3389/neuro.11.006.2008
4: https://doi.org/10.7554/eLife.43415
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P023: Applying a Machine Learning Method to Detect Miniature EPSCs
Sunday July 6, 2025 17:20 - 19:20 CEST
P023 Applying a Machine Learning Method to Detect Miniature EPSCs

Krishan Bhalsod*1, Cengiz Günay1


1Dept. Information Technology, Georgia Gwinnett College, Lawrenceville, Georgia, USA

*kbhalsod@ggc.edu
Introduction

This study aims to use machine learning to find miniature excitatory postsynaptic currents (EPSCs) in neurons of Drosophila in order to find behavioral markers of seizures. Using MATLAB, we are training a machine learning model on electrophysiological data to recognize patterns of postsynaptic events that show potential seizure activity. We have faced challenges applying this method, and we plan to present these in our poster. The results of this research may help develop a further understanding of seizure mechanisms in Drosophila that could translate into a more in-depth understanding of neurological disorders in humans.

Methods
The particular type of data we are addressing is obtained from intracellular recordings of invertebrate neurons, specifically from Drosophila (fruit fly) motor neurons [1]. Not only do these recordings have low SNR, but the miniature excitatory postsynaptic current (EPSC, or “mini”) events we are looking for also come in various magnitudes, depending on the distance of each event’s origin on the neuron’s morphology. In the present work, our aim is to adapt a novel machine learning and optimal filtering method (MOD) to automatically detect these minis [2].
Results
The purpose of MOD is to generate a filter that takes the original data and removes noise, turning it into a raw detection trace that closely mirrors the manual scoring trace. The method leverages the Wiener-Hopf equations to derive an optimal filter for detecting post-synaptic events. In the MATLAB code, the optimal-filter equations are directly implemented. First, the program estimates the auto-correlation and cross-correlation from the training data to build a Toeplitz matrix Ry, and then it solves for the filter coefficients a. To correct for any timing differences between the recorded signal and the manual scoring, the algorithm computes filter coefficients for several time shifts and selects the delay that yields the best detection performance (e.g., highest AUC). Finally, a low-pass Hann window filter is applied to smooth the detection trace.
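A Python transcription of the core optimal-filter step (the study's code is MATLAB; the time-shift search and AUC-based delay selection described above are omitted here):

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import fftconvolve
from scipy.signal.windows import hann

def wiener_detector(y, s, order=200):
    """y: raw trace; s: manual scoring trace; returns smoothed detection trace."""
    y = y - y.mean()
    n = len(y)
    # Autocorrelation of y and cross-correlation with the scoring trace (lags 0..order-1).
    r_yy = fftconvolve(y, y[::-1])[n - 1:n - 1 + order] / n
    r_ys = fftconvolve(s, y[::-1])[n - 1:n - 1 + order] / n
    a = solve_toeplitz(r_yy, r_ys)            # Wiener-Hopf: solve Ry a = r_ys
    det = np.convolve(y, a, mode="same")      # raw detection trace
    w = hann(51)
    return np.convolve(det, w / w.sum(), mode="same")  # low-pass Hann smoothing
```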
Discussion
The main challenge in applying this machine learning method lies in filtering noise, since electrophysiological traces are typically not smooth. We therefore applied a band-pass filter of 1-1,000 Hz to reduce the noise. However, we face a problem where the signal oscillates and eventually forms flat lines, most likely because the filtering algorithm removes low-magnitude events. Because of this, the machine learning model has difficulty learning from the filtered signal, failing to recognize events because the threshold is no longer high enough to flag them.



Figure 1. Example of recording where blue shaded areas highlight mini events. Time units in seconds. Y-axis units in pA.
Acknowledgements
The recordings used in this work were provided by Richard Baines from the University of Manchester. We are grateful to Dr. Joseph Ametepe, Dean of the School of Science and Technology, and Dr. Sean Yang, Chair of the Information Technology Department at Georgia Gwinnett College, for providing student travel funding. Students Jonathan Tran and Niecia Say provided valuable feedback for this project.
References
1. C. N. G. Giachello and R. A. Baines. Inappropriate neural activity during a sensitive period in embryogenesis results in persistent seizure-like behavior. Curr Biol, 25(22):2964–2968, Nov 2015. doi: 10.1016/j.cub.2015.09.040.
2. X. Zhang, A. Schlögl, D. Vandael, and P. Jonas. MOD: A novel machine-learning optimal-filtering method for accurate and efficient detection of subthreshold synaptic events in vivo. Journal of Neuroscience Methods, 357:109125, 2021. ISSN 0165-0270. doi: 10.1016/j.jneumeth.2021.109125.

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P024: Analysis of autoassociative dynamics in the hippocampus through a full-scale CA3-CA1 model
Sunday July 6, 2025 17:20 - 19:20 CEST
P024 Analysis of autoassociative dynamics in the hippocampus through a full-scale CA3-CA1 model

Giulia M. Boiani1,2*, Serena Giberti2, Lorenzo Tartarini3, Giampiero Bardella4, Sergio Solinas5, Stefano Ferraina4, Michele Migliore2, Jonathan Mapelli3, Daniela Gandolfi1


1Dipartimento di Ingegneria "Enzo Ferrari", Università degli Studi di Modena e Reggio Emilia, Italy
2CNR, Istituto di Biofisica, Palermo, Italy
3Dipartimento di Scienze Biomediche, Metaboliche e Neuroscienze, Università degli Studi di Modena e Reggio Emilia, Italy
4Dipartimento di Fisiologia e Farmacologia, Sapienza, Università di Roma, Roma, Italy
5Dipartimento di Scienze Biomediche, Università di Sassari, Sassari, Italy


*Email: giuliamaria.boiani@unimore.it

Introduction
The hippocampus is a key brain structure for memory formation and spatial navigation. We present a full-scale realistic point-neuron model of the mouse CA3-CA1 network [1]. The structural validity of the network has been assessed by applying graph theory, whereas functional validation has been performed by incorporating a parameterized point neuron and a custom-developed synapse with short- and long-term synaptic plasticity. We demonstrated the ability of the modeled CA3 to operate as an autoassociative network that can reconstruct complete memories from partial cues [2]. These results confirm the role of CA3 in pattern completion and provide a benchmark to investigate information processing in the hippocampal formation.
Methods
The network connectivity was obtained by adopting a morpho-anatomical strategy based (i) on the intersection of abstract geometrical morphologies and (ii) on extending the tubular structures of CA3 pyramidal cell (PC) axons to target CA1 PCs while accommodating the hippocampal anatomy [3]. The custom-developed synapse was implemented through NESTML and included short-term dynamics and long-term STDP [4]. Autoassociativity was investigated by applying a theta-gamma stimulation protocol to train a subset of 400 out of 4000 CA3 PCs. The network’s tendency to balance local interconnectedness (clustering) and efficient information routing (short path lengths) was assessed using a key graph-theoretic metric: the small-world coefficient.
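A minimal sketch of the small-world coefficient computation against Erdős–Rényi surrogates, sigma = (C/C_rand)/(L/L_rand), assuming an undirected, connected networkx graph G of the circuit:

```python
import networkx as nx

def small_world_sigma(G, n_rand=10, seed=0):
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    Cr = Lr = 0.0
    for i in range(n_rand):
        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed + i)
        if not nx.is_connected(R):   # use the giant component for path lengths
            R = R.subgraph(max(nx.connected_components(R), key=len))
        Cr += nx.average_clustering(R) / n_rand
        Lr += nx.average_shortest_path_length(R) / n_rand
    return (C / Cr) / (L / Lr)       # sigma > 1 indicates small-worldness
```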
Results
The CA3-CA1 network (Fig. 1A-B) showed an outdegree distribution (Fig. 1C) compatible with experiments. Interestingly, CA1 and CA3 exhibited distinct connectivity profiles: a hub-like organization potentially facilitating the integration of information in CA1, and a nearly fully connected, hub-less architecture in CA3, consistent with its role in pattern completion. Moreover, CA3 showed a high clustering coefficient (Fig. 1D), while both regions exhibited small-world properties, with CA3 having a higher value (Fig. 1E). The autoassociativity test showed that CA3 (Fig. 1F) can indeed retrieve complete memories upon presentation of degraded inputs; complete retrieval occurred when at least 20% of trained neurons were stimulated (Fig. 1G).
Discussion
These results validate the accuracy of the model. The network can perform pattern completion effectively exploiting autoassociativity. The analysis of the network's topology suggests that CA1 acts as a hub-like connector, while CA3 shows signatures of small-worldness with an efficient architecture balancing local segregation and global reach. Our biologically realistic network exhibits a non-trivial topology allowing the emergence of functional properties, which could be altered in pathological conditions together with topology [5]. These results offer insights into the functions of hippocampal circuitry, paving the way to the use of computational models to investigate physiological and pathological conditions.



Figure 1. A CA3-CA1 scaffold B Simulation activity snapshots C CA3 and CA1 outdegree distributions. D Clustering coefficients of CA3 and CA1 networks compared to equivalent Erdős–Rényi (ER) and Watts–Strogatz (SW) null models. E Small-World Coefficients. F Neuronal activation over time in recall tests with varying fractions of stimulated neurons. G Evaluation of recall performance.
Acknowledgements
The University of Modena and Reggio Emilia FAR-DIP-2024-GANDOLFI E93C24000500005 to DG. The Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union – NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS-Italy” and Project PE0000006, “MNESYS” to JM), the Ministry of University and Research, PRIN 2022ZY5RXB CUP E53D2301172 to JM.
References
[1]https://doi.org/10.1038/s41598-022-18024-y


[2]https://doi.org/10.1007/s10827-024-00881-3


[3]https://doi.org/10.1038/s43588-023-00417-2


[4]https://doi.org/10.1038/ncomms11552


[5]https://doi.org/10.1016/j.clinph.2006.12.002
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P025: The opportunities and limitations of AI as a tool in neuroscience: how does the nose know what it knows?
Sunday July 6, 2025 17:20 - 19:20 CEST
P025 The opportunities and limitations of AI as a tool in neuroscience: how does the nose know what it knows?

James M. Bower
Biome Farms, Veneta Oregon
Introduction

There has been a dramatic increase in the use of AI in the analysis of neurobiological data. For example, a graph neural network trained to predict odor percepts from molecular structures recently suggested that olfactory discrimination may be based on the metabolic relationships between molecules rather than their physio-chemical structures (Qian et al., 2023). Although these authors were unaware of it, Chris Chee in my laboratory had discovered the same result 25 years earlier using a different kind of analysis of olfactory perception (Chee-Ruiter, 2000). This talk will describe each approach and the neurobiological significance of the results, and then consider the value and limitations of AI as an abstract data-analysis tool.


Methods
In the first study, a graph neural network constructed a map from molecular structure to odor descriptors, and the results were tested by comparing model outputs to those of human experts (Lee et al., 2023). Chemical relationships within the AI-produced odor map were then examined (Qian et al., 2023). The second approach used a cross-entropy analysis of the co-occurrence of individual descriptors in the human-identified profiles of 822 molecules. The resulting directed graph was then analyzed for the locations of odorants containing nitrogen and sulfur (Chee-Ruiter, 2000).
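One plausible formulation of such a co-occurrence graph (our sketch, not necessarily Chee-Ruiter's exact cross-entropy measure): from a binary molecule-by-descriptor matrix, estimate conditional co-occurrence probabilities and keep the strongest directed edges.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
X = rng.random((822, 30)) < 0.1          # placeholder for the 822 odor profiles

counts = X.T.astype(float) @ X           # descriptor co-occurrence counts
p_j_given_i = counts / np.maximum(counts.diagonal()[:, None], 1.0)

G = nx.DiGraph()
for i in range(X.shape[1]):
    for j in range(X.shape[1]):
        if i != j and p_j_given_i[i, j] > 0.3:   # arbitrary edge threshold
            G.add_edge(i, j, weight=p_j_given_i[i, j])
# The positions of nitrogen- and sulfur-related odorants would then be
# inspected within the resulting directed graph, as in the original analysis.
```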




Results
Both studies suggest that human olfactory discrimination reflects the metabolic relationships between molecules rather than their strict physio-chemical properties. Metabolically related but structurally dissimilar molecules were grouped together in the AI-generated map, while molecules containing sulfur and nitrogen co-localized in the directed graph. While both studies reached similar conclusions, the cross-entropy analysis led directly to further studies of the binding properties of olfactory receptors as well as realistic modeling studies of the organization of efferent and intrinsic pathways within olfactory cortex, both suggesting that the olfactory system intrinsically “knows” about the metabolic structure of the world.

Discussion
The results of both studies suggest that the assumption, first proposed by the Roman poet and philosopher Lucretius in about 50 B.C.E., that the olfactory system recognizes and categorizes odorant molecules based on their general physio-chemical properties is fundamentally wrong. Accordingly, at a minimum, the physio-chemically organized (e.g., carbon chain length) panels of odor stimuli traditionally used in olfactory experiments are unlikely to reveal how the olfactory system works. Beyond that, the additional studies conducted in our laboratory call into question whether the neurobiological basis for olfactory discrimination is learned or intrinsic, a question that cannot be addressed by the AI model.



Acknowledgements
I acknowledge the alpacas, emus, and horses that watch me every day as I work on my books and papers. Otherwise, I am completely self-funded, as I am simulating an 18th century landed gentry scientist.
References
Bailey, C. (1959). Lucreti De Rerum Natura Libri Sex, 2nd edition (Oxford Press)

Chee-Ruiter, C. W. J. (2000). The biological sense of smell: olfactory search behavior and a metabolic view for olfactory perception. Dissertation (Ph.D.), California Institute of Technology

Lee, B. K. et al. (2023). A principal odor map unifies diverse tasks in olfactory perception. Science 381: 999

Qian, W. W. et al. (2023). Metabolic activity organizes olfactory representations. eLife 12:e82502
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P026: Prospective and retrospective coding in cortical neurons
Sunday July 6, 2025 17:20 - 19:20 CEST
P026 Prospective and retrospective coding in cortical neurons

Simon Brandt1, Paul Haider*,1, Mihai A. Petrovici1, Walter Senn1, Katharina A. Wilmesa,1, Federico Beniteza,1

1Department of Physiology, University of Bern, Switzerland
ashared senior authorship

*Email: paul.haider@unibe.ch

Introduction
Brains can process sensory information from different modalities at astonishing speed; this is surprising, as the integration of inputs through the membrane alone causes a delayed response (Fig. 1d). Experiments reveal a possible explanation for the fast processing, showing that neurons can advance their output firing rate with respect to their input current, a concept which we refer to as prospective coding [1]. The combination of retrospective (delayed) and prospective coding enables neurons to perform temporal processing. While retrospective coding emerges from the inherent delays of neurons, the underlying mechanisms of prospective coding are not completely understood.

Methods
In this work, we elucidate cellular mechanisms which can explain prospective firing in cortical neurons. We use simulation-based inference to investigate the parameters of the Hodgkin-Huxley model [2] with respect to its ability to fire prospectively or retrospectively. Based on this analysis, we derive a reduced model that allows us to manipulate the temporal response of Hodgkin-Huxley neurons. Furthermore, we derive rate-based neuron models which include adaptation processes on arbitrary time scales to investigate advances on longer time scales.
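A minimal sketch of the simulation-based inference step using the `sbi` package (an assumed toolchain; parameter ranges, the summary statistic, and the simulator helper are placeholders):

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Placeholder prior over three HH parameters (e.g., gNa, gK, gL ranges).
prior = BoxUniform(low=torch.tensor([20.0, 1.0, 0.01]),
                   high=torch.tensor([200.0, 50.0, 1.0]))

def simulator(theta):
    # Hypothetical helper: run the HH model and return a summary statistic,
    # e.g. the phase shift between input current and output firing rate.
    return run_hh_and_phase_shift(theta)

theta = prior.sample((10_000,))
x = torch.stack([simulator(th) for th in theta])

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)
samples = posterior.sample((1_000,), x=x_observed)  # x_observed: measured statistic
```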


Results
We show that the spike generation mechanism can be the source for the prospective (advanced) or retrospective (delayed) response as shown for prospective firing (Fig. 1a) in cortical-like neurons [3,4] (Fig. 1b, green) and retrospective firing (Fig. 1c) in hippocampal-like neurons [5,6] (Fig. 1b, orange). Further, we analyse the Hodgkin-Huxley dynamics to derive a reduced model to manipulate the timing of the neuron’s output by tuning three parameters (Fig. 1d-h). We further show that slow adaptation processes, such as spike-frequency adaptation or deactivating dendritic currents, can generate prospective firing for inputs that undergo slow temporal modulations. In general, we show that adaptation processes at different time scales can cause advanced neuronal responses to time-varying inputs that are modulated on the corresponding time scales.


Discussion
The results of this work contribute to the understanding of how fast processing (prospective coding) and short-term memory (retrospective coding) can be achieved in the brain on the level of single neurons and might guide further experiments. Prospectivity and retrospectivity may be important for several cognitive functions. The interplay of the two provides a powerful framework for temporal processing by shifting signals in time. The insights are highly beneficial for biologically plausible learning algorithms used for temporal processing and their implementation on neuromorphic hardware [7-9].
Figure 1. (a) Hodgkin-Huxley neurons can be prospective for cortical neurons (b, green) and retrospective (c) for parameters fitted to hippocampal neurons (b, orange). (d) Because a neuron integrates input through its membrane, a response of the neuron is expected to be delayed by the membrane. If the output of a neuron can be advanced with respect to its input, a prospective mechanism needs to exist. With
Acknowledgements
We would like to express particular gratitude for the ongoing support from the Manfred Stärk Foundation. Our work has greatly benefited from access to the Fenix Infrastructure resources, which are partially funded through the ICEI project under the grant agreement No. 800858. This includes access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland.
References

1. https://doi.org/10.1093/cercor/bhm235
2. https://doi.org/10.1113/jphysiol.1952.sp004764
3. https://doi.org/10.1017/CBO9781107447615
4. https://doi.org/10.1016/0896-6273(95)90020-9
5. https://doi.org/10.1007/s10827-007-0038-6
6. https://doi.org/10.1017/CBO9780511895401
7. https://doi.org/10.7554/elife.89674
8. https://doi.org/10.48550/arXiv.2110.14549
9. https://doi.org/10.48550/arXiv.2403.16933


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P027: A functional network model for body column neural connectivity in Hydra
Sunday July 6, 2025 17:20 - 19:20 CEST
P027 A functional network model for body column neural connectivity in Hydra

Wilhelm Braun*1,2, Sebastian Jenderny4, Christoph Giez5,6, Dijana Pavleska5, Alexander Klimovich5,

Thomas C.G. Bosch5, Karlheinz Ochs4, Philipp Hövel7, Claus C. Hilgetag1,3


1Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany


2 Faculty of Engineering, Department of Electrical and Information Engineering, Kiel University, Kaiserstraße 2, 24143, Kiel, Germany


3Department of Health Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA, 02215, USA


4Chair of Digital Communication Systems, Ruhr-Universität Bochum, Universitätsstraße 150, 44801, Bochum, North Rhine-Westphalia, Germany


5Zoological Institute, University of Kiel, Christian-Albrechts-Platz 4, 24118 Kiel, Germany


6The Francis Crick Institute, London NW1 1BF, UK


7Theoretical Physics and Center for Biophysics, Saarland University, Campus E2 6, Saarbrücken, 66123, Germany


*Email: wilhelm_braun@icloud.com



Introduction

Hydra is a non-senescent animal with a relatively small number of cell types and overall low structural complexity, but a surprisingly rich behavioral repertoire. The main drivers of Hydra’s behavior are neurons that are arranged in two nerve nets comprising several distinct neuronal populations. Among these populations is the ectodermal nerve net N3, which is located throughout the animal. It has been shown that N3 is necessary and sufficient for the complex behavior of somersaulting [1] and is also involved in Hydra feeding behavior [2, 3]. Although N3 is a behavioral jack-of-all-trades, there is insufficient knowledge of the coupling structure of neurons in N3, its connectome, and its role in activity propagation and function.


Methods
We construct a model connectome for the part of N3 located on the body column by using pairwise distance- and connection angle-dependent connectivity rules. Using experimental data on the placement of neuronal somata and the spatial dimensions of the body column, we design a generative network model combining non-random placement of neuronal somata and the preferred orientation of primary neurites. Additionally, we study activity propagation in N3 using the simple excitable Susceptible-Excited-Refractory (SER) model and a more complex neuromorphic Morris-Lecar model.
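For concreteness, a minimal sketch of one SER update step on a networkx graph (transition probabilities are assumptions):

```python
import numpy as np
import networkx as nx  # G: the generative model connectome

def ser_step(G, state, p_spont=0.001, p_recover=0.2, rng=np.random.default_rng()):
    """state[i] in {'S', 'E', 'R'}; returns the states after one time step."""
    new = {}
    for i in G.nodes:
        if state[i] == "E":
            new[i] = "R"                              # excited -> refractory
        elif state[i] == "R":
            new[i] = "S" if rng.random() < p_recover else "R"
        else:                                         # susceptible
            excited_nbr = any(state[j] == "E" for j in G.neighbors(i))
            new[i] = "E" if excited_nbr or rng.random() < p_spont else "S"
    return new
```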


Results
We show [4] that the generative network model yields good agreement with experimental data. We then show that the simple excitable dynamical SER model generates directed, short-lived, fast propagating patterns of activity. In addition, by slightly changing the parameters of the dynamical model, the same structural network can also generate persistent activity. Finally, we use a neuromorphic circuit based on the Morris-Lecar model to show that the same structural connectome can, in addition to through-conductance with biologically plausible time scales, also host a dynamical pattern related to the complex behavioral pattern of somersaulting.


Discussion
Our work provides a systematic construction of the structure of a subnetwork of Hydra’s nervous system. By assuming that the ectodermal body column network in Hydra is essentially two-dimensional, we designed a generative network model that is in agreement with measured structural quantities and supports two different activity modes, each presumably controlling different types of behavior in Hydra. We speculate that such different dynamical regimes act as dynamical substrates for the different functional roles of N3, allowing Hydra to exhibit behavioral complexity with a relatively simple nervous system that does not possess modules or hubs.






Acknowledgements
WB would like to thank Kayson Fakhar, Fatemeh Hadaeghi and Mariia Popova for helpful discussions. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 434434223 – SFB 1461.

References
[1]10.1016/j.cub.2023.03.047
[2]10.1016/j.cub.2023.10.038
[3]10.1016/j.celrep.2024.114210
[4]10.1101/2024.06.25.600563

Speakers

Wilhelm Braun

Junior Research Group leader, CAU Kiel, Department of Electrical and Information Engineering
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P028: The Topological Significance of Functional Connectivity State Transitions
Sunday July 6, 2025 17:20 - 19:20 CEST
P028 The Topological Significance of Functional Connectivity State Transitions

Spencer Brown¹, Celine Zalamea², Daniel Selski, PhD³

¹⁻³ College of Osteopathic Medicine, Pacific Northwest University of Health Sciences, Yakima, United States of America

Email: smbrown@pnwu.edu

Introduction

Pathology in dynamic functional connectivity is well documented but lacks explanatory rationale. Dynamic functional connectivity parallels the existence of discrete state transitions, the study of which may provide further elucidation. In this study, we utilized Topological Data Analysis (TDA) to observe the shape of whole-brain networks and the transitions between them. Further, we aim to understand the structure-function relationship of brain states with neuronal physiology.
Methods
fMRI scans were obtained and preprocessed from the Human Connectome Project motor task dataset. States were defined relative to an anticipatory visual cue. We initially used the Euclidean distance (L2) to cluster states hierarchically. We then converted brain states to Vietoris-Rips complexes to identify their topology. Distances between these complexes were measured using the Wasserstein distance and aggregated using hierarchical clustering.
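A minimal sketch of this pipeline with an assumed toolchain (ripser for Vietoris-Rips persistence, persim for Wasserstein distances, scipy for hierarchical clustering):

```python
import numpy as np
from ripser import ripser
from persim import wasserstein
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def state_diagram(fc):
    """H1 persistence diagram of one brain-state connectivity matrix."""
    dist = 1.0 - np.abs(fc)                 # correlation -> distance
    np.fill_diagonal(dist, 0.0)
    return ripser(dist, distance_matrix=True, maxdim=1)["dgms"][1]

def topo_linkage(states):
    """Cluster states hierarchically by Wasserstein distance between diagrams."""
    dgms = [state_diagram(fc) for fc in states]
    n = len(dgms)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = wasserstein(dgms[i], dgms[j])
    return linkage(squareform(D), method="average")
```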
Results
We demonstrate that each L2 state label must have only a single topology, but the same topology may exist in multiple states. Under this assumption, many combinations of L2 and topology were observed to be invalid. This is reconciled by an intrinsic hierarchy of brain states.
Discussion
We observe that brain states may be drastically different networks but share the same topology. For instance, resting versus task states may exhibit the same topology. In contrast, similar states may also differ in topology. We find that topology may have a unique role in neuronal physiology and provide a potential framework for further studies that explore brain dynamics.



Acknowledgements
N/A.
References
N/A.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P029: Spiking Neural Networks for Controlling a Biomechanically Realistic Arm Model
Sunday July 6, 2025 17:20 - 19:20 CEST
P029 Spiking Neural Networks for Controlling a Biomechanically Realistic Arm Model

Philip Bröhl*1,2, Junji Ito2, Ira Assent1,3, Sonja Grün2,4,5

1Institute for Advanced Simulation (IAS-8), Research Center Juelich, 52425 Jülich, Germany
2 Institute for Advanced Simulation (IAS-6), Research Center Juelich, 52425 Jülich, Germany
3Department of Computer Science, Aarhus University, 8200 Aarhus N, Denmark
4JARA Institute Brain Structure Function Relationship, Research Center Juelich, 52425 Jülich, Germany
5 Theoretical Systems Neurobiology, RWTH Aachen Univ., Aachen, Germany

*Email: p.broehl@fz-juelich.de
Introduction


A typical feature of neurons in the motor cortex of mammalian brains is that they are tuned to a particular direction of movement, i.e., they fire most when a body part is moved in a particular direction, called the preferred direction (PD). It has been reported that the distribution of preferred directions among motor cortex neurons depends on the constraints on the movements: when the arm may move freely in 3D, it is uniform [1,2], but when it is constrained to 2D movements, it is bimodal [3,4]. In this work, we aim to reveal the neuronal mechanism underlying the emergence of a bimodal PD distribution by studying an artificial network of spiking neurons trained to control a biomechanically realistic arm model.
Methods

Our model is implemented in Tensorflow [5] and consists of 300 recurrent leaky integrate-and-fire neurons with 6 linear readout neurons that control the 6 muscles in a biomechanical arm model [4]. We train it to output muscle activation signals to perform a 2D reaching task. We study its output space by applying a Principal Component Analysis on the outputs and relate the directions in this space to the directions in the joint angle arm acceleration space via Canonical Correlation Analysis. We also study the effect of each recurrent neuron on the output dynamics by interpreting its outgoing connection weights as a direction in the space of the recurrent dynamics and projecting this direction onto the output space via the readout weights.
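A minimal sketch of the output-space analysis (array shapes and file names are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

# outputs: (time, 6) muscle activations; accel: (time, n_joints) accelerations;
# W_out: (300, 6) readout weights from recurrent neurons to the 6 muscles.
outputs = np.load("muscle_outputs.npy")      # hypothetical file names
accel = np.load("joint_accel.npy")
W_out = np.load("readout_weights.npy")

pca = PCA(n_components=2).fit(outputs)       # output space: first two PCs
cca = CCA(n_components=2).fit(pca.transform(outputs), accel)

# Effect of neuron i on the output dynamics: its readout row projected onto the PCs.
effects = W_out @ pca.components_.T          # (300, 2) directions in PC space
pref_angle = np.arctan2(effects[:, 1], effects[:, 0])  # check for two clusters
```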
Results

The model neurons show directional tuning with bimodally distributed PDs. The output dynamics of the model are well captured by the first two principal components (PCs). The first PC aligns to the two opposite directions in the joint angle acceleration space which agree with the hand movement directions corresponding to the peaks of the bimodal PD distribution. The effects of neurons on the output dynamics concentrate around these directions. Connections between neurons with similar output effects tend to be strongly excitatory. Taken together, the core architecture of the recurrent network is characterized by two clusters of neurons with strong excitatory connections in each cluster. Connections between the clusters are mostly inhibitory.
Discussion

The analysis shows that two mutually inhibiting clusters of excitatory connections underlie the control of the biomechanical arm model in a 2D reaching task by a recurrent network of spiking neurons. Since each of the two clusters is composed of neurons with similar output effect directions, which we have shown to be related to hand movement directions, the existence of the two clusters naturally explains the bimodality of the hand movement PD distribution. This raises the question of whether similar structures are employed in the mammalian brain to control movements, which will be the subject of future research.





Acknowledgements

This work was partially performed as part of the Helmholtz School for Data Science in Life, Earth and Energy (HDS-LEE) and received funding from the Helmholtz Association of German Research Centres. This research was partially funded by the NRW-network 'iBehave', grant number: NW21-049.


References

1. https://doi.org/10.1523/JNEUROSCI.10-07-02039.1990
2. https://doi.org/10.1523/JNEUROSCI.08-08-02913.1988
3.https://doi.org/10.1016/j.neuron.2012.10.041
4. https://doi.org/10.7554/eLife.88591.3
5. https://doi.org/10.5281/zenodo.4724125


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P030: Synaptic Plasticity Mechanisms and Dynamics in the Cerebellar Spiking Microcircuit
Sunday July 6, 2025 17:20 - 19:20 CEST
P030 Synaptic Plasticity Mechanisms and Dynamics in the Cerebellar Spiking Microcircuit

Abdul H. Butt*1, Marialaura De Grazia1, Emiliano Buttarazzi1, Dimitri Rodarie1, Claudia Casellato1, Egidio D'Angelo1
1Department of Brain and Behavioural Science, University of Pavia, Pavia, Italy

*Email: abdulhaleembutt85@gmail.com

Introduction

Short-term plasticity (STP) is crucial for regulating excitatory and inhibitory information flow in the cerebellar cortex by modulating synaptic efficiency over seconds to minutes, acting as a dynamic filter for information processing. It shapes synaptic activity, alongside long-term plasticity (LTP) that arises from sustained stimulation. Both firing rates and spike timing affect plasticity, with distinct mechanisms across brain regions. This study introduces the Tsodyks-Markram STP model in cerebellar circuits reconstructed with detailed structural properties, and aims to integrate STP and LTP to explore their combined effects on cerebellar dynamics [1, 2].


Methods
The canonical cerebellar circuit, reconstructed and simulated as a point-neuron network using the Brain Scaffold Builder (BSB) interfaced with NEST [3,4], has been enhanced by incorporating short-term plasticity (STP) [1,2]. This involved adjusting the utilization parameters U, u, x, τ_fac, and τ_rec to ensure proper facilitation and depression. The synaptic models were tested in both in-vitro and awake canonical models of the mouse olivocerebellar microcircuit, focusing on both baseline firing rates and responses under specific stimulation protocols inspired by Pavlovian paradigms: an input at the mossy fibers (mf) at 40 Hz within the time window [1000–1260] ms and an impulse on the climbing fibers originating from the inferior olive (IO) as a 500 Hz burst within the time interval [1250–1260] ms [5].
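For reference, a minimal numpy sketch of Tsodyks-Markram dynamics of the kind referenced above, for a single facilitating synapse driven at 40 Hz; the parameter values are illustrative placeholders, not the calibrated Glom-GrC parameters:

import numpy as np

# Tsodyks-Markram STP: u (utilization) facilitates, x (resources) depresses.
U, tau_fac, tau_rec = 0.05, 500.0, 50.0      # illustrative, facilitation-dominated (ms)
dt, T = 0.1, 1000.0                          # ms
spike_times = np.arange(100.0, 900.0, 25.0)  # 40 Hz presynaptic train

u, x = U, 1.0
t, next_spk = 0.0, 0
amplitudes = []
while t < T:
    u += dt * (-(u - U) / tau_fac)   # u relaxes back toward baseline U
    x += dt * ((1.0 - x) / tau_rec)  # resources recover toward 1
    if next_spk < len(spike_times) and t >= spike_times[next_spk]:
        u += U * (1.0 - u)           # facilitation jump at each spike
        amplitudes.append(u * x)     # relative PSC amplitude
        x -= u * x                   # resource depletion
        next_spk += 1
    t += dt

print("PSC amplitude ratio (last/first):", amplitudes[-1] / amplitudes[0])

With these placeholder values the amplitude ratio exceeds one, i.e., facilitation prevails, as reported below for the Glom-GrC synapse.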

Results
The single-cell pipeline confirmed that facilitation and depression function as expected. At high-frequency stimulation, facilitation prevails at the Glomeruli-Granule (Glom-GrC) synapses (Fig. 1A). The same phenomenon was investigated throughout the canonical circuit for each connection. Mean firing rates of each population show (Fig. 1B-C) that STP plays a crucial role in the modulation of neuronal activity within the cerebellar cortical model. Purkinje cells (PCs) exhibit increased firing rates with STP, suggesting facilitation enhances their excitability. Basket cells and Deep Cerebellar Nuclei neurons (DCN, both _P (projecting) and _I (inhibitory)) exhibit reduced firing rates when STP is present, indicating synaptic depression reduces activity over time. Inferior Olive (IO) neurons also show a significant increase in their mean firing rate in response to the IO stimulus when STP is introduced.

Discussion
The results show the significant impact of STP on signal propagation. Future work will explore the combination of STP and LTP, which operate on two different time scales, and their interactions in sensorimotor loops during motor learning protocols [6,7]. Specifically, LTP plasticity rules have been introduced on the synapses from parallel fibers to Purkinje cells and from parallel fibers to Molecular Layer Interneurons [6], utilizing the awake version of the canonical cerebellar circuit.





Figure 1. Figure 1 A) Single-synapse STP dynamics, B) Canonical circuit in-vitro and awake (with STP vs static), response at step-like mf stimulus. C) Raster plot of the awake circuit (with STP vs static) with mf-IO stimulus paradigm
Acknowledgements
· The European Union’s Horizon Europe Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project)

· “National Centre for HPC, Big Data and Quantum Computing” (Project CN00000013 PNRR MUR – M4C2 – Fund 1.4 - Call “National Centers” - law decree n. 3138, 16 December 2021)
References
● https://doi.org/10.1073/pnas.94.2.719
● https://doi.org/10.1152/jn.00258.2001
● https://doi.org/10.1038/s42003-022-04213-y
● https://doi.org/10.5281/zenodo.7243999
● https://doi.org/10.1371/journal.pcbi.1011277
● https://doi.org/10.1109/TBME.2015.2485301
● https://doi.org/10.1371/journal.pcbi.1008265


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P031: Adaptive Cerebellar Networks in Sensorimotor Loops
Sunday July 6, 2025 17:20 - 19:20 CEST
P031 Adaptive Cerebellar Networks in Sensorimotor Loops

Emiliano Buttarazzi* ¹, Marialaura De Grazia¹, Margherita Premi³, Egidio D’Angelo¹ ², Alberto Antonietti³, Claudia Casellato¹ ²

¹ Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
² Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy
³ Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy

*Email: emiliano.buttarazzi@unipv.it

Introduction


Humans can generate accurate and appropriate motor behavior under many different environmental conditions [1]. Robotics is well suited to modelling this behavior, controlled by embodied human-like brain networks, with monitorable and adjustable parameters [2,3,4]. To link cellular-level phenomena to brain architecture, we develop an efficient neuro-inspired controller for sensorimotor learning and control, based on specific brain neural structures and dynamics, employing a tandem configuration of forward and inverse internal models represented by cerebellar networks. A possible fall-out is simulating pathological states of patients with cerebellar-related movement disorders and predicting outcomes of neuro-rehabilitative treatments.
Methods

The system (Fig. 1A) is made of two main components: the BRAIN, represented as a system of spiking neural networks (SNNs), and the BODY, represented in PyBullet, with proper interfaces (MUSIC or NRP). The cerebellar SNNs (Fig. 1B) [5], using population-specific E-GLIF neuron models [6], present a structure with “agonist-antagonist” functional subsections and include properly tuned long-term plasticity rules, to achieve an adaptive and physiologically accurate cerebellar model. The computational software is the Brain Scaffold Builder (BSB) [7], interfacing with NEST.
Results

The Long-Term Plasticities (Depression and Potentiation - LTD and LTP) have been implemented at: parallel fibers to Purkinje Cells and parallel fibers to Molecular Layer Interneurons. A Classical Eye Blinking Conditioning (CEBC) paradigm with 15 consecutive trials has been carried out to generate the learning curves in temporal association. Proper modulation of population firing rates along trials emerges (Fig.1C). The extension to an upper limb reaching task is under testing.
Discussion

Ongoing steps include the integration of short-term plasticity synaptic models with the long-term rules, and an optimal balance between forward and inverse cerebellar blocks, for stable and effective learning. Moreover, task complexity will be increased, simulating a reaching-grasping task with object-based action. Also, an integration of more physiological blocks (cortex and premotor cortex) is under development. Lastly, modification of the structural and functional parameters of the cerebellar modules to mimic cerebellar patients’ alterations is planned.





Figure 1. Figure 1: A) System block diagram, divided into BRAIN and BODY sections. B) Reconstruction of the cerebellar SNN, with the different populations ("plus" and "minus" only for differentiation between agonist and antagonist, respectively). C) Firing modulation driven by long-term plasticity rules, without (left) and with (right) complex spike (teaching) stimulus.
Acknowledgements
Work supported by:
· Horizon Europe Program for Research and Innovation under GA No. 101147319 (EBRAINS 2.0);
· The Italian Ministry of Research through PNRR projects funded by the EU, “Fit for Medical Robotics” (Project PNC0000007 MUR - “Fit4MedRob” - law decree prot. n. 0001984, 9 December 2022).
EB is a PhD student (National program) in AI, XXXIX cycle, Università Campus Bio-Medico di Roma.
References

1. https://doi.org/10.1016/S0893-6080(98)00066-5
2. https://doi.org/10.3389/fnbot.2021.634045
3. https://doi.org/10.1371/journal.pone.0112265
4. https://doi.org/10.1155/2019/4862157
5. https://doi.org/10.1073/pnas.1716489115
6. https://doi.org/10.3389/fncom.2019.00068
7. https://doi.org/10.1038/s42003-022-04213-y


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P032: Universal coarse-to-fine transition across the developing neocortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P032 Universal coarse-to-fine transition across the developing neocortex

Lorenzo Butti*1, Deyue Kong1, Nathaniel Powell2, Bettina Hein1, Jonas Elpelt1, Haleigh Mulholland2, Gordon Smith2, Matthias Kaschube1

1FIAS (Frankfurt Institute for Advanced Studies), Frankfurt am Main, DE

2Department of Neuroscience, University of Minnesota, Minneapolis, USA


*Email: butti@fias.uni-frankfurt.de


Introduction

How cortical representations emerge in development is an unresolved problem in neuroscience. Recent work in ferret shows that during early development, spontaneous activity exhibits a modular organization that is highly similar across diverse cortical areas, from sensory cortices to higher-order association areas [1]. Moreover, this modular organization persists in all areas after eye and ear-canal opening (approx. postnatal day 30), but the organization also changes, suggesting a considerable refinement over development, part of which may be area-specific [2]. It is currently unclear how this refinement unfolds on the level of local neural circuits and what mechanisms might guide this maturation.


Methods
We examine the development of network organization across diverse cortical regions (V1, A1, S1, PPC, PFC), before (P21–24), around (P27–32), and after (P39–43) eye opening, using both widefield and 2-photon in vivo calcium imaging of spontaneous activity in the ferret.

To gain mechanistic insight, we employ Local Excitation/Lateral Inhibition (LELI) network models, following [3]. These models can both reproduce the modular structure of early cortical activity and account for the ability of developing cortical circuits to transform unstructured input into modular output.
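A minimal sketch of an LELI-style rate network under stated assumptions (difference-of-Gaussians coupling on a periodic grid, illustrative parameters; not the model of [3]), showing how unstructured input relaxes into a modular pattern and how the inhibition strength g_i sets the wavelength:

import numpy as np

N = 64                         # 2D grid of NxN cortical units (illustrative)
sigma_e, sigma_i = 2.0, 6.0    # local excitation narrower than lateral inhibition
g_i = 1.2                      # increasing g_i shrinks the modular wavelength

yy, xx = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

def gauss_kernel(sigma):
    d2 = np.minimum(xx, N - xx) ** 2 + np.minimum(yy, N - yy) ** 2  # periodic grid
    k = np.exp(-d2 / (2 * sigma ** 2))
    return k / k.sum()

# Difference-of-Gaussians interaction, applied as a convolution in Fourier space.
K = np.fft.fft2(gauss_kernel(sigma_e) - g_i * gauss_kernel(sigma_i))

rng = np.random.default_rng(1)
r = rng.random((N, N)) * 0.02              # unstructured initial activity
for _ in range(400):                       # relaxation toward a modular pattern
    recurrent = np.real(np.fft.ifft2(K * np.fft.fft2(r)))
    r = np.clip(r + 0.1 * (-r + 10.0 * recurrent), 0.0, 1.0)

print("pattern std (modularity emerging from noise):", r.std())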
Results

We find that in both sensory and association areas, networks exhibit a highly similar pattern of changes over development: spontaneous activity is initially highly modular, i.e. strongly correlated and low-dimensional in local populations, becoming less correlated and higher-dimensional with age.
These in vivo changes can be explained by a developmental increase in lateral inhibition strength in a LELI model. This allows feedforward inputs to engage a larger number of network states, consistent with the transition of cortical networks to external sensory activity during this period. Moreover, the increase in inhibition predicts a decrease in modular wavelength over this same developmental time, which we confirm in our experimental data.
Discussion
Our findings indicate that the spontaneous activity in ferret cortex undergoes a developmental reorganization from coarser to finer-scaled organization, accompanied by a transition to more high-dimensional activity in both sensory and association areas. We propose that an increase in lateral inhibition serves as a common mechanism underlying cortical network refinement, and that this maturation leads to the expansion of representational capacity throughout the developing cortex.












Acknowledgements
We also thank the members of the Kaschube lab and the Smith lab for the useful discussions.


References
[1] N Powell, B Hein, D Kong, J Elpelt, HN Mulholland, M Kaschube, GB Smith. (2024). https://doi.org/10.1073/pnas.2313743121
[2] N Powell, B Hein, D Kong, J Elpelt, HN Mulholland, R Holland, M Kaschube, GB Smith. (2025). https://doi.org/10.1093/cercor/bhaf007
[3] HN Mulholland, M Kaschube, GB Smith. (2024). https://doi.org/10.1038/s41467-024-48341-x


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P033: Modeling Corticospinal Input Effects on V1-FoxP2 Interneurons in a Go Task Using NetPyNE
Sunday July 6, 2025 17:20 - 19:20 CEST
P033 Modeling Corticospinal Input Effects on V1-FoxP2 Interneurons in a Go Task Using NetPyNE

Andres F. Cadena Parra1*, Michelle Sanchez Rivera3, Constantinos Eleftheriou3, Roman Baravalle2, Ian Duguid3, Salvador Dura-Bernal2,4
1Department of Biomedical Engineering. Universidad de los Andes, Bogota, Colombia
2Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, USA
3Centre for Discovery Brain Sciences & Simons Initiative for the Developing Brain, University of Edinburgh, Edinburgh, UK.
4Center for Biomedical Imaging & Neuromodulation, The Nathan Kline Institute for Psychiatric Research.


*Email: af.cadena@uniandes.edu.co
Introduction

The execution of movement involves a complex interplay of neural structures whose activity results in precise motor output. The motor cortex generates commands for voluntary movement, conveyed via corticospinal neurons (CSNs) to the spinal cord, where interneurons (INs) integrate sensory feedback and motor commands to fine-tune motor neuron activity [1]. Among these, V1 INs, and particularly those expressing FoxP2, are crucial for inhibitory feedback in motor control [2]. Building on recent findings that a subset of CSNs exhibit decreased firing rates during movement, this study aims to investigate how these CSN inputs influence V1-FoxP2 interneurons, shedding light on spinal integration mechanisms for coordinated motor output.
Methods
An in silico model was developed to study the effects of convergent increased/decreased corticospinal input on V1-FoxP2 INs. A single-cell model was implemented in NetPyNE/NEURON, incorporating Na⁺, K⁺, Ca²⁺ channels and AMPA dynamics. The cell’s morphology had a soma and four dendritic sections. Calibration used in vitro current-clamp data and optimization. Spike trains from a “Go task” from two different CSN subpopulations were connected to the V1-FoxP2 model, with simulated background activity consistent with in vivo observations [3]. Three conditions were tested: (1) increasing/decreasing input, (2) increasing/sustained input, and (3) increasing input only. Electrophysiological properties, like input resistance, were recorded over time.
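A minimal NetPyNE sketch of this kind of setup (a soma-plus-dendrite cell, an AMPA mechanism, and vector-timed presynaptic spike trains standing in for CSN input); all geometry, channel, weight, and timing values are placeholders rather than the calibrated V1-FoxP2 model:

import numpy as np
from netpyne import specs, sim

netParams = specs.NetParams()

# Placeholder V1-FoxP2 cell: soma plus one of the four dendritic sections.
cell = {'secs': {}}
cell['secs']['soma'] = {'geom': {'L': 20, 'diam': 20},
                        'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036}}}
cell['secs']['dend'] = {'geom': {'L': 150, 'diam': 2},
                        'topol': {'parentSec': 'soma', 'parentX': 1.0, 'childX': 0.0},
                        'mechs': {'pas': {'g': 1e-4, 'e': -70}}}
netParams.cellParams['V1FoxP2'] = cell
netParams.popParams['V1FoxP2_pop'] = {'cellType': 'V1FoxP2', 'numCells': 1}

# One CSN subpopulation delivering precomputed spike trains (times are random here).
rng = np.random.default_rng(0)
netParams.popParams['CSN_up'] = {
    'cellModel': 'VecStim', 'numCells': 20,
    'spkTimes': [sorted(rng.uniform(0, 500, 30).tolist()) for _ in range(20)]}
netParams.synMechParams['AMPA'] = {'mod': 'Exp2Syn', 'tau1': 0.5, 'tau2': 5.0, 'e': 0}
netParams.connParams['CSN->FoxP2'] = {
    'preConds': {'pop': 'CSN_up'}, 'postConds': {'pop': 'V1FoxP2_pop'},
    'sec': 'dend', 'synMech': 'AMPA', 'weight': 5e-4, 'delay': 1.0}

simConfig = specs.SimConfig()
simConfig.duration = 500
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)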
Results
The model simulated in vivo V1-FoxP2 dynamics, where corticospinal input initially drives an increase in firing rate that peaks at movement onset, followed by a return to baseline. The in vivo condition optimizes the input-output relationship, exhibiting a high signal-to-noise ratio (SNR) post-movement and enabling a quicker return to baseline excitability. Additionally, background activity enhances the return to baseline of the V1-FoxP2 firing rate and input resistance. Notably, input resistance decreased progressively across time windows before, during, and after movement, making the neuron less susceptible to noise. The model further revealed that movement requires a specific ratio of increased and decreased CSN inputs.
Discussion
Our findings provide key insights into the neuronal computations that govern the integration of cortical inputs in the spinal cord. The model showed that in vivo-like corticospinal input enhances V1-FoxP2 activity time-locking to behavior without significantly reducing the SNR. This indicates a trade-off between temporal precision and firing rate strength to optimize motor control. Additionally, decreasing CSN input facilitates impedance recovery after movement, whereas in the sustained scenario, impedance fails to return to baseline. These results may inform future studies on the functional architecture of spinal circuits involved in motor control and rehabilitation, particularly in disorders affecting motor coordination.



Acknowledgements
This work was supported by the NIBIB U24EB028998 and NYS DOH01-C32250GG-3450000 grants. AFCP was supported by Universidad de los Andes through a Teaching Assistantship.
References
[1] Deska-Gauthier, D., & Zhang, Y. (2019). Functional diversity of spinal interneurons and locomotor control. Curr. Opin. Physiol., 8, 99–108. https://doi.org/10.1016/j.cophys.2019.01.005
[2] Bikoff, J. B., Gabitto, M. I., Rivard, A. F., et al. (2016). Spinal inhibitory interneuron diversity delineates motor microcircuits. Cell, 165, 207–219. https://doi.org/10.1016/j.cell.2016.01.027
[3] Schiemann, J., Puggioni, P., Dacre, J., et al. (2015). Behavioral state-dependent modulation of motor cortex output. Cell Rep., 11, 1319–1330. https://doi.org/10.1016/j.celrep.2015.04.042
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P034: Role of Synaptic Plasticity in the Emergence of Temporal Complexity in an Izhikevich Spiking Neural Network
Sunday July 6, 2025 17:20 - 19:20 CEST
P034 Role of Synaptic Plasticity in the Emergence of Temporal Complexity in an Izhikevich Spiking Neural Network

Marco Cafiso*1,2, Paolo Paradisi2,3

1Department of Physics 'E. Fermi', University of Pisa, Largo Bruno Pontecorvo 3, I-56127, Pisa, Italy
2Institute of Information Science and Technologies ‘A. Faedo’, ISTI-CNR, Via G. Moruzzi 1, I-56124, Pisa, Italy
3BCAM-Basque Center for Applied Mathematics, Alameda de Mazarredo 14, E-48009, Bilbao, BASQUE COUNTRY, Spain

*Email: marco.cafiso@phd.unipi.it
Introduction

Neural avalanches exemplify intermittent behavior in brain dynamics through large-scale regional interactions and are crucial elements of brain dynamical behaviors. Originally introduced in the Self-Organized Criticality framework, these intermittent complex behaviors can also be examined through Temporal Complexity (TC) theory. Computational neural network models have become central in the neuroscience field. Izhikevich’s neuron model [1] provides a powerful yet simple framework for simulating networks with over 20 brain-like dynamic patterns, enabling studies of normal and pathological conditions. Our work analyzes the temporal complexity of neural avalanches and coincidence events in an Izhikevich Spiking Neural Network, comparing systems with and without Spike-Time Dependent Plasticity (STDP) [2] processes.
Methods
A network of 1,000 Izhikevich neurons with an excitatory-to-inhibitory ratio of 4:1 was developed, with inhibitory synaptic connections designed to exert a stronger influence than their excitatory counterparts, reflecting physiological neural circuit dynamics. We subjected the network to six distinct input signals, including two containing complex events. We then measured and compared the temporal complexity of network responses with and without STDP plasticity mechanisms activated. Our temporal complexity assessment methodology leverages neural avalanche and coincidence events to estimate multiple scaling indices [3]. These metrics provide quantitative measures of the system's complexity.
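For concreteness, a condensed numpy transcription of the standard Izhikevich network simulation [1] with the 4:1 ratio and strengthened inhibition described above; STDP and the six input signals are omitted for brevity, and the weight scales are illustrative:

import numpy as np

# Condensed Izhikevich network: 800 excitatory, 200 inhibitory neurons.
Ne, Ni = 800, 200
rng = np.random.default_rng(0)
re, ri = rng.random(Ne), rng.random(Ni)
a = np.r_[0.02 * np.ones(Ne), 0.02 + 0.08 * ri]
b = np.r_[0.20 * np.ones(Ne), 0.25 - 0.05 * ri]
c = np.r_[-65 + 15 * re**2, -65 * np.ones(Ni)]
d = np.r_[8 - 6 * re**2, 2 * np.ones(Ni)]
# Inhibitory columns drawn stronger than excitatory ones (illustrative -1.5 scale).
S = np.c_[0.5 * rng.random((Ne + Ni, Ne)), -1.5 * rng.random((Ne + Ni, Ni))]

v = -65.0 * np.ones(Ne + Ni)
u = b * v
firings = []
for t in range(1000):                       # 1 s of simulation, 1 ms steps
    I = np.r_[5 * rng.standard_normal(Ne), 2 * rng.standard_normal(Ni)]
    fired = np.where(v >= 30)[0]            # spike detection and reset
    firings += [(t, n) for n in fired]
    v[fired] = c[fired]
    u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)            # synaptic input from spiking cells
    for _ in range(2):                      # two 0.5 ms substeps, as in [1]
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)

print("mean firing rate (Hz):", len(firings) / (Ne + Ni))

Avalanche and coincidence events for the scaling-index analysis of [3] would then be extracted from the (t, neuron) pairs collected in firings.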
Results
The analysis of scaling indices related to temporal complexity reveals variations in complexity within neural avalanches and coincidences in simulations that incorporate the STDP plasticity rule, compared to those where it is absent. Furthermore, the extent of the change in temporal complexity depends on the simulation’s input signal. Specifically, strong and continuous signals lead to a substantial change in temporal complexity when the STDP rule is present, whereas intermittent signals exhibit smaller variations in complexity due to STDP.
Discussion
These preliminary results on the complexity behaviors of a spiking neural network with or without the STDP plasticity rule highlight how topological changes in the network configuration, due to time-dependent plasticity rules, lead to changes in temporal complexity behaviors. These results suggest that neural plasticity, defined as changes in the network’s spatial configuration, can influence the temporal complexity levels of a neuronal network, providing insights into the dynamic interplay between structural adaptation and the emergence of temporal complex behaviors in spiking neural networks.



Acknowledgements
This work was supported by the Next-Generation-EU programme under the funding schemes PNRR-PE-AI scheme (M4C2, investment 1.3, line on AI) FAIR “Future Artificial Intelligence Research”, grant id PE00000013, Spoke-8: Pervasive AI.
References
[1] Eugene M. Izhikevich. “Simple model of spiking neurons”. In: IEEE Transactions on Neural Networks 14.6 (2003), pp. 1569–1572.
[2] Natalia Caporale and Yang Dan. “Spike Timing–Dependent Plasticity: A Hebbian Learning Rule”. In: Annual Review of Neuroscience 31 (2008), pp. 25–46.
[3] P. Paradisi and P. Allegrini. “Intermittency-Driven Complexity in Signal Processing”. In: Complexity and Nonlinearity in Cardiovascular Signals. Ed. by Riccardo Barbieri, Enzo Pasquale Scilingo, and Gaetano Valenza. Cham: Springer, 2017, pp. 161–195.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P035: The geometry of primary visual cortex representations is dynamically adapted to task performance
Sunday July 6, 2025 17:20 - 19:20 CEST
P035 The geometry of primary visual cortex representations is dynamically adapted to task performance

Leyla Roksan Caglar*1, Julien Corbo*2, O. Batuhan Erkat2,3, Pierre-Olivier Polack2

1Windreich Department of AI & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2Center for Molecular and Behavioral Neuroscience, Rutgers University—Newark, Newark, NJ, USA
3Graduate Program in Neuroscience, Rutgers University—Newark, Newark, NJ, USA

*Contributed equally; Email: l.r.caglar@gmail.com; julien.corbo@gmail.com
Introduction

Perceptual learning optimizes perception by reshaping sensory representations to enhance discrimination and generalization. Although these mechanisms’ implementation remains elusive, recent advances suggest that the neural geometry of the representations is key, by preparing population activity to be read out at the next processing stage. Our previous work has shown that learning a visual discrimination task reshapes the population feature representations in the primary visual cortex (V1) via suppressive mechanisms, effectively discretizing the representational space, and favoring categorization and generalization [1]. However, it is unclear how these changes impact the discriminability of the representations when being read out and transformed into a decision variable.
Methods
Recent findings under Manifold Capacity Theory [2] suggest that learning enhances classification capacity by altering the geometric properties of population activity, increasing the linear separability of stimulus representations. To test this, we examined the relationship between V1 feature representation, neural manifold geometry, and behavioral discrimination, hypothesizing that the previously observed discretization would enhance classification capacity and alter manifold geometry as early as V1. Using calcium imaging, we compared V1 activity between trained and naïve mice performing an orientation discrimination Go/NoGo task at varying difficulty levels.
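A minimal sketch of the separability logic using placeholder data: cross-validated accuracy of a linear readout as a separability proxy, plus a participation-ratio dimensionality estimate; this is an illustration with sklearn/numpy, not the Manifold Capacity Theory estimator of [2]:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 150                    # illustrative sizes
X = rng.standard_normal((n_trials, n_neurons))    # trial x neuron responses
y = rng.integers(0, 2, n_trials)                  # Go / NoGo labels

# Linear separability proxy: cross-validated accuracy of a linear readout
# (near chance for this random placeholder data).
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print("cross-validated linear separability:", acc)

# A simple dimensionality measure: participation ratio of the covariance spectrum.
lam = np.linalg.eigvalsh(np.cov(X.T))
pr = lam.sum() ** 2 / (lam ** 2).sum()
print("participation ratio:", pr)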
Results
Investigating response dimensionality, we found it increased as the Go/NoGo stimuli became more similar in both trained and naïve mice. As predicted, dimensionality was lower in trained animals, suggesting the task's biological implementation relies on reducing representational dimensionality. However, dimensionality alone did not fully explain performance variability. Instead, we found that the linear separability of representations in their embedding space was a stronger predictor of individual behavioral performance. This separability of manifolds was further evidenced by measuring the neural manifold’s capacity and their geometric properties (manifold dimension and manifold radius), which all show a decrease with successful behavioral performance in the trained mice, but show no change in the naive mice.
Discussion
Taken together, our results show a clear relationship between behavioral task performance, representational dimensionality, and manifold separability in the early visual cortex of mice. Across all computational measures, we demonstrated an inverse relationship between dimensionality and successful perceptual discrimination, assisted by representational separability. These results confirm that learning alters the geometric properties of early sensory representations as early as in V1, optimizing them for linear readout and improving perceptual decision-making.




Acknowledgements
The authors are grateful to the members of the Polack lab for the helpful conversations. This work was funded by The Whitehall Foundation (grant 2015-08-69). The Charles and Johanna Busch Biomedical Grant Program The National Institutes of Health National Eye Institute: Grant #R01 EY030860 Brain initiative: Grant #R01 NS120289) Fyssen Foundation postdoctoral fellowship.
References
[1] Corbo, J., Erkat, O. B., McClure, J., Khdour, H., & Polack, P.-O. (2025). Discretized representations in V1 predict suboptimal orientation discrimination. Nature Communications, 16(1), 41. https://doi.org/10.1038/s41467-024-55409-1
[2] Chung, S., Lee, D. D., & Sompolinsky, H. (2016). Linear readout of object manifolds. Physical Review E, 93(6), 060301. https://doi.org/10.1103/PhysRevE.93.060301

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P036: Dendrites competing for weight updates facilitate efficient familiarity detection
Sunday July 6, 2025 17:20 - 19:20 CEST
P036 Dendrites competing for weight updates facilitate efficient familiarity detection

Fangxu Cai1, Marcus K. Benna*2


1Department of Physics, UC San Diego, La Jolla, USA


2Department of Neurobiology, UC San Diego, La Jolla, USA


*Email: mbenna@ucsd.edu
Introduction

The dendritic tree of a neuron plays an important role in the nonlinear processing of incoming signals. Previous studies [1-3] have suggested that during learning, selecting only a few dendrites to update their weights can enhance the memory capacity of a neuron by reducing interference between memories. Building on this, we examine two strategies for selecting dendrites: one with and one without interaction between dendrites. The interaction between dendrites serves to reduce variability in the number of dendrites updated, potentially arising from competition and the allocation of resources necessary for long-term synaptic plasticity.

Methods
We study a model with parallel dendrites, each performing nonlinear processing and connected in parallel to the soma, which sums their contributions [4]. The selection of dendrites to update is based on their activation level — the overlap between their weight and input vectors. Under the non-interacting rule, a dendrite is selected if its activation exceeds a specific threshold; under the interacting rule, only the top n dendrites with the highest activations are chosen. We compare these two learning rules using an online familiarity detection task [1]. In this task, input patterns are streamed sequentially to the neuron, which is required to produce a high response to previously presented inputs while maintaining a low response to unfamiliar ones.
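A minimal numpy sketch contrasting the two selection rules on one input presentation; sizes, threshold, and learning rate are illustrative placeholders:

import numpy as np

rng = np.random.default_rng(0)
n_dend, n_syn = 20, 100
W = rng.standard_normal((n_dend, n_syn)) * 0.1   # one weight vector per dendrite
x = rng.standard_normal((n_dend, n_syn))         # per-dendrite input pattern

activation = (W * x).sum(axis=1)                 # overlap of weight and input vectors

# Non-interacting rule: update every dendrite whose activation exceeds a threshold.
theta = 1.0
sel_threshold = activation > theta               # count varies from input to input

# Interacting (n-winners-take-all) rule: update exactly the top-n dendrites.
n = 3
sel_topn = np.zeros(n_dend, dtype=bool)
sel_topn[np.argsort(activation)[-n:]] = True     # count fixed by competition

eta = 0.05
W[sel_topn] += eta * x[sel_topn]                 # Hebbian update on winners only
print("threshold rule updated", sel_threshold.sum(),
      "dendrites; top-n rule updated", sel_topn.sum())

The key difference exercised here is the one studied in the Results: the threshold rule lets the number of updated dendrites fluctuate, while the competitive rule fixes it.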

Results
We observe that the interacting learning rule achieves a significantly higher memory capacity than the non-interacting rule by 1) limiting the variance of the memory response, and 2) decorrelating synaptic weights when input signals are correlated across dendrites. With the interacting rule, the best achievable memory capacity increases as n decreases, reaching its maximum at n = 1. In contrast, this is not the case for the non-interacting rule, where the capacity declines when too few dendrites are updated. We further find that even when inputs are maximally correlated (all dendrites receive identical input), the interacting rule maintains a capacity comparable to the uncorrelated input scenario.
Discussion
Our findings show that an n-winners-take-all type interaction among dendrites to determine their eligibility for long-term plasticity can better leverage dendritic nonlinearities for optimizing memory capacity, especially when inputs are correlated among dendrites. While biological neurons may not strictly select a fixed number of dendrites to store each input, our model suggests that reducing the variability in the number of updated dendrites through competition between them can still improve the capacity. Furthermore, our results are robust to variations in model specifics, such as the choice of dendritic activation functions and the presence of input noise, underscoring the generality of the proposed mechanism.




Acknowledgements
M.K.B was supported by R01NS125298 (NINDS) and the Kavli Institute for Brain and Mind.

References
1. https://doi.org/10.1371/journal.pcbi.1006892
2. https://doi.org/10.1523/JNEUROSCI.5684-10.2011
3. https://doi.org/10.1038/nature14251
4. https://doi.org/10.1109/JPROC.2014.2312671

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P037: Maximum-entropy-based metrics for quantifying critical dynamics in spiking neuronal data.
Sunday July 6, 2025 17:20 - 19:20 CEST
P037 Maximum-entropy-based metrics for quantifying critical dynamics in spiking neuronal data.

Pedro V. Carelli*1, Felipe Serafim1, Mauro Copelli1

1Departamento de Física, Universidade Federal de Pernambuco, Recife, Brazil

*Email: pedro.carelli@ufpe.br


Introduction
An important working hypothesis to investigate brain activity is whether it operates in a critical regime [1,2]. Recently, maximum-entropy phenomenological models have emerged as an alternative way of identifying critical behavior in neuronal data sets [3]. In the present work, we investigate the signatures of criticality from a firing-rate-based maximum-entropy approach on data sets generated by computational models, and we compare them to experimental results.
Methods
We simulate critical and noncritical spiking neuronal models [4] and generate spiking time series. Then, following Mora et al. [3], a Boltzmann-like distribution is defined. We take the population firing rates K_t as the observable and constrain the joint probability distribution at two different times, P_u(K_t, K_{t+u}), obtaining the energy function. We then solve an inverse problem to fit the model parameters to the data statistics. Once the model is adjusted to describe the data, we can perform statistical physics analyses, and the signatures of criticality are obtained from the divergence of the model's generalized specific heat.
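A simplified sketch of the temperature-scan logic under stated assumptions: the empirical joint distribution of (K_t, K_{t+u}) defines energies E = -log P, and the generalized specific heat c(β) = β² Var_β(E) is computed under the tilted distribution proportional to P^β, with criticality signaled by a peak near β = 1; this illustrates the analysis step on placeholder data and sidesteps the inverse-problem fit of [3,4]:

import numpy as np

rng = np.random.default_rng(0)
raster = rng.random((50, 10000)) < 0.05     # placeholder raster: 50 neurons x 10000 bins
K = raster.sum(axis=0)                      # population rate per bin
u = 5                                       # time lag (bins)

# Empirical joint distribution P(K_t, K_{t+u}).
kmax = K.max() + 1
P = np.zeros((kmax, kmax))
for k1, k2 in zip(K[:-u], K[u:]):
    P[k1, k2] += 1
P /= P.sum()

mask = P > 0
E = -np.log(P[mask])                        # energies of observed states

def specific_heat(beta):
    w = np.exp(-beta * E)                   # tilted weights, proportional to P**beta
    w /= w.sum()
    mE = (w * E).sum()
    return beta**2 * ((w * E**2).sum() - mE**2)

betas = np.linspace(0.5, 2.0, 31)
c = [specific_heat(bt) for bt in betas]
print("specific heat peaks at beta =", betas[int(np.argmax(c))])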
Results
We found that the maximum entropy approach consistently identifies critical behavior around the phase transition in models and rules out criticality in models without phase transition. The maximum-entropy-model results are compatible with results for cortical data from urethane-anesthetized rats [4] and human MEG.
Discussion

We detect signatures of criticality in different brain data sets by employing a maximum-entropy approach based on neuronal population firing rates. This method diverges from conventional techniques that depend on estimating critical exponents through power-law distributions of neuronal avalanche sizes and durations. It proves especially useful in scenarios where traditional markers of criticality derived from neuronal avalanches are either methodologically unreliable or yield ambiguous results. Our results provide further support for criticality in the brain.



Acknowledgements
We thankfully acknowledge the funding from CNPq, FACEPE, CAPES and FINEP.


References
1. Beggs, J. M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35), 11167-11177.
2. Fontenele, A. J., et al. (2019). Criticality between cortical states. Physical Review Letters, 122, 208101.
3. Mora, T., Deny, S., & Marre, O. (2015). Dynamical criticality in the collective activity of a population of retinal neurons. Physical Review Letters, 114(7), 078105.
4. Serafim, F., et al. (2024). Maximum-entropy-based metrics for quantifying critical dynamics in spiking neuron data. Physical Review E, 110, 024401.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P038: Mimicking Ripple- and Spindle-Like Dynamics in an Amplitude and Velocity-Feedback Oscillator
Sunday July 6, 2025 17:20 - 19:20 CEST
P038 Mimicking Ripple- and Spindle-Like Dynamics in an Amplitude and Velocity-Feedback Oscillator

Pedro Carvalho*1, Wolf Singer1, Felix Effenberger1


1Ernst Strüngmann Institute, Singer Lab, Frankfurt am Main/Hessen, Germany
*Email: prfdecarvalho@gmail.com


Introduction: Ripples and spindles play a fundamental role in learning, memory, and sleep [1]. Yet, the principles of their generation and their functional relevance remain to be fully understood. Here, we show how damped harmonic oscillators (DHOs) subject to feedback can reproduce such characteristic dynamics on the population level (Fig. 1B,C). In our model, one DHO represents the aggregate activity of a recurrently coupled E-I population of spiking neurons [3] and can capture different characteristics of the underlying E-I circuit (e.g., recurrent excitation and inhibition) by feedback connections [2]. Recurrent networks of such nodes were previously shown to reproduce many physiological phenomena [2].



Methods: Using an analytically derived bifurcation diagram (see [2]), we investigate the dynamics of a DHO with feedback at different points in the 2D parameter space (W, b) of the velocity feedback parameter W and the amplitude b of a harmonic input (Fig. 1A). We determine the dynamics for different parameter paths (colored lines in Fig. 1A) by performing numerical simulations of the DHO subject to a harmonic drive. We observe nodal dynamics similar to ripples and spindles (Fig. 1B,C) [1].
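A minimal Euler-style sketch of a DHO with saturating velocity feedback and a theta-frequency harmonic drive; the feedback nonlinearity, the (b, W) path standing in for the colored parameter paths of Fig. 1A, and all values are illustrative assumptions rather than the model of [2]:

import numpy as np

dt = 1e-4                                  # s
ts = np.arange(0.0, 2.0, dt)
f0, fd = 90.0, 8.0                         # natural (fast) and drive (slow) frequencies, Hz
w0 = 2 * np.pi * f0

x, v = 0.0, 0.0
xs = np.empty(len(ts))
for i, t in enumerate(ts):
    slow = np.sin(2 * np.pi * fd * t)
    b = 2.0 * slow                         # harmonic input amplitude path
    W = 1.2 * w0 * (0.5 + 0.5 * slow)      # velocity-feedback gain path
    # DHO with saturating velocity feedback; linear damping set to w0 for simplicity.
    acc = -w0**2 * x - w0 * v + W * w0 * np.tanh(v / w0) + b * w0**2
    v += dt * acc
    x += dt * v
    xs[i] = x

# Fast transients grow where W exceeds the damping (supercritical side of the
# Hopf bifurcation) and decay again as the path re-enters the stable regime.
print("peak amplitude of the transient oscillations:", np.abs(xs).max())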

Results: We show that for a DHO with velocity feedback, the interplay between input frequency (data not shown), the oscillator's natural frequency, and the trajectory of input parameters in the (b, W) parameter subspace (Fig. 1A) gives rise to dynamics resembling spindles and ripples (Fig. 1B,C). Notably, for each class of these characteristic dynamics, we can identify a specific parameter path in the (b, W) subspace that generates it (Fig. 1A, colored lines). These dynamics are due to a dynamic bifurcation, in which the system transitions between subcritical and supercritical regimes separated by a Hopf bifurcation. In this configuration, ripple- and spindle-like dynamics emerge as a transient phenomenon.
Discussion: By studying the dynamics of DHOs subject to velocity feedback, we show that these oscillators can reproduce ripple- and spindle-like dynamics [1] in an intriguingly simple phenomenological model of the aggregate activity of E-I populations [2,3]. These complex dynamics are shown to result from input-driven dynamic bifurcations of the underlying DHO system. This provides a reductionistic model of ripple and spindle initiation in which simple mechanisms give rise to complex dynamics (see also [3]). We hope that this model will allow for a better understanding of the mechanisms of spindle and ripple initiation, as well as for assessing their role in information processing and consolidation (compare [2]), a topic left for a future study.





Figure 1. Ripples and spindles produced by a velocity feedback DHO. A) Bifurcation diagram in the input amplitude (b) and velocity feedback (W) parameter subspace. Blue: stable focus, orange: limit cycles, green and red line parameter paths producing spindles and ripples. B) Reproduction of data from [2]. C) Simulation of ripple and spindle-like dynamics. Colors match parameters paths in (A).
Acknowledgements
-
References
[1] Staresina, B.P. et al. How coupled slow oscillations, spindles and ripples coordinate neuronal processing and communication during human sleep. Nat. Neurosci. (2023).
[2] Spyropoulos, G. et al. Spontaneous variability in gamma dynamics described by a damped harmonic oscillator driven by noise. Nat Commun 13, 2019 (2022).
[3] Effenberger, F., Carvalho, P., Dubinin, I., & Singer, W. The functional role of oscillatory dynamics in neocortical circuits: A computational perspective. Proc. Natl. Acad. Sci. (2025).



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P039: In Silico Safety and Performance Assessment of Vagus Nerve Stimulation with Metamodeling-Based Uncertainty/Variability Quantification
Sunday July 6, 2025 17:20 - 19:20 CEST
P039 In Silico Safety and Performance Assessment of Vagus Nerve Stimulation with Metamodeling-Based Uncertainty/Variability Quantification

Antonino M. Cassarà*1, Javier Garcia Ordonez1, Werner Van Geit1, Esra Neufeld1

1Foundation for Research on Information Technologies in Society (IT’IS), Zurich, Switzerland

*Email: cassara@itis.swiss

Introduction

Safety and efficacy assessments of medical devices are key to regulatory submissions. We established an in silico pipeline for neural interface assessment and demonstrated it for a vagus nerve stimulation (VNS) cuff electrode. It combines histology-based electromagnetic, electrophysiology, and thermal simulations, as well as tissue damage predictors, with high-throughput screening of data from the NIH SPARC program [1], and systematic uncertainty quantification to assess safety and shed light on primary concerns, dominant factors, variability, and model limitations. This study serves to guide the development and application of regulatory-grade in silico methodologies for safer, more effective medical technologies.



Methods
Evaluated quantities-of-interest (QoIs) included iso-percentiles of dosimetric exposure quantities, current intensities and densities, charge injection, off-target stimulation predictors, tissue heating, as well as tissue damage predictors – all as a function of varying degrees of fiber recruitment. The pipeline is implemented on the o2S2PARC platform for open and FAIR computational modeling [2] using modeling functionalities from Sim4Life [3]. Variability was quantified through iteration over histological samples from different subjects, multiple sources of numerical uncertainty were quantified, and model parameter uncertainties (e.g., tissue properties, fiber statistics) were propagated using advanced surrogate modeling methodologies.

Results

E-field thresholds were compared to safety guidelines [4], temperature increases to FDA limits [5], and the commonly applied (though questionably relevant) Shannon criterion [6] was evaluated. The surrogate-model-based uncertainty propagation made it possible to shed light on complex correlations between model parameters and QoIs, fully accounting for non-linear dependencies and multi-factor interactions, and revealing novel mechanistic insights.
Discussion
The fully automated pipeline enables quantitative safety assessment for a wide variety of neural interfaces for bioelectronic medicine. It supports electrode design optimization towards improved safety and efficacy, and the identification of safe therapeutic windows. The systematic uncertainty analysis using advanced surrogate-model-based techniques illustrates the value of o2S2PARC's intelligent metamodeling framework and scalable cloud resources for exploring large parameter spaces. In conclusion, carefully executed, regulatory-grade in silico safety assessment is a powerful tool for accelerating medical device innovation.

Figure 1. Figure 1. (a) Safety assessment pipeline on o2S2PARC; (b) histology-based nerve model-generation and population with electrophysiological fiber models; (c) visualization of selected dosimetric and thermal distributions; (d) cross sections through the surrogate models with associated interpolation uncertainty; uncertainty propagation of EM and thermal tissue properties through QoI surrogate models.
Acknowledgements
This research is supported by the NIH Common Fund’s SPARC program under award 3OT3OD025348-01S8.
References
[1] NIH SPARC program, USA. https://commonfund.nih.gov/sparc
[2] Neufeld E. et al. 2020. SPARC’s Open Online Simulation Platform: o2S2PARC. FASEB J 34(S1).
[3] Sim4Life, ZMT Zurich MedTech AG, Zurich, Switzerland.
[4] ICNIRP. 2010. Guidelines for exposure to time-varying EM fields (1 Hz–100 kHz). Health Phys 99(6):818-36.
[5] FDA guidance on thermal effects: https://www.fda.gov.
[6] Shannon RV. 1992. Safe levels for electrical stimulation. IEEE Trans Biomed Eng 39(4):424-6.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P040: Modeling Transcranial Magnetic Stimulation: From Exposure in Personalized Head Models to Single Cell and Brain Network Dynamics Responses
Sunday July 6, 2025 17:20 - 19:20 CEST
P040 Modeling Transcranial Magnetic Stimulation: From Exposure in Personalized Head Models to Single Cell and Brain Network Dynamics Responses

Antonino M. Cassara*1, Serena Santanche2, Micol Colella2, Chiara Billi1, Micaela Liberti2, Esra Neufeld1

1Foundation for Research on Information Technologies in Society (IT'IS), Zurich, Switzerland
2Università La Sapienza, Rome, Italy

*Email: cassara@itis.swiss
Introduction

To facilitate the design and interpretation of human studies involving Transcranial Magnetic Stimulation (TMS), we developed a cloud-based and web-accessible computational framework that enables the execution of subject-specific virtual TMS experiments towards assessing and optimizing safety and efficacy. It also facilitates the formulation of testable hypotheses regarding stimulation mechanisms across various temporal and spatial scales, including mechanisms by which induced electric fields (E-fields) interact with individual neurons and high level brain network dynamics.

Methods
The framework extends a previously established pipeline [1] for non-invasive brain stimulation modeling, which combined image-based generation of detailed head models (personalized anatomy and tissue properties), personalized electromagnetic simulations (exposure and lead-fields for virtual EEG), image-based brain network model construction (mean field), and dynamic functional connectivity assessment. The current work extends this pipeline with a) TMS coil modeling and positioning, b) neuron polarization and stimulation probability mapping based on statistical sub- and supra-threshold responses of morphologically-detailed cortical neuron populations, and c) derived coupling terms for the assessment of TMS impact on network dynamics.

Results
The pipeline was employed to investigate TMS stimulation mechanisms at the single-cell and network-dynamics levels. Key findings include: the dielectric contrast between gray and white matter is insufficient to directly induce spiking; mapping functions for population- and orientation-dependent threshold E-field probabilities have been established for various pulse shapes (Figure 1), offering insights into the stimulability of different neuronal populations; and electrophysiology-based activation maps have been generated for simplified models of commercial TMS coils under relevant stimulation conditions. Model validation is ongoing.

Discussion
Our pipeline extends prior modeling work [1-3] to provide a customizable framework for investigating TMS mechanisms and designing virtual clinical trials. Probability maps link dosimetric exposure predictions with electrophysiological responses that in turn modulate brain network dynamics. The pipeline serves to shed light on interaction mechanisms and to help design superior stimulation paradigms, tuned towards optimizing the electrophysiological response, with improved selectivity, efficacy, and safety.




Figure 1. Figure 1. (a) Illustration of the segmented head model, with 40 tissues; (b) example of user-defined TMS coils; (c) final model, featuring the optimally placed TMS coil; (d) neuronal population-, orientation- and pulse-specific threshold E-fields; (e) spiking threshold maps for several neuronal populations.
Acknowledgements
This research is supported by the NIH Common Fund’s SPARC program under award 3OT3OD025348-01S8.
References
[1] Karimi, F., et al. (2025). Precision non-invasive brain stimulation: an in silico pipeline for personalized control of brain dynamics. J. Neural Eng., 10.1088/1741-2552/adb88f.
[2] Aberra, A.S., et al. (2020). Simulation of TMS in head model with morphologically-realistic cortical neurons. Brain Stimul., 13(1):175-189.
[3] Jansen, B.H., Rit, V.G. (1995). EEG and VEP generation in a model of coupled cortical columns. Biol. Cybern., 73:357–366.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P041: Balanced inhibition allows for robust learning of input-output associations in feedforward networks with Hebbian plasticity
Sunday July 6, 2025 17:20 - 19:20 CEST
P041 Balanced inhibition allows for robust learning of input-output associations in feedforward networks with Hebbian plasticity

Gloria Cecchini*1, Alex Roxin1

1Centre de Recerca Matemàtica, Barcelona, Spain

*Email: gcecchini@crm.cat

Introduction

In neural networks, post-synaptic activity depends on multiple pre-synaptic inputs. Hebbian plasticity allows sensory inputs to be associated with internal states, as seen in the CA1 region of the hippocampus. By modifying synaptic weights, Hebbian rules enable sensory inputs to elicit correlated outputs, allowing for efficient memory storage. When input and output patterns are uncorrelated, numerous associations can be encoded. However, if output patterns weakly correlate with input patterns, Hebbian learning reinforces shared synapses across patterns, leading to reduced network flexibility and impaired associative learning.


Methods
We analyzed the effects of Hebbian plasticity in a feedforward network model where input-output correlations emerge due to intrinsic connectivity. Using numerical simulations, we examined how weak correlations between inputs and outputs shape synaptic weight dynamics over time. We then introduced a balanced inhibition mechanism, inspired by in-vivo cortical circuits [1], to assess its impact on synaptic weight distribution and the network’s ability to store diverse associations. Network performance was evaluated by measuring output pattern variability.
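A minimal sketch of the mechanism under stated assumptions: Hebbian outer-product learning on input patterns sharing a common component, with balanced inhibition approximated as subtraction of the mean presynaptic drive; all sizes and rates are placeholders, not the model's exact parameters:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_pat = 100, 50, 20
eta = 0.01

# Input patterns sharing a common component, plus independent pattern noise.
shared = rng.standard_normal(n_in)
X = 0.8 * shared + rng.standard_normal((n_pat, n_in))
Y = rng.standard_normal((n_pat, n_out))            # desired output patterns

def mean_output_correlation(balanced):
    W = np.zeros((n_out, n_in))
    for _ in range(100):
        for x, y in zip(X, Y):
            drive = x - X.mean(axis=0) if balanced else x   # balanced inhibition
            W += eta * np.outer(y, drive)                   # Hebbian update
    out = X @ W.T
    cc = np.corrcoef(out)                          # pairwise output correlations
    return cc[np.triu_indices(n_pat, 1)].mean()

print("plain Hebbian:      ", mean_output_correlation(False))
print("balanced inhibition:", mean_output_correlation(True))

Subtracting the mean drive removes the shared input component before the Hebbian update, so the learned outputs remain decorrelated, which is the effect reported in the Results.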


Results
Our results show that when weak correlations exist between input and output patterns, Hebbian learning selectively strengthens synapses shared across patterns. This reinforcement leads to a rigid network state, where outputs become highly correlated over time. Consequently, the network loses the ability to store multiple distinct associations, significantly reducing its learning capacity. However, introducing balanced inhibition prevents the over-strengthening of shared synapses, allowing output patterns to remain distinct and ensuring a more flexible associative learning process.


Discussion
These findings highlight a fundamental limitation of Hebbian learning in feedforward networks when input-output correlations exist. Without a regulatory mechanism, the network structure becomes overly rigid, preventing effective storage of new associations. Balanced inhibition emerges as a simple yet effective strategy to mitigate this issue, preserving learning flexibility by counteracting correlation-driven synaptic reinforcement. Our study underscores the critical role of inhibition in biological neural circuits, offering insights into how the brain maintains efficient and adaptive information processing.




Acknowledgements
This project has received funding from Proyectos De Generación De Conocimiento 2021 (PID2021-124702OB-I00). This work is supported by the Spanish State Research Agency, through the Severo Ochoa and Maria de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M). We thank CERCA Programme/Generalitat de Catalunya for institutional support.
References
1. Haider, B., Duque, A., Hasenstaub, A. R., & McCormick, D. A. (2006). Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. Journal of Neuroscience, 26(17), 4535-4545. https://doi.org/10.1523/JNEUROSCI.5297-05.2006
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P042: Feedforward and Feedback Inhibition Flexibly Modulates Theta-Gamma Cross-Frequency Interactions in Neural Circuits
Sunday July 6, 2025 17:20 - 19:20 CEST
P042 Feedforward and Feedback Inhibition Flexibly Modulates Theta-Gamma Cross-Frequency Interactions in Neural Circuits

Dimitrios Chalkiadakis*1,2, Jaime Sánchez-Claros1, Víctor J López-Madrona3, Santiago Canals2, Claudio R. Mirasso1

1Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), Consejo Superior de Investigaciones Científicas (CSIC) - Universitat de les Illes Balears (UIB), Palma de Mallorca, Spain
2Instituto de Neurociencias, Consejo Superior de Investigaciones Científicas (CSIC) - Universidad Miguel Hernández (UMH), Sant Joan d’Alacant, Spain
3Institut de Neurosciences des Systèmes, Aix-Marseille Univ - Inserm, Marseille, France

*Email: dimitrios@ifisc.uib-csic.es
Introduction
Brain rhythms are essential for coordinating neuronal activity. Cross-frequency coupling (CFC), particularly between theta (~8 Hz) and gamma (~30 Hz) rhythms, is critical for memory formation [1]. Traditionally, CFC was attributed to slow oscillations modulating faster activity at specific phases. However, metrics such as Cross-Frequency Directionality (CFD) have revealed bidirectional interactions, with both slow-to-fast and fast-to-slow influences [1,2]. Here, we introduce a computational circuit model that flexibly exhibits interactions in both directions, depending on the balance of inhibitory feedforward and feedback motifs. Our framework is supported by electrophysiological measurements in the rat hippocampus.


Methods
We analyzed two motifs based on variations of the (Pyramidal) Interneuron Network Gamma (PING/ING) models, both of which generate gamma rhythms through interactions between pyramidal cells (PCs) and inhibitory basket cells (BCs). An external theta drive modulated the network's activity, inducing cross-frequency interactions (see Fig. 1). Somatic transmembrane currents were computed for cross-frequency dynamics analysis. Our model was validated using the experimental dataset presented in [1], which includes a detailed analysis of pathway-specific field potentials reflecting the activity of Entorhinal Cortex layer III (ECIII) projections to the hippocampal CA1 area in rats navigating both familiar and novel environments.
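For the cross-frequency coupling step, a minimal sketch of a standard mean-vector-length modulation index on a synthetic theta-modulated gamma trace (scipy-based; CFD itself additionally requires a phase-slope estimate across frequencies [2], which is omitted here):

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic trace: 8 Hz theta whose phase modulates 30 Hz gamma amplitude.
theta = np.sin(2 * np.pi * 8 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 30 * t)
sig = theta + 0.3 * gamma + 0.1 * rng.standard_normal(len(t))

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 6, 10)))      # theta phase
amp = np.abs(hilbert(bandpass(sig, 25, 35)))         # gamma amplitude

# Mean vector length: amplitude-weighted phase concentration (0 = no coupling).
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()
print("theta-gamma modulation index:", mvl)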


Results
Our analysis revealed that in θ-ING motifs, feedforward recruitment of BCs drives gamma-to-theta directionality (CFD<0), while in θ-PING motifs, feedback inhibition favors theta-to-gamma directionality (CFD>0, Fig. 1b-iii vs 1c-iii). In combined motifs, varying synaptic strengths within realistic ranges, we found smooth transitions between directionalities (Fig. 1d). Experimental data validated our framework, as behavioral conditions modulated CFD and gamma frequency in line with our model predictions (Fig. 1e). Finally, by evaluating each motif’s capacity to integrate distinct inputs impinging at different sites of the PC dendritic tree, we report their differential role in prioritizing transmission across different information channels.


Discussion
Our framework suggests that feedforward/feedback inhibitory balance regulates the directionality of theta-gamma interactions. Notably, θ-ING/θ-PING modes exist along a continuum rather than as distinct alternatives. In our model, CFD analysis identified transitions between functional modes, aligning with experimental observations across different behavioral states.
We further showed that a feedback-shifted balance promotes strong afferent-driven cross-frequency rhythmicity, while a feedforward-shifted motif broadens encoding windows, favoring parallel pathway transmission. Thus, dynamic CFD measures may reflect predominant inhibitory motifs and flexible prioritization of functional connectivity pathways.



Figure 1. Figure 1. (a) Motifs’ connections with dashed lines differentiating θ-ING (purple) from θ-PING (blue). (b, c) Cross-frequency interactions in θ-ING and θ-PING. (i) Transmembrane currents (gray) with PC/BC spikes in blue/orange. (ii) Cross-frequency coupling. (iii) CFD. (d) Mixed θ-ING/θ-PING motifs show CFD changes inversely to peak γ. (e) Experiments confirm the CFD–γ peak relationship of (d).
Acknowledgements
D. C., J. C. and C. M. acknowledge support from the Spanish Ministerio de Ciencia, Innovación y Universidades through projects PID2021-128158NB-C22 and María de Maeztu CEX2021-001164-M. D. C. and S. C. acknowledge support from the Spanish Ministerio de Ciencia, Innovación y Universidades through projects PID2021-128158NB-C21 and Severo Ochoa CEX2021-001165-S.
References
[1] López-Madrona, V. J., Pérez-Montoyo, E., Álvarez-Salvado, E., Moratal, D., Herreras, O., Pereda, E., … Canals, S. (2020). Different Theta Frameworks Coexist in the Rat Hippocampus and Are Coordinated during Memory-Guided and Novelty Tasks. eLife, 9, e57313. doi:10.7554/eLife.57313
[2] Jiang, H., Bahramisharif, A., Van Gerven, M. A. J., & Jensen, O. (2015). Measuring Directionality between Neuronal Oscillations of Different Frequencies. NeuroImage, 118, 359–367. doi:10.1016/j.neuroimage.2015.05.044
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P043: LEC Ensemble Analysis Reveals Context-Guided Odor Encoding via Adaptive Spatio-Temporal Representations
Sunday July 6, 2025 17:20 - 19:20 CEST
P043 LEC Ensemble Analysis Reveals Context-Guided Odor Encoding via Adaptive Spatio-Temporal Representations

Yuxi Chen*1, Noga Mudrik*2, James J. Knierim1, Adam S. Charles2

1Department of Neuroscience, Mind/Brain Institute, Johns Hopkins University, Baltimore, USA
2Department of Biomedical Engineering, Kavli NDI, Center of Imaging Science, Johns Hopkins University, Baltimore, USA
*Email: ychen315@jhu.edu; nmudrik1@jhu.edu


Introduction
The lateral entorhinal cortex (LEC) has rich associative connections in the rat cortex, linking the hippocampus and neocortex. It encodes spatial context and develops odor selectivity [2,3]. A key question is how LEC enables context-odor integration. Using Neuropixels, we recorded LEC activity while rats performed an odor-context task that included identifying an odor and selecting the corresponding reward port, with the reward-port location switching between two contexts defined by the box the rat occupied (Fig. A). We further developed an ensemble-identification method to reveal how hidden LEC ensembles support context-odor integration via adaptive representation pre- vs. post-training.

Methods
We recorded rats' LEC both before and after learning odor-context associations ("day 1" vs. "day N"). Each day had 6 sessions, with the rat alternately placed in one of two boxes that provide context through unique cues (Fig. A). Spike sorting was done with Kilosort4, and firing rate (FR) was estimated via Gaussian convolution, producing a multi-label tensor dataset (Fig. B). We developed a graph-driven ensemble method that extends [4] to multi-class data, identifies state-dependent ensemble composition (A) adjustments (Fig. D), and captures per-trial temporal variability in ensemble traces (φ). We tested the ensembles' encoding of odor vs. box via 5-fold cross-validation logistic regression.

Results
Neurons vary by odor, box, or both (Fig. C), suggesting ensembles. We found ensembles with session adjustments (Fig. H) that temporally encode box (Fig. E). On day 1, ensemble traces differentiate boxes over time, while on day N, boxes are more separated at trial start. Session-trace averages under a fixed box (Fig. F) show more variability on day 1 compared to day N, which featured consistency across same-box sessions. Odor encoding is less apparent and, on day N, is primarily revealed under a fixed box (Fig. G). Odor prediction accuracy improved when conditioned on box (Fig. I), with a larger improvement on day N. Odor feature importance shows that conditioning on the box shifts encoding timing, with later time points more important under a fixed box.

Discussion
We identified session-adjusting ensembles that capture box encoding, with earlier encoding on day N. On day 1, rats show distinct representations for same-box sessions, suggesting session-by-session encoding, while on day N, consistent same-box session activations suggest box recognition. We hypothesize that post-training, rats first identify the box, which opens an 'odor-integration gate'. This aligns with improved odor-encoding accuracy and the shift in odor timing importance when conditioned on the box compared to marginalized. Our findings suggest that over training, the LEC develops a hierarchical mechanism for context-odor integration that starts with early context identification, followed by box-conditioned odor integration.
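A minimal sketch of two of the preprocessing/decoding steps named above, on placeholder data: firing-rate estimation by Gaussian convolution of binned spikes, and 5-fold cross-validated logistic-regression decoding of a binary label; the graph-driven ensemble method itself is not reproduced here:

import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 120, 80, 200       # illustrative sizes

# Placeholder binned spike counts (trials x neurons x time bins).
spikes = rng.poisson(0.3, size=(n_trials, n_neurons, n_bins))

# Firing-rate estimate: Gaussian convolution along the time axis.
rates = gaussian_filter1d(spikes.astype(float), sigma=3.0, axis=2)

# Decode a binary label (e.g., box identity) from trial-averaged rates.
X = rates.mean(axis=2)                            # trials x neurons
y = rng.integers(0, 2, n_trials)                  # placeholder box labels
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print("5-fold decoding accuracy:", acc)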



Figure 1. A: Experiment. B: Multi-class data across box-odor-sessions. C: Single neuron traces. D: Ensembles-adjusting approach leveraging [4]. E–G: Ensemble traces by box (E), session (F), and odor (G, left: marginalizing, right: conditioning on box). H: Two day-N ensembles adjusting by session. I: Odor prediction confusion matrices ± box conditioning. J: Ensemble/time point importance for odor encoding.
Acknowledgements

Y.C. and J.J.K. were funded by NIA grant P01 AG009973. N.M. was funded by the Kavli Foundation Neurodiscovery Award and as a Kavli Fellow of Johns Hopkins Kavli NDI. A.S.C. was supported by NSF CAREER Award 2340338 and a Johns Hopkins Bridge Grant.
References

[1] Bota, M., Sporns, O., & Swanson, L. W. (2015). Architecture of the cerebral cortical association connectome underlying cognition. PNAS.
[2] Igarashi, K. M., et al. (2014). Coordination of entorhinal–hippocampal ensemble activity during associative learning. Nature.
[3] Tsao, A., Sugar, J., Lu, L., Wang, C., Knierim, J. J., Moser, M. B., & Moser, E. I. (2018). Integrating time from experience in the lateral entorhinal cortex. Nature.
[4] Mudrik, N., Mishne, G., & Charles, A. S. (2024). SiBBlInGS: Similarity-driven Building-Block Inference using Graphs across States. ICML.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P044: Uncertainty-Calibrated Network Initialization via Pretraining with Random Noise
Sunday July 6, 2025 17:20 - 19:20 CEST
P044 Uncertainty-Calibrated Network Initialization via Pretraining with Random Noise

Jeonghwan Cheon*1, Se-Bum Paik1

1Department of Brain and Cognitive Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

*Email: jeonghwan518@kaist.ac.kr


Uncertainty calibration — the ability to estimate predictive confidence that reflects the actual accuracy — is essential to real-world decision-making. Human cognition involves metacognitive processes, allowing us to assess uncertainty and distinguish between what we know and what we do not know. In contrast, current machine learning models often struggle to properly calibrate their confidence, even though they have achieved high accuracy in various task domains [1]. This miscalibration presents a significant challenge in real-world applications, such as autonomous driving or medical diagnosis, where incorrect decisions can have critical consequences. Although post-processing techniques have been used to address calibration issues, they require additional computational steps to obtain reliable confidence estimates. In this study, we show that random initialization — a common practice in deep learning — is a fundamental cause of miscalibration. We found that randomly initialized, untrained networks exhibit excessively high confidence despite lacking meaningful knowledge. This miscalibration at the initial stage prevents the alignment of confidence with actual accuracy as the network learns from data. To address this issue, we draw inspiration from the developmental brain, which is initialized through spontaneous neural activity even before receiving sensory inputs [2]. By mimicking this process, we pretrain neural networks with random noise [3] and demonstrate that this simple approach resolves the overconfidence issue, bringing initial confidence levels to near chance. This pre-calibration through random noise pretraining enables optimal calibration by aligning confidence levels with actual accuracy during subsequent data training. As a result, networks pretrained with random noise achieve significantly lower calibration errors compared to those trained solely with data. We also confirmed that this method generalizes well across different conditions, regardless of dataset size or network complexity. Notably, these pre-calibrated networks consistently identify “unknown data” by showing low confidence for outlier inputs. Our findings present a key solution for calibrating uncertainty in both in-distribution and out-of-distribution scenarios without the need for post-processing. This provides a fundamental approach to addressing miscalibration issues in artificial intelligence and may offer insights into the biological development of metacognition.
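The core idea is simple enough to sketch. The following minimal example (our illustration under assumed settings, not the authors' code) pretrains a small classifier on Gaussian noise inputs paired with random labels, which drives its initial confidence toward chance before any data training:

```python
# Sketch of random-noise pretraining for confidence pre-calibration.
# Architecture, sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def mean_confidence(m, x):
    with torch.no_grad():
        return m(x).softmax(dim=1).max(dim=1).values.mean().item()

probe = torch.randn(1024, 20)
print("confidence at random init:", mean_confidence(model, probe))

# Pretraining phase: random Gaussian inputs paired with random labels.
for _ in range(500):
    x = torch.randn(128, 20)
    y = torch.randint(0, 10, (128,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

print("confidence after noise pretraining:", mean_confidence(model, probe))
# Confidence sits near the 1/10 chance level; subsequent data training
# then starts from a pre-calibrated network.
```

Because the random labels are independent of the inputs, the loss-minimizing output is the uniform distribution, which is exactly the near-chance initial confidence described above.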
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grants (NRF-2022R1A2C3008991 to S.P.) and by the Singularity Professor Research Project of KAIST (to S.P.).
References
1. Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks. In International Conference on Machine Learning (pp. 1321-1330). PMLR.
2. Martini, F. J., Guillamón-Vivancos, T., Moreno-Juan, V., Valdeolmillos, M., & López-Bendito, G. (2021). Spontaneous activity in developing thalamic and cortical sensory networks. Neuron, 109(16), 2519-2534.
3. Cheon, J., Lee, S. W., & Paik, S. B. (2024). Pretraining with random noise for fast and robust learning without weight transport. Advances in Neural Information Processing Systems, 37, 13748-13768.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P045: Cortical Microcircuit Modeling for Concurrent EEG-fMRI Recordings
Sunday July 6, 2025 17:20 - 19:20 CEST
P045 Cortical Microcircuit Modeling for Concurrent EEG-fMRI Recordings

Shih-Cheng Chien*1, Stanislav Jiříček1,2,3, Thomas Knösche4, Jaroslav Hlinka1,2, Helmut Schmidt1


1Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic
2National Institute of Mental Health, Klecany, Czech Republic
3Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
4Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany


*Email: chien@cs.cas.cz
Introduction

EEG and fMRI are widely used noninvasive methods for human brain imaging. Concurrent EEG-fMRI recordings help answer fundamental questions about the functional roles of EEG rhythms, their origin, and their relationship with the BOLD signal. Given the fact that both EEG and BOLD signals predominantly originate from postsynaptic potentials (PSPs) [1,2], and considering that distinct inhibitory neuron types influence EEG rhythms differently [3] and possess varied neurovascular coupling properties [4], a cortical microcircuit model incorporating multiple inhibitory neuron types would offer a promising framework for investigating local neural dynamics underlying EEG rhythms and their relationship with BOLD signals.

Methods
We developed a cortical microcircuit model that incorporates excitatory (E) and inhibitory (PV, SOM, and VIP) populations across cortical layers (L2/3, L4, L5, and L6) with realistic configurations, including connection probabilities, synaptic strengths, neuronal densities, and firing rate functions for each neuron type. The model receives three types of external inputs: (1) lateral input, (2) modulatory input, and (3) thalamic input. We characterized the spectral properties of EEG rhythms across a range of external inputs, explored EEG-BOLD correlations under constant and varying input conditions, and analyzed how neuronal populations contribute to EEG rhythms and the EEG-BOLD correlation.
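For intuition, a heavily simplified rate-model skeleton of such a microcircuit is sketched below; the four populations, placeholder weight matrix W, time constants, and gain function are our assumptions, whereas the full model spans layers L2/3-L6 with data-derived connection probabilities and synaptic strengths.

```python
# Illustrative four-population rate model (not the authors' implementation).
import numpy as np

pops = ["E", "PV", "SOM", "VIP"]
W = np.array([  # W[i, j]: weight from population j onto population i
    [ 0.8, -1.0, -0.5,  0.0],
    [ 1.0, -0.8, -0.3,  0.0],
    [ 0.6,  0.0,  0.0, -0.4],
    [ 0.4,  0.0, -0.3,  0.0],
])
tau = np.array([10.0, 8.0, 12.0, 12.0])   # ms, placeholder time constants
ext = np.array([2.0, 1.0, 0.5, 0.5])      # external drive standing in for
                                          # lateral/modulatory/thalamic input

def f(x):
    return np.maximum(np.tanh(x), 0.0)    # placeholder firing-rate function

dt, T = 0.1, 1000.0
r = np.zeros(4)
rates = []
for _ in range(int(T / dt)):
    noise = np.random.randn(4) * 0.3      # synaptic noise driving rhythms
    r += dt / tau * (-r + f(W @ r + ext + noise))
    rates.append(r.copy())
rates = np.array(rates)  # summed PSP proxies could feed a PSD estimate
print("mean population rates:", rates.mean(axis=0))
```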
Results
The model generates EEG rhythms, with increased power in the alpha (8-12 Hz), beta (13-30 Hz), and gamma (30-50 Hz) bands at low modulatory input and increased delta (0.5-4 Hz) and theta (4-7 Hz) power at high modulatory input. We found that low-frequency EEG activity (delta through low beta) was driven more strongly by infragranular than by supragranular populations; conversely, supragranular populations more strongly drove high-frequency EEG activity (high beta and gamma). As for EEG-BOLD correlations, we found that the alpha-BOLD correlation is almost exclusively driven by fluctuations (i.e., the standard deviation of firing rates) in infragranular populations, with little contribution from the supragranular layer.

Discussion
Our cortical microcircuit model generates EEG rhythms based on a generic mechanism involving the nonlinear amplification and filtering of synaptic noise. Our investigation focused on different forms of long-range external input, which targets distinct neuronal populations. The model could be used to help design optimal stimulation protocols for various applications, including the effect of specific neuronal populations on EEG and BOLD.




Acknowledgements
The publication was supported by a Lumina-Quaeruntur fellowship (LQ100302301) by the Czech Academy of Sciences (awarded to HS) and ERDF-Project Brain Dynamics, No. CZ.02.01.01/00/22_008/0004643. We acknowledge the core facility MAFIL supported by the Czech-BioImaging large RI project (LM2018129 funded by MEYS CR) for their support in obtaining scientific data presented in this work.
References
[1] https://doi.org/10.1016/j.brainresrev.2009.12.004
[2] https://doi.org/10.1016/j.cub.2018.11.052
[3] https://doi.org/10.1016/j.tins.2003.09.016
[4] https://doi.org/10.1523/JNEUROSCI.3065-04.2004
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P046: Model Parameter Estimation for TMS-induced MEPs
Sunday July 6, 2025 17:20 - 19:20 CEST
P046 Model Parameter Estimation for TMS-induced MEPs

Shih-Cheng Chien*1, Christian Röse2, Peng Wang2,3, Helmut Schmidt1, Jaroslav Hlinka1,4, Thomas R. Knösche2, Konstantin Weise2


1Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic
2Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
3Institute of Psychology, University of Greifswald, Greifswald, Germany
4National Institute of Mental Health, Klecany, Czech Republic

*Email: chien@cs.cas.cz

Introduction

TMS-induced MEPs are widely utilized in both basic research and clinical practice. The MEP parameters, such as input-output (I/O) curves, often exhibit significant variability across individuals, both in healthy populations and in patients. Understanding the sources of this variability is critical for improving the precision of motor-related diagnoses. Previously, we developed a biologically inspired model capable of reproducing MEP waveforms. In this study, we apply model fitting to an open MEP dataset of ten healthy participants [1] and investigate the distribution of model parameters underlying the variability of I/O curves.

Methods
The model incorporates the descending motor pathways from the spinal cord to the hand muscles, with synthetic D- and I-waves serving as inputs. The spinal cord component consists of 100 conductance-based leaky integrate-and-fire alpha motor neurons (aMNs), which interact with a population of Renshaw cells (RCs) that function as a common inhibitory pool. The aMNs are connected to 100 motor units in the hand muscle component. Each motor unit generates a motor unit action potential (MUAP) in response to spikes from its corresponding aMN. The simulated MEP is computed as the sum of these time-shifted MUAPs.
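The muscle stage lends itself to a compact sketch: each motor unit contributes a stereotyped, time-shifted MUAP, and the MEP is their sum. The MUAP shape, spike times, and amplitudes below are illustrative placeholders rather than fitted model output.

```python
# Sketch of the summation stage: MEP = sum of time-shifted MUAPs.
import numpy as np

fs = 10000                      # Hz
t = np.arange(0, 0.05, 1 / fs)  # 50 ms window

def muap(tt, width=0.002):
    # First derivative of a Gaussian: a simple biphasic MUAP shape.
    return -(tt / width**2) * np.exp(-tt**2 / (2 * width**2))

rng = np.random.default_rng(1)
n_units = 100
spike_times = rng.uniform(0.020, 0.028, size=n_units)  # s, placeholder
amplitudes = rng.lognormal(mean=0.0, sigma=0.5, size=n_units)

mep = np.zeros_like(t)
for ts, a in zip(spike_times, amplitudes):
    mep += a * muap(t - ts)     # time-shifted MUAP of one motor unit

p2p = mep.max() - mep.min()
print(f"peak-to-peak amplitude: {p2p:.2f} (arbitrary units)")
```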
Results
The resting motor threshold (RMT) across individuals in the dataset was 41.3 ± 6.0% of the maximum stimulator output (MSO). Peak latencies showed no significant variation with MEP peak-to-peak amplitude. Fitting the model to individual MEP waveforms provided insights into the neuronal interactions underlying MEP generation. The D- and I-waves, after convolution with synaptic kernels (AMPA and NMDA), produced sustained inputs to the aMNs. Renshaw cells played a critical role in suppressing excessive spikes, particularly at high TMS intensities, preventing excessive oscillations in the MEP waveform.
Discussion
We employed a computationally efficient and biologically plausible model to explain the variability in individual TMS-induced MEPs. The fitting procedure relied on synthesizing common DI-waves for healthy participants, which may introduce additional errors in parameter estimation. Future work will validate this approach using patient data, where individual DI-waves are available, to improve accuracy and robustness in parameter fitting.




Acknowledgements

The publication was supported by a Lumina-Quaeruntur fellowship (LQ100302301) by the Czech Academy of Sciences (awarded to HS) and ERDF-Project Brain Dynamics, No. CZ.02.01.01/00/22_008/0004643.
References
[1] https://doi.org/10.1016/j.brs.2022.06.013
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P047: Graph Analysis of EEG Functional Connectivity during Lie Detection
Sunday July 6, 2025 17:20 - 19:20 CEST
P047 Graph Analysis of EEG Functional Connectivity during Lie Detection

Yun-jeong Cho1, Hoon-hee Kim*2

1 Department of Data Engineering, Pukyong National University, Busan, South Korea
2Department of Computer Engineering and Artificial Intelligence, Pukyong National University, Busan, South Korea

*Email: h2kim@pknu.ac.kr

Introduction

Lie detection is an important research topic in various fields, including psychology, forensic science, and neuroscience, as it involves complex processes. By measuring the brain's functional connectivity using EEG data and calculating the graph-theoretical metrics (e.g., the average clustering coefficient), it is possible to quantitatively assess the changes in brain network dynamics between lie and truth conditions [1]. In this study, we aimed to analyze the overall brain connectivity differences between lie and truth conditions by computing inter-channel coherence and network metrics within a specific frequency band during the answer phase.
Methods
Twelve subjects were divided into two groups: those who consistently lied and those who consistently told the truth. After excluding two subjects from the lie group, each group comprised five subjects. EEG data were recorded for 15 seconds while subjects answered a specific question, with only the first 3 seconds after answer onset analyzed. Inter-channel coherence [2] was computed in the high-frequency range, focusing on the beta band, which is activated during lying. A functional connectivity (FC) matrix was constructed by applying a threshold, and key metrics, such as the average clustering coefficient and global efficiency, were calculated. Statistical validation was performed using t-tests and Mann-Whitney U tests.
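A minimal sketch of this pipeline follows, using synthetic EEG and illustrative channel counts, band edges, and threshold (scipy for coherence, networkx for the graph metrics):

```python
# Beta-band coherence -> thresholded FC graph -> clustering / efficiency.
# The EEG, threshold, and montage size are placeholders, not the study's.
import numpy as np
import networkx as nx
from scipy.signal import coherence

fs, n_ch, n_samp = 250, 19, 250 * 3          # 3 s after answer onset
rng = np.random.default_rng(2)
eeg = rng.standard_normal((n_ch, n_samp))

C = np.zeros((n_ch, n_ch))
for i in range(n_ch):
    for j in range(i + 1, n_ch):
        f, Cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs)
        band = (f >= 13) & (f <= 30)          # beta band
        C[i, j] = C[j, i] = Cxy[band].mean()

A = (C > 0.2).astype(int)                     # threshold -> binary FC matrix
G = nx.from_numpy_array(A)
print("average clustering:", nx.average_clustering(G))
print("global efficiency:", nx.global_efficiency(G))
```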
Results
Overall, significant differences in brain network metrics were observed between the lie and truth conditions (Fig. 1). In particular, the average clustering coefficient was significantly higher in the lie group than in the truth group. Statistical analyses confirmed that these differences were significant, with a larger-than-expected effect size, suggesting that overall brain connectivity is altered when individuals lie. These findings support the notion that the complex cognitive processes involved in lying may lead to changes in the brain's network organization.
Discussion
This study compared the overall brain network changes between lie and truth conditions using the average clustering coefficient computed for each subject. The results showed that the lie condition exhibited increased global brain connectivity, suggesting an additional cognitive load during lying. However, using subject-level averages limits the ability to directly assess local connectivity changes in specific brain regions, and caution is warranted in interpretation due to the small sample size. Future research should include a larger number of subjects and incorporate various network metrics, such as inter-channel analyses, to more precisely evaluate brain connectivity changes.



Figure 1. Topographic maps of the average clustering coefficient comparing lie (left) and truth (right) groups during the answer phase. Increased clustering (dark red) in the lie condition indicates significantly greater overall brain connectivity compared to the truth condition.
Acknowledgements
This study was supported by the National Police Agency and the Ministry of Science, ICT & Future Planning (2024-SCPO-B-0130), the National Research Foundation of Korea grant funded by the Korean government (RS-2023-00242528), and the National Program for Excellence in SW, supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation), in 2025 (2024-0-00018).
References
1. Gao J, Gu L, Min X et al. (2022). Brain Fingerprinting and Lie Detection: A Study of Dynamic Functional Connectivity Patterns of Deception Using EEG Phase Synchrony Analysis. IEEE Journal of Biomedical and Health Informatics, 26(2), 600-613. https://doi.org/10.1109/jbhi.2021.3095415
2. Bowyer S. (2016). Coherence a measure of the brain networks: past and present. Neuropsychiatric Electrophysiology, 2(1). https://doi.org/10.1186/s40810-015-0015-7
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P048: From Density to Void: Why Brain Networks Fail to Reveal Complex Higher-Order Structures
Sunday July 6, 2025 17:20 - 19:20 CEST
P048 From Density to Void: Why Brain Networks Fail to Reveal Complex Higher-Order Structures

Moo K. Chung*1, Anass B. El-Yaagoubi2, Anqi Qiu3, Hernando Ombao2


1Department of Biostatistics and Medical informatics, University of Wisconsin, Madison, USA
2Statistics Program, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
3Department of Health Technology and Informatics, Hong Kong, China


*Email:mkchung@wisc.edu

Introduction

In brain network analysis using resting-state fMRI, there is growing interest in modeling higher-order interactions—beyond simple pairwise connectivity—using persistent homology [1]. Despite the promise of these advanced tools, robust and consistently observed time-evolving higher-order interactions remain elusive. In this study, we examine why conventional analyses often fail to reveal complex higher-order structures—such as interactions involving four, five, or more nodes—and explore whether higher-order interactions truly exist in functional brain networks.

Methods

We apply persistent homology to analyze correlation networks over a range of thresholds h. A simplicial complex is constructed from the connectivity matrix c(i,j) where nodes (0-simplices) represent individual time series and edges (1-simplices) are included if c(i,j) > h. For triangles (2-simplices), a simplex is formed if all three pairwise connections among a triplet of nodes exceed the threshold h. Higher-order simplices are defined analogously. We then examine the consistency of these higher-order topological features across time and subjects by quantifying the probability of overlap in the persistent features.
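The construction can be sketched directly: given a correlation matrix c and threshold h, a set of k nodes forms a (k-1)-simplex when every pairwise correlation in the set exceeds h. The brute-force counter below (synthetic data, illustrative thresholds) shows how simplex counts change as h grows:

```python
# Count edges, triangles, and tetrahedra of the thresholded simplicial
# complex built from a correlation matrix, as described in the Methods.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 20))       # 200 time points, 20 regions
c = np.corrcoef(X.T)

def count_simplices(c, h, max_order=3):
    n = c.shape[0]
    counts = {}
    for k in range(2, max_order + 2):     # k nodes -> (k-1)-simplex
        counts[k - 1] = sum(
            1 for nodes in combinations(range(n), k)
            if all(c[i, j] > h for i, j in combinations(nodes, 2))
        )
    return counts

for h in (0.1, 0.3, 0.5):
    print(h, count_simplices(c, h))       # counts collapse quickly with h
```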


Results

Our preliminary analysis based on rs-fMRI of 400 subjects reveals that correlation networks tend to yield either nearly complete graphs or highly fragmented structures, neither of which exhibits robust higher-order topological features. As the number of nodes involved in an interaction increases, the probability that multiple brain regions activate simultaneously decays exponentially, as observed in both empirical data and theoretical models. These findings indicate that resting-state fMRI predominantly reflects pairwise interactions, with only infrequent occurrences of three-node interactions. Nonetheless, even these predominant pairwise interactions are highly intricate, giving rise to complex network dynamics characterized by lower-dimensional topological profiles such as 0D (connected components) and 1D (cycles) features [2].

Discussion
Our results indicate that conventional connectivity analyses are limited in detecting robust higher-order interactions, as they often yield networks that are either overly dense or fragmented, masking subtle connectivity patterns. Alternative metrics, such as mutual information or entropy, may better capture the nonlinear, multiscale dependencies among brain regions [3]. Notably, higher-order interactions are not exclusively defined by multi-node connectivity; even pairwise interactions can become highly complex when organized into cycles or spiral patterns over time. Future work should integrate these alternative measures with persistent homology to reveal hidden connectivity patterns, ultimately enhancing our understanding of functional brain organization.





Figure 1. Left: Graph representation of pairwise interactions between nodes in a brain network. Right: Higher-order interactions depicted with colored simplices—yellow for 3-node (triangle) interactions and blue for 4-node (tetrahedron) interactions.
Acknowledgements
NIH grants EB028753, MH133614 and NSF grant MDS-201077

References
[1] El-Yaagoubi, A.B., Chung, M.K., Ombao, H. (2023). Topological data analysis for multivariate time series data. Entropy, 25(11), 1509.
[2] Chung, M.K., Ramos, C.G., De Paiva, F.B., Mathis, J., Prabhakaran, V., Nair, V.A., Meyerand, M.E., Hermann, B.P., Binder, J.R. and Struck, A.F. (2023). Unified topological inference for brain networks in temporal lobe epilepsy using the Wasserstein distance. NeuroImage, 284, 120436.
[3] Li, Q., Steeg, G. V., Yu, S., Malo, J. (2022). Functional connectome of the human brain with total correlation. Entropy, 24(12), 1725.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P049: Planning and hierarchical behaviors in homeostatic optimal control
Sunday July 6, 2025 17:20 - 19:20 CEST
P049 Planning and hierarchical behaviors in homeostatic optimal control

Simone Ciceri*1,2, Atilla-Botond Kelemen1,3, Henning Sprekeler1,3,4


1Modelling of Cognitive Processes, Technical University of Berlin, Berlin, Germany
2Charité–Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany
3Bernstein Center for Computational Neuroscience, Berlin, Germany
4Science of Intelligence, Research Cluster of Excellence, Berlin, Germany

*Email:simone.ciceri@tu-berlin.de

Introduction
Animal survival depends on the ability to maintain the stability of a set of internal variables, such as nutrient levels or water balance. This internal regulation, known as homeostasis, often requires the acquisition of resources via interactions with the external environment [1]. We reasoned that competition among multiple homeostatic needs combined with a rich environment may be sufficient to explain a wide range of complex behaviors. To test this hypothesis, we developed a control-theoretic problem setting for an agent that aims to preserve homeostasis of multiple internal variables while foraging in environments with distributed resources.

Methods
We model a synthetic agent that actively forages to minimize deviations of its internal variables from their respective set points, which reflect its individual demands. These variables gradually decay over time but can be replenished by collecting resources from the environment. The resources are distributed around the environment to generate competition among different needs. We study the foraging behavior that results from minimizing a cost function that combines homeostatic errors and motion costs. In simple 1D environments, we obtain optimal behavioral policies using optimal control methods. In 2D settings, we parametrize the policies with artificial neural networks that are optimized using evolutionary algorithms.
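As a toy version of this setup, the sketch below simulates two decaying internal variables in a 1D environment with two resource sites and accumulates a cost of homeostatic error plus motion; the greedy policy stands in for the optimal-control and evolutionary solutions actually used.

```python
# Toy homeostatic foraging agent; all constants are illustrative.
import numpy as np

set_points = np.array([1.0, 1.0])
decay = np.array([0.01, 0.02])      # variable-specific decay rates
sites = np.array([-5.0, 5.0])       # 1D positions of the two resources

x, internal = 0.0, set_points.copy()
total_cost = 0.0
for step in range(2000):
    internal -= decay
    need = np.argmax(set_points - internal)          # most depleted variable
    move = np.clip(sites[need] - x, -0.2, 0.2)       # bounded speed
    x += move
    if abs(x - sites[need]) < 0.3:                   # replenish at the site
        internal[need] = min(internal[need] + 0.05, set_points[need])
    # Cost: squared homeostatic error plus a motion penalty.
    total_cost += np.sum((set_points - internal) ** 2) + 0.1 * move ** 2
print(f"accumulated cost: {total_cost:.1f}, final state: {internal}")
```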

Results
We show that internal homeostasis can generate a rich repertoire of behaviors that depend on both the structure of the environment and internal demands. First, when resources are sparse the agent displays planning strategies, such as stocking up on one variable before foraging for others. Second, agent behaviors can be decomposed into a small set of simpler policies, each of which satisfies one internal need. The agent hierarchically selects from this set of behaviors based on its internal state. Finally, optimal strategies can be highly sensitive to the agent's demands. In the same environment, we can observe sudden transitions between different behaviors when changing the set point at which the internal variables need to be maintained.

Discussion
Our model demonstrates the possible emergence of complex behavior from the simple goal of internal stability. Optimal foraging strategies are shaped by both environmental factors and internal demands, potentially accounting for the large variability often observed among individuals of the same species, even within the same environment. Our model also emphasizes how strongly the dynamics of the internal state—which are generally not accessible in behavioral experiments—are mirrored in the agent's behavior. The relevance of these findings is not confined to behavioral modeling and analysis: it is likely that the neural activity that drives animal behaviors will be similarly sensitive to the internal state of the animal.





Acknowledgements
-
References
● Woods, S. C., & Ramsay, D. S. (2007). Homeostasis: Beyond Curt Richter. Appetite, 49(2), 388-398.https://doi.org/10.1016/j.appet.2006.09.015


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P050: Linking biological context to models with ModelDB
Sunday July 6, 2025 17:20 - 19:20 CEST
P050 Linking biological context to models with ModelDB

Inessa Cohen1, Xincheng Cai2, Mengmeng Du2, Yiting Kong2, Hongyi Yu2, Robert A. McDougal*1,2,3,4
1Program in Computational Biology and Biomedical Informatics, Yale University, New Haven, CT, USA
2Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
3Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, CT, USA
4Wu Tsai Institute, Yale University, New Haven, CT, USA


*Email: robert.mcdougal@yale.edu
Introduction

ModelDB (https://modeldb.science) was founded almost 30 years ago to address the challenges of reproducibility in computational neuroscience, promote code reuse, and facilitate model discovery. It has grown to hold the source code for ~1,900 published studies. Recent enhancements, presented here, have focused on expanding its model collection and improving its biological context. However, discoverability and interpretability depend on having reliable metadata for entire models and their components. To address this, we sought to use machine learning (ML) to classify ion channel subtypes based on source code, identify key predictors, and compare results to those from a large language model (LLM).
Methods
We applied manual and automatic techniques to increase the biological context displayed when exploring ModelDB and to increase the visibility of existing data. Network model properties and some file-level ion channel types were manually annotated. Biology-focused explanations of model files were generated automatically by an LLM. Features were extracted using a rule-based approach from NEURON [1] MOD files (a common format for ion channel and receptor model components) after deduplication that ignored white space and comments. Five-fold cross-validation was used to assess ML predictions. Subsets of model code from many files and a controlled vocabulary were provided to an LLM to generate whole-model metadata, which was assessed manually.
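A hedged sketch of the rule-based feature extraction follows; the regular expressions and feature set are simplified illustrations of the kinds of predictors described (state count, ion usage, nonspecific currents), not the production pipeline, and mod_text is a stand-in for a real deduplicated file.

```python
# Toy feature extractor for NEURON MOD files.
import re

mod_text = """
NEURON {
    SUFFIX kca
    USEION k READ ek WRITE ik
    USEION ca READ cai
}
STATE { m h }
"""

def extract_features(text):
    # Strip ':' comments before matching, mirroring the deduplication
    # that ignored white space and comments.
    text = re.sub(r":.*", "", text)
    states = re.search(r"STATE\s*{([^}]*)}", text)
    return {
        "n_states": len(states.group(1).split()) if states else 0,
        "uses_k": bool(re.search(r"USEION\s+k\b", text)),
        "uses_ca": bool(re.search(r"USEION\s+ca\b", text)),
        "nonspecific": bool(re.search(r"NONSPECIFIC_CURRENT", text)),
    }

print(extract_features(mod_text))
# {'n_states': 2, 'uses_k': True, 'uses_ca': True, 'nonspecific': False}
# Feature dicts like this would feed the cross-validated classifier.
```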
Results
We have updated the ModelDB website to support more types of models and to pair browsing models and files with biological and computational context. The ML classifier identified a number of features (state count, nonspecific currents, using common ions) as key for predicting ion channel type. It worked well for identifying broad channel types but struggled with more granular subtype identification which had few examples in our training set. Calcium-activated potassium channels were one of the best performing subtypes. ML results were compared with those from an LLM and from rule-based approaches. LLM performance on whole model metadata prediction from source code was highly dependent on the broad category of metadata.
Discussion
ModelDB has long prioritized connecting models to biology, from its days as part of the SenseLab project, where its sister site NeuronDB [2] once gathered compartment-level channel expression data. Many model submitters now choose to contribute an "experimental motivation" when submitting new models. Biology and model code are both often unclear on what should count as "the same," posing challenges for both manual and automated metadata assignment. Nevertheless, it is our hope that pairing code with enriched biological context will make computational models more accessible, interpretable, and reusable.



Acknowledgements
We thank Rui Li for curating ModelDB model network metadata.
References
1. Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179-1209. https://doi.org/10.1162/neco.1997.9.6.1179
2. Mirsky, J. S., Nadkarni, P. M., Healy, M. D., Miller, P. L., & Shepherd, G. M. (1998). Database tools for integrating and searching membrane property data correlated with neuronal morphology. Journal of Neuroscience Methods, 82(1), 105-121. https://doi.org/10.1016/S0165-0270(98)00049-1
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P051: Dynamical systems principles underlie the ubiquity of neural data manifolds
Sunday July 6, 2025 17:20 - 19:20 CEST
P051 Dynamical systems principles underlie the ubiquity of neural data manifolds

Isabel M. Cornacchia*1, Arthur Pellegrino*1,2, Angus Chadwick1

1 Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, UK
2Gatsby Computational Neuroscience Unit, School of Life Sciences, University College London, UK


*Email: isabel.cornacchia@ed.ac.uk, a.pellegrino@ucl.ac.uk

Introduction

The manifold hypothesis posits that low-dimensional geometry is prevalent in high-dimensional data. In neuroscience, such data emerge from complex interactions between neurons, most naturally described as dynamical systems. While these models offer mechanistic descriptions of the processes generating the data, the geometric perspective remains largely empirical, relying on dimensionality reduction methods to extract manifolds from data. The link between the dynamic and geometric views on neural systems therefore remains an open question. Here, we argue that modelling neural manifolds in a differential geometric framework naturally provides this link, offering insights into the structure of neural activity across tasks and brain regions.

Methods
In this work, we argue that many manifolds observed in high-dimensional neural systems emerge naturally from the structure of their underlying dynamics. We provide a mathematical framework to characterise the conditions for a dynamical system to be manifold-generating. Using the framework, we verify in datasets that such conditions are often met in neural systems. Next, to investigate the relationship between the dynamics and geometry of neural population activity, we apply this framework to jointly infer both the manifold and the dynamics on it directly from large-scale neural recordings.
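For intuition about manifold-generating dynamics (our illustration, not the paper's inference method), the sketch below embeds a planar limit cycle into a high-dimensional "neural" space and checks that the resulting activity concentrates on a low-dimensional manifold:

```python
# Latent limit-cycle dynamics embedded in high dimensions; PCA variance
# reveals the low-dimensional manifold. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps, n_neurons = 0.01, 5000, 100

# Latent dynamics: a Hopf-style oscillator converging to the unit circle.
z = np.array([0.1, 0.0])
latents = []
for _ in range(n_steps):
    r2 = z @ z
    dz = np.array([-z[1], z[0]]) + (1.0 - r2) * z   # rotation + radial pull
    z = z + dt * dz
    latents.append(z.copy())
latents = np.array(latents)

# Random embedding into neuron space plus observation noise.
E = rng.standard_normal((2, n_neurons))
activity = latents @ E + 0.05 * rng.standard_normal((n_steps, n_neurons))

# Variance concentrates in ~2 dimensions, revealing the manifold.
evals = np.linalg.eigvalsh(np.cov(activity.T))[::-1]
print("fraction of variance in top 2 PCs:", evals[:2].sum() / evals.sum())
```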
Results
In recordings of macaque motor and premotor cortex during a reach task [1], we uncover a manifold with behaviourally relevant geometry: neural trajectories on the inferred manifold closely resemble the hand movement of the animal, without a need to explicitly decode the behaviour. Furthermore, from 2-photon imaging of mouse visual cortex during a visual discrimination task [2], we show that neurons tracked over one month of learning have a stable curved manifold shape, despite the neural dynamics changing. In these two example datasets, we show that considering the curvature of neural manifolds and the dynamics on them allows us to extract more behaviourally relevant neural representations and to probe their change over learning (Fig. 1).
Discussion
Overall, our framework offers a formal mathematical link between the geometric and dynamical perspectives on population activity, and provides a generative model to uncover task manifolds from experimental data. We use this framework to highlight how behavioural and stimulus variables are naturally encoded on curved manifolds, and how this encoding evolves over learning. This lays the mathematical groundwork for systematically modelling neural manifolds in the language of differential geometry, which can be reused across tasks and brain regions. Overall, bridging geometry and dynamics is a key step towards a unified view of neural population activity which can be used to generate and test hypotheses about neural computations in the brain.



Figure 1. a. The framework (MDDS) jointly fits the manifold and dynamics to data. b. Reach task. c. Inferred manifold and trajectories within it. d. Visual task. e. Neural representation of the angle over time. f. Variance explained by a model trained on pre-learning and (top): tested on pre-learning (bottom): tested on post-learning while refitting components, either separately or in combination.
Acknowledgements

References
1. https://doi.org/10.1016/j.neuron.2018.09.030
2. https://doi.org/10.1038/s41593-021-00914-5


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P052: Geometry and dynamics in sinusoidally perturbed cortical networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P052 Geometry and dynamics in sinusoidally perturbed cortical networks

Martina Cortada*1,2, Joana Covelo1, Maria V. Sanchez-Vives1,3

1Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Carrer de Rosselló 149-153, 08036 Barcelona, Spain
2Facultat de Física, Universitat de Barcelona (UB), Carrer de Martí i Franquès 1-11, 08028 Barcelona, Spain
3Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, 08010 Barcelona, Spain
*Email: cortada@recerca.clinic.cat


Introduction
Cerebral cortex networks exemplify complex coupled systems in which collective behaviors give rise to emergent properties. This study explores how sinusoidal electric-field modulation impacts cortical networks exhibiting self-sustained slow oscillations (SOs), characterized by alternating neuronal silence (Down states) and activity (Up states) at around 1 Hz [1,2]. SOs, described as the cortical default activity pattern [3], are crucial for memory consolidation, plasticity, and homeostasis [4].
Here, we aimed to understand SOs and how to control them. Specifically, how do the amplitude and frequency of sinusoidal electric fields shape emergent network states and the transitions across them?
Methods
We varied the frequencies and amplitudes of sinusoidal fields applied to cortical networks exhibiting spontaneous SOs. These networks form a coupled system in which intrinsic oscillations interact with an external periodic force. To characterize their response, we define a suitably reduced phase space in which trajectories emerge from the interaction between the perturbation and the network's activity. These trajectories are constructed by segmenting the network response into single-cycle epochs corresponding to the perturbation, mapping each oscillatory response into a structured, low-dimensional representation. The system's behavior is then analyzed through the evolution of these trajectories within this phase space, using geometric and topological approaches.
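A toy version of this cycle-wise trajectory analysis is sketched below; a driven van der Pol oscillator stands in for the cortical network, and all parameters are illustrative.

```python
# Segment a sinusoidally driven oscillator into single forcing cycles and
# measure trajectory closure (start-to-end distance) in a phase plane.
import numpy as np

fs, f_drive, T = 1000, 1.2, 60.0
t = np.arange(0, T, 1 / fs)
drive = np.sin(2 * np.pi * f_drive * t)

# Van der Pol-style intrinsic oscillator (~1 Hz) as a stand-in network.
x, y = 0.1, 0.0
xs = []
for d in drive:
    dx = y
    dy = 2.0 * (1 - x**2) * y - (2 * np.pi * 1.0) ** 2 * x + 5.0 * d
    x += dx / fs
    y += dy / fs
    xs.append(x)
xs = np.array(xs)

cycle = int(fs / f_drive)                 # samples per forcing cycle
closures = []
for k in range(0, len(xs) - cycle, cycle):
    seg = np.column_stack([xs[k:k + cycle], np.gradient(xs[k:k + cycle])])
    closures.append(np.linalg.norm(seg[-1] - seg[0]))
print("mean trajectory closure:", np.mean(closures))  # small => locking
```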

Results
When sinusoidally perturbed, these networks exhibit distinct qualitative behaviors shaped by the interplay between intrinsic oscillations and external driving forces. By examining the trajectories representing this interplay, we found that the Euclidean distance between their start and end points distinguishes different dynamical regimes, including phase and frequency locking, quasi-periodicity, and desynchronization.
Beyond trajectory closure, the intricate patterns of these curves across stimulation conditions indicate the existence of multiple stable or metastable regimes, suggesting that external forcing can drive transitions between distinct attractor-like states in cortical dynamics.
Discussion
Through this analysis, we have explored how perturbations shape network responses across the parameter space. Our findings suggest that cortical networks encode these effects through the geometric structure of their dynamical trajectories, revealing patterns of entrainment and stability under electric field modulation. This framework deepens our understanding of coupled neural oscillators and how they can be controlled, which has important implications for neuromodulation strategies in clinical contexts.




Acknowledgements
Funded by INFRASLOW PID2023-152918OB-I00, funded by MICIU/AEI/10.13039/501100011033/FEDER. Co-funded by the European Union (ERC, NEMESIS, project number 101071900) and the Departament de Recerca i Universitats de la Generalitat de Catalunya (AGAUR 2021-SGR-01165), supported by FEDER.

References
[1] M.V. Sanchez-Vives. Current Opinion in Physiology, vol. 15, 2020, pp. 217–223.
[2] M. Torao-Angosto et al. Frontiers in Systems Neuroscience, vol. 15, 2021.
[3] M.V. Sanchez-Vives and M. Mattia. Archives Italiennes de Biologie, vol. 158, no. 1, 2020, pp. 59–65. doi:10.12871/000398292020112.
[4] J.M. Krueger et al. Sleep Medicine Reviews, vol. 28, 2016, pp. 46–54.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P053: Modular structure-function coupling supports high-order interactions in the human brain
Sunday July 6, 2025 17:20 - 19:20 CEST
P053 Modular structure-function coupling supports high-order interactions in the human brain

Jesus M Cortes1,2,3, Borja Camino-Pontes1,4, Antonio Jimenez-Marin1,4, Iñigo Tellaetxe-Elorriaga1,4, Izaro Fernandez-Iriondo1,4, Asier Erramuzpe1,2, Ibai Diez1,2,5, Paolo Bonifazi1,2, Marilyn Gatica6,7, Fernando Rosas8,9,10,11, Daniele Marinazzo12, Sebastiano Stramaglia13,14
*Email: jesus.m.cortes@gmail.com
1Computational Neuroimaging Lab, BioBizkaia Health Research Institute, Barakaldo, Spain
2IKERBASQUE: The Basque Foundation for Science, Bilbao, Spain
3Department of Cell Biology and Histology, Faculty of Medicine and Nursing, University of the Basque Country, Leioa, Spain
4Biomedical Research Doctorate Program, University of the Basque Country, Leioa, Spain
5Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
6NPLab, Network Science Institute, Northeastern University London, London, United Kingdom.
7Precision Imaging, School of Medicine, University of Nottingham, United Kingdom.
8Department of Informatics, University of Sussex, Brighton, United Kingdom.
9Sussex Centre for Consciousness Science and Sussex AI, University of Sussex, Brighton, United Kingdom.
10Center for Psychedelic Research and Centre for Complexity Science, Department of Brain Sciences, Imperial College London, London, UK.
11Center for Eudaimonia and Human Flourishing, University of Oxford, Oxford, United Kingdom.
12Department of Data Analysis, Ghent University, Ghent, Belgium.
13Università degli Studi di Bari Aldo Moro, Bari, Italy.
14INFN, Sezione di Bari, Italy.
Introduction

The brain exhibits a modular organization across structural (SC) and functional connectivity (FC), spanning multiple scales from microcircuits to large-scale networks. While SC and FC share similarities, FC fluctuates over shorter time scales. Structure-function coupling (SFC) examines statistical dependencies between SC and FC [1], often at the link-wise level. However, modular coupling offers a multi-scale approach to understanding SC-FC interactions [2-3]. This study integrates functional MRI and diffusion-weighted imaging to investigate modular SFC and the role of high-order interactions (HOI) in functional organization.




Methods
We analyzed SC and FC from multimodal neuroimaging data, using graph-based modular decomposition to assess brain network structure. To quantify HOI, we computed O-information [4], assessing redundancy and synergy among brain regions. HOI gradients were also derived to explore the organization of these interactions [5]. We then examined the coupling between modular SC and both redundancy and synergy, identifying statistical associations that reveal how structural networks relate to functional integration and segregation.
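Under a Gaussian assumption, O-information reduces to log-determinants of covariance submatrices, which makes the computation easy to sketch (synthetic data below; a common driver induces redundancy, so Omega should come out positive):

```python
# Gaussian O-information: Omega = (n-2) H(X) + sum_i [H(X_i) - H(X_-i)].
import numpy as np

def gaussian_entropy(cov):
    k = cov.shape[0] if cov.ndim == 2 else 1
    logdet = np.linalg.slogdet(np.atleast_2d(cov))[1]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

def o_information(X):
    # X: (samples, variables). Positive Omega: redundancy-dominated;
    # negative Omega: synergy-dominated.
    n = X.shape[1]
    cov = np.cov(X.T)
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += gaussian_entropy(cov[i, i]) \
               - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

rng = np.random.default_rng(5)
shared = rng.standard_normal((1000, 1))
X = shared + 0.5 * rng.standard_normal((1000, 4))  # common driver
print("O-information:", o_information(X))           # expected > 0 here
```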


Results & Discussion
Our findings indicate that SC is linked to both redundant and synergistic functional interactions at the modular level. SC showed both positive and negative correlations with redundancy, suggesting that stronger structural connections within a module can either amplify or reduce functional redundancy. In contrast, synergy consistently exhibited a positive correlation with SC, indicating that increased SC density promotes synergistic interactions. These results refine our understanding of structure-function relationships, highlighting how SC modulates HOI in the brain’s modular architecture.





Acknowledgements
JMC acknowledges financial support from Ikerbasque: The Basque Foundation for Science, and from Spanish Ministry of Science (PID2023-148012OB-I00), Spanish Ministry of Health (PI22/01118), Basque Ministry of Health (2023111002 & 2022111031).
References
[1] https://doi.org/10.1038/s41583-024-00846-6
[2] https://doi.org/10.1038/srep10532
[3] https://doi.org/10.1002/hbm.24312
[4] https://doi.org/10.1103/PhysRevE.100.032305
[5] https://doi.org/10.1103/PhysRevResearch.5.013025
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P054: Integrating Arbor and TVB for multi-scale modeling: a novel co-simulation framework applied to seizure generation and propagation
Sunday July 6, 2025 17:20 - 19:20 CEST
P054 Integrating Arbor and TVB for multi-scale modeling: a novel co-simulation framework applied to seizure generation and propagation

Thorsten Hater*1, Juliette Courson2, Han Lu1, Sandra Diaz Pier1, Thanos Manos2


1Jülich Supercomputing Centre, Forschungszentrum Jülich
2ETIS Lab, ENSEA, CNRS, UMR8051, CY Cergy-Paris University, Cergy, France
3Department of Computer Science, University of Warwick, Coventry, UK
*Email: t.hater@fz-juelich.de
Introduction
Computational neuroscience has traditionally focused on isolated scales, limiting our understanding of brain function across multiple levels. Microscopic models capture biophysical details of neurons, while macroscopic models describe large-scale network dynamics. However, integrating these levels into a unified framework remains a significant challenge.

Methods
We present a novel co-simulation framework integrating Arbor and The Virtual Brain (TVB). Arbor, a next-generation neural simulator, enables biophysically detailed simulations of single neurons and networks [1], while TVB models whole-brain dynamics based on anatomical connectivity [2]. Our framework employs an MPI intercommunicator for real-time bidirectional interaction, converting discrete spikes from Arbor into continuous activity in TVB, and vice versa. This approach allows TVB nodes to be replaced with detailed neuron populations, enabling multi-scale modeling of brain dynamics.
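The coupling transformation itself can be sketched generically; the code below is our illustration of spike-rate conversion, not the actual Arbor or TVB API. Spikes are binned and convolved into a continuous rate, and a rate is turned back into spikes by an inhomogeneous Poisson draw.

```python
# Generic spike <-> rate conversion of the kind used to couple a spiking
# simulator to a mean-field simulator. All parameters are placeholders.
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.1, 1000.0                       # ms
n_bins = int(T / dt)

spike_times = np.sort(rng.uniform(0, T, size=800))   # stand-in spike output

def spikes_to_rate(times, tau=10.0):
    counts, _ = np.histogram(times, bins=n_bins, range=(0, T))
    kernel_t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-kernel_t / tau) / tau            # causal exponential
    # counts / bin width (s) -> Hz, then smooth; kernel integrates to ~1.
    return np.convolve(counts / (dt * 1e-3), kernel * dt, mode="full")[:n_bins]

def rate_to_spikes(rate):
    # Inhomogeneous Poisson draw; rate in Hz, one event slot per bin.
    return rng.random(n_bins) < rate * dt * 1e-3

rate = spikes_to_rate(spike_times)                    # -> continuous input
spikes_back = rate_to_spikes(rate)                    # -> spike drive back
print(f"mean rate {rate.mean():.1f} Hz, regenerated spikes: {spikes_back.sum()}")
```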
Results
To demonstrate the framework's capabilities, we conducted a case study on seizure generation at the neuronal level and its whole-brain propagation [3,4]. The Arbor-TVB co-simulation successfully captured the emergence of seizure activity in single neurons and its large-scale spread across the brain network, highlighting the feasibility of integrating micro- and macro-scale dynamics.

Discussion
The Arbor-TVB framework provides a comprehensive computational tool for studying neural disorders and optimizing treatment approaches. By capturing interactions across spatial scales, this method enhances our ability to investigate how local biophysical mechanisms influence global brain states. This multi-scale approach advances research in computational neuroscience, offering new possibilities for therapeutic testing and precision-medicine interventions for neurological disorders.





Acknowledgements
.
References
[1] doi:10.1109/empdp.2019.8671560
[2] doi:10.3389/fninf.2013.00010
[3] doi:10.1523/jneurosci.1091-13.2013
[4] doi:10.1007/s10827-022-00811-1
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P055: Risk sensitivity modulates impulsive choice and risk aversion behaviors
Sunday July 6, 2025 17:20 - 19:20 CEST
P055 Risk sensitivity modulates impulsive choice and risk aversion behaviors

Rhiannon L. Cowan*1, Tyler S. Davis1, Bornali Kundu2, Ben Shofty1, Shervin Rahimpour1, John D. Rolston3, Elliot H. Smith1

1Department of Neurosurgery, University of Utah, Salt Lake City, United States
2Department of Neurosurgery, University of Missouri – Columbia, Missouri, United States
3Department of Neurosurgery, Brigham & Women’s Hospital, Boston, United States

*Email: rhiannon.cowan@utah.edu


Introduction
Impulsivity is a multifaceted psychological construct that may impede optimal decision-making. Impulsive choice (IC) is the tendency to favor smaller, immediate, or more certain rewards over larger, delayed, or uncertain rewards [1]. A strategy such as risk aversion allows individuals to avoid potential loss of reward and gain instant gratification [2,3]. Risk sensitivity (RS) reflects how strongly an individual weights the variance associated with an outcome [4] and may therefore be examined via positive and negative prediction error (PE) signals, a canonical signal of reinforcement learning [5-7]. We posit that more impulsive individuals will exhibit risk-aversive tendencies, observable as suboptimal performance and neural encoding of negative PEs.

Methods
71 neurosurgical epilepsy patients underwent implantation of electrodes into the cortex and deep brain structures. The Balloon Analog Risk Task (BART) is a useful paradigm for measuring impulsivity and reward behaviors by conceptualizing the probability of potential reward [8]. Subject IC level was calculated as the difference between passive- and active-trial inflation time (IT) distributions, yielding more impulsive (MI) and less impulsive (LI) choosers. Outcome-aligned broadband high-frequency activity (HFA; 70-150 Hz) was modeled as a linear combination of temporal difference (TD) variables [5,9,10]. We computed the neural correlates of reward in a trial-by-trial manner from TD models with optimal learning rates [11] and from RSTD models, which account for positive and negative PEs separately [12].
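A minimal sketch of the risk-sensitive TD rule referenced above [12] follows, with toy BART-like outcomes; the learning rates and payoffs are illustrative, not fitted values.

```python
# RSTD: prediction errors are scaled by sign-dependent learning rates, so
# the asymmetry captures how strongly negative vs. positive PEs are weighted.
import numpy as np

rng = np.random.default_rng(7)

def rstd_values(rewards, alpha_pos=0.3, alpha_neg=0.1):
    V, values = 0.0, []
    for r in rewards:
        pe = r - V                       # temporal-difference prediction error
        alpha = alpha_pos if pe >= 0 else alpha_neg
        V += alpha * pe
        values.append(V)
    return np.array(values)

# Toy BART-like outcomes: points when banking, zero when the balloon pops.
rewards = rng.choice([10.0, 0.0], p=[0.7, 0.3], size=200)
V_seek = rstd_values(rewards, alpha_pos=0.3, alpha_neg=0.1)
V_avoid = rstd_values(rewards, alpha_pos=0.1, alpha_neg=0.3)
# Overweighting negative PEs depresses the learned value -> risk aversion.
print(f"risk-seeking V: {V_seek[-1]:.2f}, risk-averse V: {V_avoid[-1]:.2f}")
```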
Results
MI choosers were more accurate than LI choosers (Z=2.04, p=.041), notably for yellow-balloon trials (Z=4.09, p<.0001), yet LI choosers overall gained more points (Z=-3.57, p=.00036), primarily from yellow balloons (Z=-3.58, p=.00036; Fig. 1). We observed no differences in optimal learning rates for reward or risk models between groups (p's>.05), but increased RS was correlated with impulsivity (t(69)=-2.17, p=.03). Across subjects, we observed greater encoding of positive PEs (25.06%) than negative PEs (11.46%; χ2=159, p<.001). However, a group-level dichotomy revealed that MI choosers encoded significantly more negative PEs (MI=11.42%, LI=9.45%; χ2=5, p=.025), whereas LI choosers encoded more positive PEs (χ2=4, p=.039).

Discussion
We utilize a dataset of 7000 intracranial electrodes to model RS and the neural underpinnings of IC. During BART, we found that LI choosers took more risks, leading to more optimal performance, while MI chooser’s accuracy-point tradeoff suggests a risk-aversion strategy, that aligns with the IC definition. Neurally, MI choosers encoded more negative PEs, and LI choosers encoded more positive PEs, which, in tandem with the differential behavioral strategies exhibited, suggests RS drives reward-seeking and may be modulated by impulsivity. This supports previous studies showing associations of positive PEs to risk-seeking behavior and negative PEs to risk-aversion behavior [13]. These findings have implications for decision-making, RS, and IC.





Figure 1. Figure 1. A. BART schematic B&C. IC scatter & histogram using Z-Value difference between active & passive ITs (apZVals) D. Accuracy by color E. Points by color F. LI & MI point distributions G-I. Regression plots: performance vs IC J. Glass brain of electrodes K&L. LI & MI regions encoding negative PE & positive PE M&N. LI & MI risk PE signals by trial category O. Risk sensitivity vs impulsivity.
Acknowledgements
This research was supported by funding: R01MH128187
References
1. https://doi.org/10.1097/01.wnr.0000132920.12990.b9
2. https://doi.org/10.3389/fpsyg.2015.0051
3. https://doi.org/10.1016/j.bbr.2018.10.008
4. https://doi.org/10.1523/JNEUROSCI.5498-10.2012
5. https://doi.org/10.1016/j.neuron.2006.06.024
6. https://doi.org/10.1038/s41586-019-1924-6
7. https://doi.org/10.31887/DCNS.2016.18.1/wschultz
8. https://doi.org/10.1037//1076-898X.8.2.75
9. https://doi.org/10.1523/JNEUROSCI.2041-09.2009
10. https://doi.org/10.1523/JNEUROSCI.2770-10.2010
11. https://doi.org/10.1109/TNN.1998.712192
12. https://doi.org/10.1023/A:1017940631555
13. https://doi.org/10.1371/journal.pcbi.1009213

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P056: A numerical simulation of neural fields on curved geometries
Sunday July 6, 2025 17:20 - 19:20 CEST
P056 A numerical simulation of neural fields on curved geometries

Neekar Mohammed, David J. Chappell, Jonathan J. Crofts*
Department of Physics and Mathematics, Nottingham Trent University, Nottingham, UK

*Email: jonathan.crofts@ntu.ac.uk


Introduction
Brainwaves are crucial for information processing, storage, and sharing [1]. While a plethora of computational studies exist, the mechanisms behind their propagation remain unclear. Current models often simplify the cortex as a flat surface, ignoring its complex geometry [2]. In this study, we incorporate realistic brain geometry and connectivity into simulations to investigate how brain morphology influences wave propagation. Our goal is to leverage this approach to elucidate the relationship between increasing mammalian brain convolution and brain evolution, and their consequent impact on cognition.

Methods
To achieve efficient modelling of large-scale cortical structures, we have extended isogeometric analysis (IGA) [3], a powerful tool for physics-based engineering simulations, to the complex nonlinear integro-differential equation models found in neural field models (NFMs). IGA utilises non-uniform rational B-splines (NURBS), the standard for geometry representation in computer-aided design, to approximate solutions. Specifically, we will employ isogeometric collocation (IGA-C) methods, leveraging the high accuracy of NURBS with the computational efficiency of collocation. While IGA-C has proven effective for linear integral equations in mechanics and acoustics, its application to nonlinear NFMs represents a significant advancement.
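As a much-reduced illustration of the collocation idea (a 1D ring with trapezoidal quadrature rather than NURBS-based IGA-C on a curved 2D surface; all parameters are our assumptions):

```python
# Neural field on a ring: du/dt = -u + \int w(|x-y|) f(u(y)) dy, with the
# integral evaluated at collocation points by trapezoidal quadrature.
import numpy as np

n = 256
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = 2 * np.pi / n

def w(d):
    # Mexican-hat connectivity in geodesic distance on the ring.
    return 2.0 * np.exp(-(d / 0.5) ** 2) - 1.0 * np.exp(-(d / 1.0) ** 2)

d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)          # periodic (geodesic) distance
W = w(d) * h                              # quadrature weights folded in

def f(u):
    return 1.0 / (1.0 + np.exp(-10.0 * (u - 0.3)))  # sigmoidal rate

u = 0.5 * np.exp(-((theta - np.pi) / 0.3) ** 2)     # localized initial bump
dt = 0.05
for _ in range(400):
    u += dt * (-u + W @ f(u))             # explicit Euler time stepping
print("bump amplitude:", u.max())         # persists if the bump is stable
```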
Results
To enable more realistic brain simulations, we have developed a novel IGA-C method that directly utilises point cloud data and bypasses triangular mesh generation, allowing for the solution of partial integro-differential equation models of neural activity on complex cortical-like domains. Here, we demonstrate the method's capabilities by studying both localised and traveling wave activity patterns in a two-dimensional neural field model on a torus [4]. The model offers a significant computational advantage over standard mesh-dependent methods and, more importantly, provides a crucial framework for future research into the role of cortical geometry in shaping neural activity patterns via its ability to incorporate complex geometries.
Discussion
This work presents a novel numerical procedure for integrating neural field models on arbitrary two-dimensional surfaces, enabling the study of physiologically realistic systems. This includes, for example, accurate cortical geometries and connectivity functions that capture regional heterogeneity. Future research will focus on elucidating the influence of curvature on the nucleation and propagation of travelling wave solutions on cortical geometries derived from imaging studies.




Acknowledgements
NM, DJC and JJC were supported through the Leverhulme Trust research project grant RPG-2024-114
References
1.https://doi.org/10.1038/nrn.2018.20
2.https://doi.org/10.1007/s00422-005-0574-y
3.https://doi.org/10.1016/j.cma.2004.10.008
4.https://doi.org/10.1007/s10827-018-0697-5
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P057: Biologically Interpretable Machine Learning Approaches for Analyzing Neural Data
Sunday July 6, 2025 17:20 - 19:20 CEST
P057 Biologically Interpretable Machine Learning Approaches for Analyzing Neural Data

Madelyn Esther C. Cruz*1,2, Daniel B. Forger1,2,3

1Department of Mathematics, University of Michigan, Ann Arbor, MI, USA
2Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
3Michigan Center for Interdisciplinary and Applied Mathematics, University of Michigan, Ann Arbor, MI, USA

*Email: mccruz@umich.edu
Introduction

Deep neural networks (DNNs) often achieve impressive classification performance, but they operate as "black boxes,” making them challenging to interpret [1]. They may also struggle to capture the dynamics of time-series data, such as electroencephalograms (EEGs), because of their indirect handling of temporal information. To address these challenges, we explore the use of Biological Neural Networks (BNNs), machine learning models inspired by the brain’s physiology, on neuronal data. By leveraging biophysical neuron models, BNNs offer better interpretability by closely modeling neural dynamics, providing insights into how biological systems generate complex behavior.
Methods
This study applies backpropagation to networks of biophysically accurate mathematical neuron models to develop a BNN model. Specifically, we define a BNN architecture using modified versions of the Hodgkin–Huxley model [2] and integrate this within traditional neural network algorithms. These BNNs are then used to classify both EEG and non-EEG signals, generate EEG signals to predict brain states, and analyze EEG neurophysiology through model-derived parameters. We also compare the performance of our BNN architecture to those of traditional neural networks.
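The training loop is easiest to see on a simplified differentiable neuron: the sketch below unrolls a leaky-integrator rate unit in time and trains its synaptic weights by backpropagation, standing in for the modified Hodgkin-Huxley units of the actual model; the task and all sizes are toy assumptions.

```python
# Backpropagation through an unrolled neuron ODE (simplified stand-in).
import torch
import torch.nn as nn

torch.manual_seed(0)

class RateBNN(nn.Module):
    def __init__(self, n_in, n_hidden, n_out, dt=0.1, tau=1.0):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.w_out = nn.Linear(n_hidden, n_out)
        self.dt, self.tau = dt, tau

    def forward(self, x):                  # x: (batch, time, n_in)
        v = torch.zeros(x.shape[0], self.w_rec.in_features)
        for t in range(x.shape[1]):
            # Euler step of tau dv/dt = -v + W_rec r + W_in x(t)
            r = torch.tanh(v)
            v = v + self.dt / self.tau * (-v + self.w_rec(r) + self.w_in(x[:, t]))
        return self.w_out(torch.tanh(v))

# Toy task: report which of two input channels carries a sustained drive.
model = RateBNN(2, 32, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):
    y = torch.randint(0, 2, (64,))
    x = 0.1 * torch.randn(64, 50, 2)
    x[torch.arange(64), :, y] += 0.5       # drive the labeled channel
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                        # gradients flow through time
    opt.step()
print("final loss:", loss.item())
```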
Results
Our BNNs demonstrate strong performance in classifying handwritten digits from the MNIST Digits Dataset, learning faster than traditional neural networks. The same BNN architecture also excels on time-series neuronal datasets, effectively distinguishing EEG recordings and power spectral densities associated with alertness vs. fatigue, varying consciousness levels, and different workloads. Additionally, we trained our BNNs to exhibit different frequencies observed in EEG recordings and found that the variability of synaptic weights and applied currents increased with the target frequency range.
Discussion
Analyzing gradients from backpropagation in BNNs reveals similarities between their learning mechanisms and Hebbian learning in the brain, in terms of how synaptic weights change the loss function and how changing the weights at specific time intervals impacts learning. In particular, synaptic weight updates occur only when presynaptic or postsynaptic neurons fire [3]. This results in fewer parameter changes during training compared to DNNs while still capturing temporal dynamics, leading to improved learning efficiency and interpretability. Overall, applying backpropagation to accurate ordinary differential equation models enhances neuronal data classification and interpretability while providing insights into brain learning mechanisms.



Acknowledgements
We acknowledge the following funding: ARO MURI W911NF-22-1-0223 to DBF.
References
1. http://doi.org/10.1038/nature14539
2. https://doi.org/10.1007/s00422-008-0263-8
3. https://doi.org/10.1016/j.neucom.2014.11.022


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P058: Homeostatic memory engrams in mesoscopic structural connectomes
Sunday July 6, 2025 17:20 - 19:20 CEST
P058 Homeostatic memory engrams in mesoscopic structural connectomes

Fabian Czappa*1, Marvin Kaster1, Marcus Kaiser2, Markus Butz-Ostendorf1,3, Felix Wolf1

(1) Laboratory for Parallel Programming, Department of Computer Science, Technical University of Darmstadt, Hochschulstraße 10, Darmstadt, 64285, Hesse, Germany


(2) Translational Neuroimaging, Faculty of Medicine & Health Sciences,
University of Nottingham, NG7 2RD, Nottingham, United Kingdom

(3) Translational Medicine and Clinical Pharmacology, Boehringer Ingelheim Pharma GmbH & Co. KG, Birkendorfer Straße 65, 88397 Biberach/Riss, Baden-Wuerttemberg, Germany


*Email: fabian.czappa@tu-darmstadt.de
Introduction

Memory engrams are defined as physical traces of memory [1]. However, their actual representation in the brain is still unknown. To shed light on the underlying mechanisms, we simulate the formation of memories in healthy human subjects based on connectomes extracted from their DT-MRI brain scans. To prepare the networks for learning, we first bring them into a state of homeostatic equilibrium, leaving their topology largely intact [2]. Once this homeostatization is complete, we perform a memory experiment by stimulating groups of neurons and observing memory formation as the network changes its structure to maintain equilibrium [3]. After our "thought experiment", we can precisely locate the memory engram in the connectome.
Methods
We use the Model of Structural Plasticity (MSP) [4], which grows and retracts synaptic elements based on a homeostatic rule. When a neuron searches for a partner, it chooses one based on the number of free synaptic elements and a distance-dependent probability kernel. We augment the original kernel at longer distances, giving preference to the vicinity of established synapses. After homeostatizing the structural connectome with the augmented MSP, we select a group of concept cells (CC) from the middle temporal lobe and two groups of neurons C1 and C2 scattered outside this region. We then perform a Hebbian learning experiment, associating CC with C1. We perform our experiments using data from n=7 healthy human subjects [5].
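A minimal sketch of the partner-selection step, assuming a Gaussian distance kernel; the augmentation is caricatured here as an extra bonus for already-connected partners, and all constants are illustrative rather than the values used in the study.

import numpy as np

rng = np.random.default_rng(0)
n = 100
pos = rng.uniform(0, 10, size=(n, 3))         # neuron positions (arbitrary units)
free = rng.integers(1, 4, size=n)             # free synaptic elements per neuron
syn = np.zeros((n, n))                        # current synapse counts

def pick_partner(i, sigma=2.0, bonus=0.5):
    d = np.linalg.norm(pos - pos[i], axis=1)
    k = np.exp(-(d / sigma) ** 2)             # distance-dependent kernel (MSP-style)
    k *= 1.0 + bonus * (syn[i] > 0)           # crude stand-in for the vicinity bonus
    p = free * k
    p[i] = 0.0                                # no autapses
    return rng.choice(n, p=p / p.sum())

j = pick_partner(0)
syn[0, j] += 1                                # grow one new synapse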
Results
Homeostatizing the connectome brings the node-degree distribution from a power-law to a normal distribution, yet we keep many distinguishing features of the network. The (geometric) axon-length histogram, the small-worldness, and the assortativity, among others, are comparable between the scanned connectome and the homeostatized one. Furthermore, we see that we form a memory engram after picking neurons for CC, C1, and C2 and stimulating CC and C1 together. Testing with n=7 high-resolution connectomes, we see that the memory engram is located in specific brain areas such as the inferior parietal lobule (7 times) and the superior temporal lobe (7 times), but only sometimes in the fusiform gyrus (4 times); see Figure 1 for details.
Discussion
For the first time, it is now possible to conduct brain simulations based on individual brain scans without parameter fitting. Using MSP-generated avatar connectomes of healthy subjects that were topologically similar to the original tractograms, our method ensured the functioning of model neurons in a physiological regime, which was the necessary precondition for the learning experiments. The proposed approach is the starting point of various testable and personalized brain simulations, from designing novel stimulation protocols for transcranial stimulations (TMS, tDCS) to innovative AD models exploring the causal relationship between homeostatic imbalance, network decay, and cognitive decline.




Figure 1. Caption: We evaluate our model on n=7 high-resolution structural connectomes of healthy adults. Shown here is the number of connectomes in which US created an engram within the C1/C2 group within the area. Our criterion is that the firing frequency of the readout neuron is larger than three times the standard deviation of its usual firing frequency.
Acknowledgements
The authors thank the German Federal Ministry of Education and Research and the Hessian Ministry of Science and Research, Art and Culture for supporting this work as part of the NHR funding. Moreover, the authors acknowledge the computing time provided to them on the HPC Lichtenberg II at TU Darmstadt, funded by the German Federal Ministry of Education and Research and the State of Hesse.


References
[1] https://doi.org/10.1016/s0361-9230(99)00182-3
[2] https://doi.org/10.1016/j.neuroimage.2009.10.003
[3] https://doi.org/10.3389/fninf.2024.1323203
[4] https://doi.org/10.1371/journal.pcbi.1003259
[5] https://doi.org/10.1002/hbm.25464


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P059: Modeling Cholinergic Heterogeneity in Whole-Brain Cortical Dynamics: Bridging Local Circuits to Global State Transitions
Sunday July 6, 2025 17:20 - 19:20 CEST
P059 Modeling Cholinergic Heterogeneity in Whole-Brain Cortical Dynamics: Bridging Local Circuits to Global State Transitions

Leonardo Dalla Porta*1, Jan Fousek2, Alain Destexhe3, Maria V. Sanchez-Vives1,4

1Institute of Biomedical Research August Pi i Sunyer (IDIBAPS), Barcelona, Spain
2Central European Institute of Technology (CEITEC), Masaryk University, Brno, Czech Republic
3Institute of Neuroscience (NeuroPSI), Paris-Saclay University, Paris, France
4Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain


*Email: dallaporta@recerca.clinic.cat

Introduction
The wake-sleep cycle consists of fundamentally distinct brain states, including slow-wave sleep (SWS) and wakefulness. During SWS, the cerebral cortex exhibits large, low-frequency fluctuations that propagate as traveling waves [1]. In contrast, wakefulness is characterized by the suppression of low-frequency activity and the emergence of asynchronous, irregular dynamics. While neurotransmitters such as acetylcholine (ACh) are known to regulate the wake-sleep cycle, the mechanisms by which local neuronal interactions give rise to large-scale brain activity patterns are still an open question [2].
Methods
Here, we integrated local circuit properties [2] with global brain dynamics in a whole-brain model [3], constrained by human tractography and cholinergic gene expression. Using a mean-field model, cortical regions incorporated intrinsic excitatory and inhibitory neuronal properties. Connectivity among different brain regions was determined by structural tractography from the human connectome. Cholinergic heterogeneity was introduced using the Allen Human Brain Atlas [4], which quantifies transcriptional activity for over 20,000 genes. M1 and M2 muscarinic receptors, which are targets of ACh, were incorporated by adjusting local node properties, thus creating a detailed virtual brain landscape.
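The sketch below illustrates, with placeholder numbers only, how a regional receptor-expression map can heterogeneously scale the gain of a Wong-Wang-style mean-field node; the coupling matrix, expression values, and simplified update rule are assumptions, not the fitted model.

import numpy as np

rng = np.random.default_rng(1)
n = 68                                         # cortical regions
W = rng.random((n, n)); np.fill_diagonal(W, 0) # placeholder for tractography-based coupling
expr = rng.random(n)                           # placeholder M1/M2 expression per region
expr = (expr - expr.mean()) / expr.std()       # z-score across regions

gain = 0.27 * (1 + 0.1 * expr)                 # cholinergic modulation of local gain (assumption)
S, G, dt = 0.1 * np.ones(n), 0.5, 1e-3
for _ in range(2000):
    x = G * (W @ S) + 0.3                      # recurrent drive plus background input
    r = gain * np.maximum(x, 0.0)              # heterogeneous transfer function
    S += dt * (-S / 0.1 + (1 - S) * r)         # synaptic gating dynamics (simplified)
print(S[:5])                                   # per-region steady activity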
Results
Our model successfully replicated spontaneous slow oscillation patterns and their wave propagation properties, as well as awake-like dynamics. Heterogeneity influenced cortical properties, modulating excitability, synchrony, and the relationship between functional and structural connectivity. Additionally, we quantified global brain complexity in response to stimulation using the Perturbational Complexity Index (PCI) [5] to differentiate brain states and assess the impact of cholinergic heterogeneity on evoked activity. We observed a significant increase in complexity during awake-like states, which depended on the level of heterogeneity.
Discussion
Building on prior insights into cholinergic modulation in local circuits [2], we developed a whole-brain model constrained by muscarinic receptor distributions, bridging intrinsic neuronal properties to large-scale brain activity. Overall, our findings underscore the impact of cholinergic heterogeneity on global brain dynamics and transitions across brain states, shaping the spatiotemporal complexity of neural patterns and functional interactions across cortical areas. Moreover, our approach also offers a pathway to studying the role of various neuromodulators involved in brain state regulation.



Acknowledgements
EU H2020 No. 945539 (Human Brain Project SGA3); INFRASLOW PID2023-152918OB-I00 funded by MICIU / AEI / 10.13039/501100011033/FEDER, UE; ERC SyG grant NEMESIS 101071900
References
1. https://doi.org/10.1523/jneurosci.1318-04.2004
2. https://doi.org/10.1371/journal.pcbi.1011246
3. https://doi.org/10.3389/fncom.2022.1058957
4. https://doi.org/10.1038/nature11405
5. https://doi.org/10.1126/scitranslmed.3006294
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P060: Neural Dynamics and Non-Linear Integration: Computational Insights for Next-Generation Visual Prosthetics
Sunday July 6, 2025 17:20 - 19:20 CEST
P060 Neural Dynamics and Non-Linear Integration: Computational Insights for Next-Generation Visual Prosthetics

Tanguy Damart*1, Jan Antolík1

1Faculty of Mathematics and Physics, Charles University, Prague, The Czech Republic

*Email: tanguy.damart@protonmail.com
Introduction

Eliciting percepts akin to natural vision using brain-computer interfaces is the holy grail of vision prosthetics. However, progress has been slowed by our lack of understanding of how external perturbations, such as electrical stimulation via multi-electrode arrays (MEAs), might perturb the recurrent cortical dynamics and engage the inherent visual representations embedded in the cortical circuitry. Furthermore, investigating these questions directly remains difficult, as we rarely have the opportunity to probe the human cortex in vivo. Given this limitation, and thanks to the rapid increase in computing capabilities, modeling and simulation tools have naturally come to complement experimental studies.
Methods
We present here a model of intracortical microstimulation (ICMS) applied to a model of columnar primary visual cortex (V1) [1]. The V1 model, built from point neuron models, contains functional retinotopy and orientation maps which are both essential for studying the interaction between external drives such as ICMS and structured spontaneous dynamics. The ICMS is modeled through a phenomenological representation of a MEA that, when activated, causes ectopic spikes in the surrounding cells. The model reproduces two key features of ICMS: sparse and distributed recruitment of neurons, and ectopic spike induction in activated neurons.
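One way to caricature such a phenomenological electrode in a few lines (the amplitude-dependent spread and recruitment probability below are assumptions, not the paper's calibrated values):

import numpy as np

rng = np.random.default_rng(2)
cells = rng.uniform(-500, 500, size=(5000, 2))     # cell positions around the tip (um)
electrode = np.array([0.0, 0.0])

def icms_pulse(amp_ua=10.0):
    d = np.linalg.norm(cells - electrode, axis=1)
    sigma = 50.0 * np.sqrt(amp_ua)                 # spread grows with amplitude (assumption)
    p = 0.3 * np.exp(-(d / sigma) ** 2)            # sparse, distance-dependent recruitment
    return np.flatnonzero(rng.random(len(cells)) < p)   # indices of cells firing an ectopic spike

print(len(icms_pulse(10.0)), "cells recruited by one pulse")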
Results
We demonstrate that our model reproduces the stereotypical dynamics in V1 seen as a response to ICMS: a transient excitation followed by a lasting inhibition. Comparing the population activity induced by ICMS to the one induced as a response to drifting gratings, we show that ICMS targeting specific orientation columns moderately biases the population activity toward a representation of this orientation. Activating multiple electrodes leads to a slight increase in that orientation bias and produces non-linear activation that could not be predicted by simply adding single-electrode effects. Finally, training a decoder model on responses of the model to natural images, we are also able to show what activity induced by ICMS looks like.
Discussion
Current visual prosthetics rely on phosphene-based encoding through intracortical microstimulation, but this approach underutilizes the complex dynamics of the visual cortex. By investigating how ICMS-induced activity in V1 relates to natural visual activity, we show that current ICMS methods are unlikely to produce anything other than phosphenes and that the non-linear spatio-temporal integrative properties of V1 could be leveraged to enhance visual prosthetic outcomes beyond the resolution limitations of current multi-electrode arrays. The computational framework we developed also enables systematic exploration of stimulation parameters without invasive procedures, such as the development of closed-loop stimulation protocols.



Acknowledgements
The publication was supported by ERDF-Project Brain dynamics, No. CZ.02.01.01/00/22_008/0004643.
References
1. https://doi.org/10.1371/journal.pcbi.1012342
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P061: A Multi-Scale Virtual Mouse Brain for Investigating Cerebellar-Related Ataxic Alterations
Sunday July 6, 2025 17:20 - 19:20 CEST
P061 A Multi-Scale Virtual Mouse Brain for Investigating Cerebellar-Related Ataxic Alterations


Marialaura De Grazia1∗, Elen Bergamo1, Dimitri Rodarie1, Alberto A. Vergani1, Egidio D’Angelo1,2, Claudia Casellato1

1Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
2Digital Neuroscience Center IRCCS Mondino Foundation, Pavia, Italy
∗Email: marialaura.degrazia01@universitadipavia.it

Introduction
Ataxias are neurodegenerative disorders commonly associated with cerebellar dysfunction, often resulting from impaired Purkinje cell (PC) function and progressive loss. In this project, we employed a spiking neural network (SNN) of a mouse olivocerebellar microcircuit, in which we incorporated key PC ataxia-related alterations. We investigated the effect of reduced dendritic arborization, cell shrinkage, and cell loss. These modifications lead to abnormal dynamics within the deep cerebellar nuclei (DCN), which project to the cerebral cortex. Our aim is to create a multiscale framework that integrates cerebellar SNNs with a virtual mouse brain model to investigate the effects of ataxic alterations on whole-brain dynamics (Fig.1A).

Methods
We built a virtual mouse brain network [2] using Allen Mouse Connectome [3] to link neural mass models (a Wong-Wang two-population model per node) on The Virtual Brain (TVB) platform (Fig.1B). Network parameters were tuned employing resting-state fMRI of 20 mice [4]. We are testing a TVB co-simulation framework [5] by replacing each cerebellar node with a cerebellar SNN. For cerebellar network reconstruction and simulation, we used the Brain Scaffold Builder (BSB) [1], which integrates the NEST simulator (Fig.1C). After validating healthy dynamics, we introduced ataxia-related alterations (reduced PC dendritic complexity, shrinkage, and density) and tested various stimulation protocols (e.g. Poisson inputs to mossy fibers from 4 to 100 Hz).


Results
The results indicate that as PC density, dendritic complexity index (DCI) and size decrease, the DCN become increasingly disinhibited due to reduced inhibitory input from PCs. The mildest network dysfunction occurs with DCI reduction alone, while more pronounced changes emerge when PCs also shrink. However, the most substantial disruptions in cerebellar dynamics arise with progressive PC density reduction (Fig.1D). Additionally, TVB global coupling and Wong-Wang model parameters were optimized for each resting-state network to maximize the match between experimental and simulated functional connectivity matrices. TVB simulations are in progress.


Discussion
Next steps will consist of further investigation of the dynamics of the ataxic cerebellar SNN, with a particular focus on exploring the electrophysiological changes within the PC model. Moreover, we are testing a TVB-NEST co-simulation framework and tuning the proxy nodes, the interface nodes between the two simulators, which enable the bidirectional conversion between spike-based and rate-coded information. This multiscale model will enhance our ability to predict and analyze alterations in large-scale brain activity and functional networks under ataxic conditions. Furthermore, it may serve as a computational tool for evaluating neuromodulation protocols (e.g. Transcranial Magnetic Stimulation) for treating cerebellar ataxias.
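As a hint of what such a proxy node must do, the sketch below converts a mean-field rate into a spike train and back over one exchange window; the homogeneous-Poisson stand-in and window length are illustrative assumptions, not the tuned interface.

import numpy as np

rng = np.random.default_rng(3)

def rate_to_spikes(rate_hz, t0, t1):
    # Poisson stand-in for the rate-to-spike direction
    n = rng.poisson(rate_hz * (t1 - t0))
    return np.sort(rng.uniform(t0, t1, n))

def spikes_to_rate(spike_times, t0, t1, n_neurons):
    # population rate fed back to the neural-mass side
    in_win = (spike_times >= t0) & (spike_times < t1)
    return in_win.sum() / (n_neurons * (t1 - t0))

spikes = rate_to_spikes(12.0, 0.0, 0.1)            # one 100-ms window at 12 Hz
print(spikes_to_rate(spikes, 0.0, 0.1, n_neurons=1))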





Figure 1. Figure 1: A. Multiscale framework for cerebellar SNN-neural mass interaction. B. TVB integrates Mouse Connectome with Wong-Wang models. C. Cerebellar network built by BSB maps SNN placement and connectivity. D. Simulating ataxia: reduced PC DCI affects granule cells to PC (via pf: parallel fibers) connectivity, PC loss impacts PC-DCNp connectivity. Testing: mossy fibers input vs. DCNp firing rate.
Acknowledgements

PRIN project 20228B2HN5 “cerebellar NEuromodulation in ATaxia: digital cerebellar twin to predict the MOVEment rescue (NEAT-MOVE)” (CUP master: F53D23005950006310)
References

1. https://doi.org/10.1038/s42003-022-04213-y
2. https://doi.org/10.1523/ENEURO.0111-17.2017
3. https://doi.org/10.1038/nature13186
4. https://doi.org/10.1038/s41467-021-26131-z
5. https://doi.org/10.1016/j.neuroimage.2022.118973


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P062: A complete computational model to explore human sound localization
Sunday July 6, 2025 17:20 - 19:20 CEST
P062 A complete computational model to explore human sound localization

Francesco De Santis*1, Paolo Marzolo1, Alessandra Pedrocchi1, Alberto Antonietti1
1Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
*Email: francesco.desantis@polimi.it
Introduction

The ability of animals to localize sounds in space is one of the most studied aspects of hearing. Sound source position is derived from interaural time difference (ITD), interaural level difference (ILD), and spectral cues. Despite decades of auditory neuroscience research, critical questions remain about the neural processes supporting human sound localization. Their understanding is particularly acute for cochlear implant users, whose devices often fail to provide precise spatial perception. Our aim is to address these questions through the implementation of a comprehensive spiking neural network.

Methods
The model (depicted in Fig. 1) is composed of a peripheral section, from the sound to the spiking output of the cochlea, and a neural section, from the auditory nerve fibers to the superior olivary complex nuclei, developed using the Brian2Hears [1] and NEST [2] neural simulators, respectively. The main inputs to the network are sounds used in in-vivo experiments in mammals, such as pure tones at different frequencies, clicks, and white noise. To evaluate how source position impacted the overall model activity, we provided stimuli of 1 s duration from different spatial positions in the frontal azimuth plane, analyzing the corresponding spike distribution and overall firing rate of all the in-silico populations involved. Special attention was given to the activity of the lateral and medial superior olives (LSO and MSO), two nuclei of the superior olivary complex considered to be the main players in the processing of ILDs and ITDs.
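For orientation, the binaural cues themselves can be generated as below; the Woodworth spherical-head formula is standard, while the head radius and the linear ILD slope are illustrative assumptions unrelated to the model's cochlear front end.

import numpy as np

def itd_s(azimuth_deg, head_radius_m=0.0875, c=343.0):
    th = np.radians(azimuth_deg)
    return head_radius_m / c * (th + np.sin(th))   # Woodworth ITD approximation

def binaural_tone(freq_hz, azimuth_deg, dur_s=1.0, fs=44100):
    t = np.arange(int(dur_s * fs)) / fs
    itd = itd_s(azimuth_deg)
    ild_db = 0.1 * azimuth_deg                     # crude linear ILD (assumption)
    left = np.sin(2 * np.pi * freq_hz * (t + itd / 2))
    right = 10 ** (ild_db / 20) * np.sin(2 * np.pi * freq_hz * (t - itd / 2))
    return left, right

print(itd_s(45.0) * 1e6, "us ITD at 45 degrees azimuth")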
Results
The wide range of our model offered the possibility of validating it at various sites, comparing in-silico activity with different results obtained experimentally in-vivo or in-vitro. First, all neural populations showed phase-locked spiking activity, with a refinement for higher-level populations that is fundamental for correct ITD processing [3]. The analysis of the overall population firing rate of the LSO and MSO also showed physiological plausibility, with respectively an ipsilateral-increasing and a contralateral-increasing sigmoid-like behavior in response to shifting azimuth location [3,4]. Finally, the reproduction of specific experimental setups focused on MSO processing of ITDs showed coherent results for the effect of inhibitory input blockage [5] and of input delay manipulation on the overall MSO activity [6].
Discussion
The implemented computational model addresses some of the theories concerning the processing of sound and the computation of its location at the brainstem level in humans. We believe that our model could be a promising validation base for studying the effect of cochlear implant-generated artificial inputs for sound localization, shedding light on the different response of the involved auditory neurons with respect to a real sound stimulation.




Figure 1. End-to-end spiking neural network
Acknowledgements
The work of AA, AP, and FDS in this research is supported by EBRAINS-Italy (European BrainReseArchINfrastructureS-Italy), granted by the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union –NextGenerationEU(Project IR0000011, CUP B51E22000150006, EBRAINS-Italy).
References

1. https://doi.org/10.3389/fninf.2011.00009
2. https://doi.org/10.4249/scholarpedia.1430
3. https://doi.org/10.1002/cphy.c180036
4. https://doi.org/10.1152/physrev.00026.2009
5. https://doi.org/10.1038/ncomms4790
6. https://doi.org/10.1523/JNEUROSCI.1660-08.2008


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P063: Dichotomous Dynamics of Hippocampal and Lateral Septum Oscillations: State-Dependent Topology and Directional Causality During Sleep and Wakefulness
Sunday July 6, 2025 17:20 - 19:20 CEST
P063 Dichotomous Dynamics of Hippocampal and Lateral Septum Oscillations: State-Dependent Topology and Directional Causality During Sleep and Wakefulness

Amir Khani1, Nima Dehghani1,2*

1 N3HUB Initiative, Massachusetts Institute of Technology, Cambridge, U.S.A.
2McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, U.S.A.


*Email: nima.dehghani@mit.edu

Introduction

Sharp-wave ripples (SWRs) in the hippocampus (HPC) and high-frequency oscillations (HFOs) in the lateral septum (LS) play critical roles in memory consolidation and information routing to subcortical areas. However, the precise spatiotemporal dynamics and causal relationships between these oscillations remain poorly understood. Using multiple analytical approaches, we explored the coordination of HPC-SWR and LS-HFO oscillations during Non-Rapid Eye Movement (NREM) sleep and wakefulness, focusing on their topological features, causal relationships, and dimensional properties.
Methods
We analyzed publicly available LFP recordings from hippocampal subfields and the lateral septum in freely behaving rats [1]. To identify oscillations, we detected ripples following the methods described in [1]. To assess temporal coordination, we employed conditional probability analysis to quantify ripple co-occurrence between regions. To characterize oscillation structure, we applied Topological Data Analysis (TDA) using time-delay embedding (dimension = 3, delay = 2). To determine directional influences, we implemented Convergent Cross Mapping (CCM) for causality assessment [2]. To evaluate the dimensionality of neural activity, we utilized Principal Component Analysis (PCA) across individual channels, regions, and brain states [3].
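The embedding step is small enough to state exactly; a minimal sketch with the parameters quoted above (dimension = 3, delay = 2) on a synthetic trace:

import numpy as np

def delay_embed(x, dim=3, delay=2):
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

lfp = np.sin(np.linspace(0, 20 * np.pi, 1000))     # placeholder ripple-band trace
cloud = delay_embed(lfp)                           # point cloud passed to persistent homology
print(cloud.shape)                                 # (996, 3)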
Results
HPC ripples consistently preceded LS ripples, with the conditional probability of LS ripples given HPC-SWR, P(LS|HPC), higher than the probability of HPC-SWR given LS ripples, P(HPC|LS), especially during NREM sleep (Fig.1E). TDA revealed distinct topological structures: LS HFOs showed state-dependent complexity differences between sleep and awake, while HPC ripples maintained similar features across states (Fig.1D). Bidirectional causality analysis showed LS-HFOs influenced HPC-SWRs more than the reverse across both states, with a stronger relationship during NREM sleep (Fig.1C). Dimensionality analysis, examining SWR events across epochs/channels and applying PCA, highlighted the variability and complexity of SWRs in HPC compared to more uniform LS HFOs (Fig.1A,F).
Discussion
Our findings reveal a complex, bidirectional relationship between HPC and LS during ripple events, with stronger coupling during NREM sleep. The higher intrinsic dimensionality of HPC activity during SWRs reflects its role in complex memory processes, while the lower-dimensional LS activity suggests a streamlined relay function [1]. These results align with prior evidence showing LS neuron activation by hippocampal SWRs [1] and highlight state-dependent coordination between HPC and LS. State-dependent coordination changes suggest that during NREM sleep, the coordination supports memory consolidation, while during wakefulness, it facilitates spatial navigation and behavior.



Figure 1. (A) PCA dimensionality of HPC/LS ripples during NREM sleep (left) and wakefulness (right). (B) Raw LFP traces of HPC-SWR (top) and LS-HFO (bottom). (C) Bidirectional CCM analysis: NREM (top) and wakefulness (bottom). (D) Topological features during NREM: H1 count (left) and Shannon entropy (right). (E) Ripple co-occurrence probability: NREM (left) and wakefulness (right). (F) Channel-wise PCA dimensionality.
Acknowledgements
N.D. is supported by NIH Grant R24MH117295. The authors wish to thank NIH for its sponsorship of the DANDI archive (DANDI: Distributed Archives for Neurophysiology Data Integration), which provided the open-access data used in this study.
References
[1] Tingley, D., & Buzsaki, G. (2020). Routing of hippocampal ripples to subcortical structures via the lateral septum. Neuron, 105(1), 138-149.e5.
[2] Sugihara, G., et al. (2012). Detecting causality in complex ecosystems. Science, 338(6106), 496-500.
[3] Dehghani, N., et al. (2010). Magnetoencephalography demonstrates multiple asynchronous generators during human sleep spindles. Journal of Neurophysiology, 104(1), 179-188.
[4] Tingley, D., & Buzsaki, G. (2018). Transformation of a spatial map across the hippocampal-lateral septal circuit. Neuron, 98(6), 1229-1242.e5.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P064: Anatomically and Functionally Constrained Bio-Inspired Recurrent Neural Networks Outperform Traditional RNN Models
Sunday July 6, 2025 17:20 - 19:20 CEST
P064 Anatomically and Functionally Constrained Bio-Inspired Recurrent Neural Networks Outperform Traditional RNN Models

Mo Shakiba1,2, Rana Rokni1,2, Mohammad Mohammadi1,2, Nima Dehghani2,3*

1Neuromatch Academy, Neuromatch, Inc.
2N3HUB Initiative, Massachusetts Institute of Technology, Cambridge, U.S.A.
3McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, U.S.A.

*Email: nima.dehghani@mit.edu


Introduction
Understanding how neural circuits drive sensory processing and decision-making is a central neuroscience challenge. Traditional Recurrent Neural Networks (RNNs) capture temporal dynamics but fail to represent the structured synaptic architecture seen in biological systems [1]. Recent spatially embedded RNNs (seRNNs) add spatial constraints for better biological relevance [2], yet they do not fully exploit detailed anatomical and functional data to enhance task performance and neural alignment.
Methods
We introduce a bio-inspired RNN that integrates detailed anatomical connectivity and two-photon calcium imaging data from the MICrONS dataset (https://www.microns-explorer.org/cortical-mm3), which offers nanometer-scale reconstructions and functional recordings from mouse visual cortex. Using neuronal positions, synaptic connections, functional correlations, and Spike Time Tiling Coefficients (STTC) [3]—a robust metric that eliminates firing rate biases—we constrain our model with biologically informed weight initialization, communicability calculations, and a regularizer that penalizes long-distance connections while boosting communicability to promote realistic network properties.
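A minimal sketch of one plausible reading of that regularizer (wiring cost from actual distances, communicability from the matrix exponential); the combination rule and constants are illustrative assumptions, not the trained objective.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
W = rng.normal(0, 0.1, (50, 50))               # recurrent weights
D = np.abs(rng.normal(0, 1, (50, 50)))         # placeholder pairwise neuron distances

def bio_regularizer(W, D, gamma=0.1):
    A = np.abs(W)
    comm = expm(A)                             # network communicability
    wiring_cost = (A * D).sum()                # long, strong links are expensive
    return wiring_cost - gamma * np.log(comm.sum())  # reward communicability (assumption)

print(bio_regularizer(W, D))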
Results

Trained on three distinct decision-making tasks—a 1-step inference task, a Go/No-Go task, and a perceptual decision-making task— our bio-inspired RNN demonstrated significant performance improvements over baseline models across 30 simulations per model (900 total simulations across all model variants). Variants combined W* (biologically initialized weights) or W (standard initialization), D* (actual neuron distances) or D (random distances), and C (communicability calculation). Specifically, the anatomically and functionally constrained model (W*D*C) achieved the highest average accuracy across all tasks: 89.4% on the 1-step inference task, 96.9% on the Go/No-Go task, and 86.7% on the perceptual decision-making task.
Moreover, the biologically constrained model demonstrated superior performance across other evaluation metrics, including validation accuracy, training and validation loss, and network properties such as modularity and small-worldness. Specifically, the average modularity of the W*D*C and WD*C models was highest across all tasks, with values of 0.583 (1-Step Inference), 0.558 (Go/No-Go), and 0.594 (Perceptual Decision Making). Similarly, the average small-worldness was the highest in two tasks, with values of 3.513 (Go/No-Go) and 4.325 (Perceptual Decision Making) (Fig. 1c-e).
Discussion

Our findings demonstrate that incorporating biological constraints into RNNs significantly boosts both task performance and the emergence of realistic network properties, mirroring actual neural architectures. Future work should extend this approach to visual processing tasks, explore other architectures such as LSTMs and GNNs, and integrate additional biological constraints.




Figure 1. (a) Weight initialization matrix (top left) from MICrONS data, combining functional correlation (bottom left) and STTC (bottom right) with log-normal noise. Anatomical distance matrix (top right) shows neuron positioning. (b) Top 10% models (900 simulations): Effects of λ and W on accuracy, loss, modularity, and small-worldness. (c-e) Model variants task performance shows WDC outperforming RNNs.
Acknowledgements
N.D. is supported by NIH Grant R24MH117295. The authors thank NIH for sponsoring DANDI archive, which provided the open-access data used in this study. M.S., R.R., and M.M. thank Neuromatch Academy for its support and resources for young scholars and this study. They also thank the DataJoint team for their help and guidance.
References
1. Perich, M. G., & Rajan, K. (2020). Rethinking brain-wide interactions through multi-region ‘network of networks’ models. Current Opinion in Neurobiology, 65, 146-151.
2. Achterberg, J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence, 5(12), 1369-1381.
3. Cutts, C. S., & Eglen, S. J. (2014). Detecting Pairwise Correlations in Spike Trains: An Objective Comparison of Methods and Application to the Study of Retinal Waves. The Journal of Neuroscience, 34(43), 14288-14303.




Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P065: Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics
Sunday July 6, 2025 17:20 - 19:20 CEST
P065 Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics

Michael Deistler*1,2, Kyra L. Kadhim2,3, Matthijs Pals1,2, Jonas Beck2,3, Ziwei Huang2,3, Manuel Gloeckler1,2, Janne K. Lappalainen1,2, Cornelius Schröder1,2, Philipp Berens2,3, Pedro J. Gonçalves1,2,4,5, Jakob H. Macke*1,2,6

1Machine Learning in Science, University of Tübingen, Germany
2Tübingen AI Center, Tübingen, Germany
3Hertie Institute for AI in Brain Health, University of Tübingen, Tübingen, Germany
4VIB-Neuroelectronics Research Flanders (NERF)
5imec, Belgium
6Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany

*Email: michael.deistler@uni-tuebingen.de, jakob.macke@uni-tuebingen.de
Introduction

Biophysical neuron models provide mechanistic insight into empirically observed phenomena. However, optimizing the parameters of biophysical simulations is notoriously difficult, preventing the fitting of these models to physiologically meaningful tasks or datasets. Indeed, current fitting methods for biophysical models are typically limited to a few dozen parameters [1]. At the same time, backpropagation of error (backprop) has enabled deep neural networks to scale to millions of parameters and large datasets. Unfortunately, no current toolbox for biophysical simulation can perform backprop [2], limiting any study of whether backprop could also be used to construct and train large-scale biophysical neuron models.


Methods
We built a new simulation toolbox, Jaxley, which overcomes previous limitations in constructing and fitting biophysical models. Jaxley implements numerical solvers required for biophysical simulations in the machine learning library JAX. Thanks to this, Jaxley can simulate biophysical neuron models and it can compute the gradient of such simulations with backpropagation of error (Fig. 1a). This makes it possible to optimize thousands of parameters of biophysical models with gradient descent. In addition, Jaxley can parallelize simulations on GPUs, which speeds up simulation by at least two orders of magnitude (Fig. 1b).
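The essential mechanism, stripped of Jaxley's actual API, fits in a few lines of JAX: unroll a membrane-equation solver with lax.scan and differentiate a loss through it (the passive membrane, target voltage, and constants below are toy assumptions, not Jaxley code).

import jax
import jax.numpy as jnp

def simulate(g_leak, i_ext=0.5, dt=0.025, n_steps=4000, c_m=1.0, e_leak=-70.0):
    def step(v, _):
        v = v + dt / c_m * (-g_leak * (v - e_leak) + i_ext)
        return v, v
    _, trace = jax.lax.scan(step, jnp.asarray(-70.0), None, length=n_steps)
    return trace

def loss(g_leak):
    v = simulate(g_leak)
    return (v[-1] + 60.0) ** 2                 # match a target steady-state voltage

print(jax.grad(loss)(0.1))                     # backprop through the whole solver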


Results
We applied Jaxley to a range of datasets and models. First, we applied Jaxley to a series of single neuron tasks and found that it outperforms gradient-free optimization methods (Fig. 1c). Next, we built a simplified biophysical model of the retina (Fig. 1d). We optimized synaptic and channel conductances on dendritic calcium recordings and found that the trained model exhibits compartmentalized responses (matching experimental recordings [3]). Third, we built a recurrent neural network model with biophysically-detailed neurons and trained this network on working memory tasks. Finally, we trained a network of morphologically detailed neurons to solve MNIST with 100k biophysical parameters (Fig. 1e).


Discussion
Optimizing parameters of biophysically detailed models is challenging, and previous (gradient-free) methods have been limited to a few dozen parameters. We developed Jaxley, which overcomes these limitations. Jaxley implements numerical solvers required for biophysical simulations [4], it can easily parallelize simulations on GPUs, and it can perform backprop. Together, these features make it possible to construct and optimize large neural systems with thousands of parameters. We designed Jaxley to be easy to use and we provide extensive documentation, which will make it easy for the community to adopt the toolbox. Jaxley bridges systems neuroscience and biophysics and will enable new insights and opportunities for multiscale neuroscience.





Figure 1. (a) Jaxley can compute gradients with backprop. (b) Jaxley is as accurate as the NEURON simulator and can achieve speed-ups with GPU parallelization. (c) Jaxley can identify single-neuron models, sometimes much more efficiently than a genetic algorithm. (d) Biophysical model of mouse retina predicts dendritic calcium response. (e) Biophysical network solves MNIST computer vision task.
Acknowledgements
This work was supported by the German Research Foundation (DFG) through Germany’s Excellence Strategy (EXC 2064 – PN 390727645) and the CRC 1233 "Robust Vision", the German Federal Ministry of Education and Research (FKZ: 01IS18039A), the 'Certification and Foundations of Safe Machine Learning Systems in Healthcare' project, and the European Union (ref. 101089288, ref. 101039115).

References
[1] Van Geit, W., De Schutter, E., & Achard, P. (2008). Automated neuron model optimization techniques: a review. Biological Cybernetics, 99, 241-251.
[2] Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179-1209.
[3] Ran, Y., Huang, Z., Baden, T., Schubert, T., Baayen, H., Berens, P., ... & Euler, T. (2020). Type-specific dendritic integration in mouse retinal ganglion cells. Nature Communications, 11(1), 2101.
[4] Hines, M. (1984). Efficient computation of branched nerve equations. International Journal of Bio-Medical Computing, 15(1), 69-76.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P066: In-silico study on the dynamical and topological relationships between three-dimensional cultures and relative slices
Sunday July 6, 2025 17:20 - 19:20 CEST
P066 In-silico study on the dynamical and topological relationships between three-dimensional cultures and relative slices

Leonardo Della Mea1*, Angelo Piga2,3, Jordi Soriano3

1DIBRIS, University of Genoa, Genoa, Italy
2Department of Economics and Management, University of Pisa, Pisa, Italy
3Institute of Complex Systems, University of Barcelona, Barcelona, Spain

*Email: leonardo.dellamea@edu.unige.it

Introduction

In-vitro three-dimensional (3D) neuronal cultures represent a pioneering technological advancement for exploring brain function and dysfunction in a more realistic environment [1], [2]. Recording the activity of the entire network remains a challenge, since researchers resort to methods developed for two-dimensional cultures, such as multi-electrode arrays or calcium fluorescence imaging. The question quickly arises of whether the read-out layer, through which the network is recorded, reliably captures its topological and dynamical properties. In this study we used in-silico modelling of developing 3D neuronal networks to assess the reliability of a single layer of the culture in capturing the dynamical and topological features of the entire parent network.

Methods
The networks were constructed by randomly placing neurons within a rectangular prism. Each cell's dendritic and axonal domains were built upon a main trunk, consisting of concatenated segments, and a group of arborizations, represented by spherical regions. The overlap of different cells' dendritic and axonal arbours results in synaptogenesis. The network is then simulated as a pulse-coupled neuronal network embedding the Izhikevich model [3]. The bottom layer of neurons was extracted to emulate MEA recordings. Its dynamical properties (focused on the features of the network bursts, NB) and topological traits (based on small-worldness, SW [4], and modularity, Q [5]) were compared to those of the entire parent culture.
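A minimal sketch of the pulse-coupled core (regular-spiking Izhikevich parameters; the sparse coupling and noisy drive are placeholders, not the geometric connectivity described above):

import numpy as np

rng = np.random.default_rng(5)
N = 200
a, b, c, d = 0.02, 0.2, -65.0, 8.0             # regular-spiking parameters
v = -65.0 * np.ones(N); u = b * v
W = (rng.random((N, N)) < 0.1) * 6.0           # sparse pulse coupling (placeholder)
n_spikes = 0

for t in range(1000):                          # 1 s at dt = 1 ms
    fired = v >= 30.0
    n_spikes += fired.sum()
    v[fired] = c; u[fired] += d                # reset after a spike
    I = rng.normal(5.0, 2.0, N) + W @ fired    # noisy drive plus presynaptic pulses
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # two half-steps for stability
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)

print("mean firing rate:", n_spikes / N, "Hz")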

Results and discussion
From both a dynamical and a topological perspective, statistically significant differences were observed for all the parameters measured. For the network's slice, the mean NB sizes and dynamical variability are regularly overestimated, whereas the NB duration is underestimated. Due to slicing, a variable fraction of the neurons in the layer is exposed to the propagating front of the burst; thus, the dynamical differences observed in slices may be due to the fact that NB events are very unlikely to systematically engage equal fractions of the sub-network, justifying the higher dynamical variability. In addition, the reduced size of the network makes the slices liable to wrongly capture the mean event sizes and durations; indeed, both measures depend on the network size. Modularity exhibited a monotonic decline in both 3D and slice systems, although it was marginally overestimated in the slice. The 3D network shows a bell-shaped trend of SW values across maturation, peaking in the middle of the developmental phase. In contrast, the slice's values differed, consistently underestimating it. Sampling the edges from a network whose architecture is grounded on a distance-dependent probability of connection results in sub-networks where this feature is exacerbated. Consequently, in slices, communities are more starkly outlined; in turn, Q increases and SW decreases, due to the reduction of shortcuts.








Acknowledgements
The author wishes to thank Prof. Jordi Soriano and Angelo Piga for their kind advice on the experimental procedure and useful discussions. The author declares no use of Artificial Intelligence in this study.
References
[1] https://doi.org/10.1016/j.isci.2020.101434
[2] https://doi.org/10.1002/term.2508
[3] https://doi.org/10.1109/TNN.2003.820440
[4] https://doi.org/10.1016/j.neuroimage.2009.10.003
[5] https://doi.org/10.1103/PhysRevE.70.066111
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P067: Optimization-Based Insights into Network Configurations of the Songbird Premotor Cortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P067 Optimization-Based Insights into Network Configurations of the Songbird Premotor Cortex

Fatima M. Dia*1, Maher Nouiehed2, Arij Daou1

1Department of Biomedical Engineering, American University of Beirut, Beirut, Lebanon
2Department of Industrial Engineering and Management, American University of Beirut, Beirut, Lebanon

*Email: fmd14@mail.aub.edu

Introduction
Neural circuits in the brain maintain a delicate equilibrium between excitation and inhibition, yet how this balance operates remains unclear [1]. Moreover, neural circuits often exhibit sequences of activity that rely on excitation and inhibition, but the contribution of local networks to their generation is not well understood. This study investigates neural sequence generation within the High Vocal Center (HVC) of the adult zebra finch forebrain. The HVC plays a critical role in the execution of temporally precise courtship songs and comprises three neural populations with distinct electrophysiological responses: glutamatergic basal ganglia-projecting (HVCX) and forebrain-projecting (HVCRA) cortical neurons, and GABAergic interneurons (HVCINT) [2]. While the connections between these neuronal classes are known [1,3], how they orchestrate this temporally precise neural sequence remains largely unknown.
Methods
To address this question, we applied optimization techniques and mathematical modeling to describe the relationships among HVCRA, HVCX, and HVCINT neurons and their bursting patterns. Our approach focused on uncovering the underlying cytoarchitecture of the HVC neural network by utilizing biologically realistic constraints. These constraints included the pharmacological nature of synaptic connections, anatomical and intrinsic properties, neuronal population ratios, precise burst timing, and spiking frequency during song motifs[2,4]. The study incorporated both closed and open network configurations to assess their ability to reproduce observed bursting sequences.
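To convey the flavor of the search space (not the actual optimization program), the toy sketch below enumerates directed motifs among the three populations under Dale-type sign constraints; the specific feasibility conditions are illustrative assumptions.

import itertools
import numpy as np

sign = np.array([+1, +1, -1])                  # HVC_RA, HVC_X glutamatergic; HVC_INT GABAergic
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

feasible = 0
for mask in itertools.product([0, 1], repeat=len(pairs)):
    A = np.zeros((3, 3))
    for on, (i, j) in zip(mask, pairs):
        A[i, j] = on * sign[i]                 # Dale's law: sign fixed by the source class
    # toy constraints: interneurons receive excitation and inhibit both projection classes
    if (A[:2, 2] > 0).any() and (A[2, :2] < 0).all():
        feasible += 1
print(feasible, "of", 2 ** len(pairs), "signed motifs pass the toy constraints")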
Results
Our computational framework successfully predicted the minimalistic synaptic connections required to replicate the observed bursting patterns of the HVC network. The model identified specific network topologies that satisfied experimental constraints while maintaining functional output. Additionally, our findings indicated that certain network configurations necessitate additional nodes to form a fully connected network capable of sustaining stable sequential bursting. These predictions align with previous experimental data and provide novel insights into potential connectivity motifs that could underlie the temporal precision of song production.
Discussion
This study bridges experimental data with computational predictions, offering a framework for understanding how local excitatory and inhibitory interactions within HVC generate precise neural sequences. By identifying minimal network configurations, our model provides a hypothesis regarding the synaptic architecture required for sequence generation. Future work should incorporate in vivo validation of the predicted connectivity patterns using electrophysiological and optogenetic approaches. Our findings contribute to a broader understanding of how premotor circuits coordinate motor behaviors and may have implications for studying sequence generation in other brain regions beyond the songbird HVC.



Acknowledgements
This work was supported by the University Research Board (URB) and the Medical Practice Plan (MPP) grants at the American University of Beirut.
References
1. https://doi.org/10.1152/jn.00162.2013
2. https://doi.org/10.1038/nature00974
3. https://doi.org/10.1038/nature09514
4. https://doi.org/10.1152/jn.00952.2006


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P068: Modelling the dynamics of in vitro cultured neural networks by using biophysical neuronal models
Sunday July 6, 2025 17:20 - 19:20 CEST
P068 Modelling the dynamics of in vitro cultured neural networks by using biophysical neuronal models

Marco Fabiani1, Ludovico Iannello3, Fabrizio Tonelli4, Eleonora Crocco4,
Federico Cremisi4, Lucio Calcagnile2, Riccardo Mannella1, Angelo Di Garbo1,2
1Department of Physics, University of Pisa
2Institute of Biophysics - IBF, CNR of Pisa
3Institute of Information Science and Technologies "Alessandro Faedo" - ISTI, CNR of Pisa
4Scuola Normale Superiore - SNS, Pisa
Email: m.fabiani5@studenti.unipi.it, angelo.digarbo@ibf.cnr.it
Introduction
In this contribution we study the dynamical behaviours arising in a biophysically inspired neuronal network of excitatory and inhibitory neurons. The corresponding model was set up using electrophysiological data recorded from cultured neuronal networks. The recordings of the local field potential generated by the neurons were carried out using a multielectrode array (MEA) apparatus [2]. In particular, we investigated the dynamics emerging in a cultured population of (MAPK/ERK inhibition and BMP inhibition, MiBi) neurons of the entorhinal cortex [1].
Methods
The MEA recordings were obtained from a grid of 64 x 64 electrodes covering an area of 3.8 mm x 3.8 mm of the neuronal culture. The corresponding local field potentials were acquired with a sampling frequency of 20 kHz. The spiking times of the cultured neurons were obtained by applying specific algorithms to the local field potential signals. Then, an artificial biophysically inspired neural network was built by employing Hodgkin-Huxley-type models for the single neurons. Finally, the parameters describing the computational neural network were chosen by requiring that the simulation results were qualitatively in agreement with the corresponding experimental data.
Results
In agreement with the results described in [2, 3], we found that the MiBi cultured neuronal network is capable of generating bursting activity. Moreover, the analysis of these data shows that the bursting activity is triggered at specific points of the cultured network (centers of activity). In addition, propagation across the neural culture was characterized by center-of-activity trajectories (CAT). Furthermore, these cultures exhibit neuronal avalanches with power-law decay. We have shown that the computational model is capable of reproducing the bursting dynamics observed in the cultured neural network by choosing suitable parameter values in an all-to-all coupled network. By setting up a more detailed network model, obtained by modifying the connectivity matrix and the density of neurons, we showed that such a neuronal network is capable of reproducing many of the experimental data and, qualitatively, their specific features.
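For reference, a center-of-activity trajectory can be computed as the spike-count-weighted centroid of the electrode grid in sliding windows; the sketch below uses the 64 x 64 geometry mentioned above with synthetic counts in place of real recordings.

import numpy as np

rng = np.random.default_rng(6)
n_side, n_win = 64, 100
counts = rng.poisson(0.2, size=(n_win, n_side, n_side))   # spikes per electrode per window
ys, xs = np.mgrid[0:n_side, 0:n_side]

cat = np.array([[(c * xs).sum() / max(c.sum(), 1),
                 (c * ys).sum() / max(c.sum(), 1)] for c in counts])
print(cat.shape)                               # (100, 2): trajectory in electrode coordinates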
Discussion
Although the mathematical model has some intrinsic limitations, the corresponding numerical results helped us shed light on some basic mechanisms responsible for the generation of bursting in the network, and this could be used to infer that such processes should also be present in the MiBi cultured neuronal network. It would be interesting to check whether improving the quality of the neuronal model is sufficient to reproduce other experimental features that are not captured by the adopted model. This includes, for instance, using more realistic single-neuron models, synaptic connectivity, and synaptic plasticity.



Acknowledgements
The research was in part supported by the Matteo Caleo Foundation, by Scuola Normale Superiore (FC), by the PRIN AICult grant #2022M95RC7 from the Italian Ministry of University and Research (MUR) (FC), and by the Tuscany Health Ecosystem - THE grant from MUR (FC, GA, ADG).
References
[1] Tonelli F. et al. (2025). Dual inhibition of MAPK/ERK and BMP signaling induces entorhinal-like identity in mouse ESC-derived pallial progenitors. Stem Cell Reports. doi: 10.1016/j.stemcr.2024.12.002
[2] Iannello L. et al. (2024). Analysis of MEA recordings in cultured neural networks. pp. 1-5. doi: 10.1109/COMPENG60905.2024.10741515
[3] Iannello L. et al. (2025). Criticality in neural cultures: Insights into memory and connectivity in entorhinal-hippocampal networks. Chaos, Solitons and Fractals, 194, 116184.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P069: Statistics of spiking neural networks based on counting processes
Sunday July 6, 2025 17:20 - 19:20 CEST
P069 Statistics of spiking neural networks based on counting processes

Anne-Marie Greggs1*, Alexander Dimitrov1

1Department of Mathematics and Statistics, Washington State University, Vancouver, WA

*Email: anne-marie.greggs@wsu.edu
Introduction

Estimating neuronal network activity as point processes is challenging due to the singular nature of events and high signal dimensionality[1]. This project analyzes spiking neural networks (SNNs) using counting process statistics, which are equivalent integral representations of point processes[2]. A small SNN of Leaky Integrate-and-Fire (LIF) neurons is simulated, and spiking events are counted as a vector counting process N(t). The Poisson counting process has known dynamic statistics over time: both mean(t) and variance(t) are proportional to time (= r_i*t for each independent source with rate r_i). By standardizing the data, mean dynamics and heteroscedasticity can be removed, allowing comparison to a baseline Poisson counting process.
Methods
Using Brian2[3], an SNN with LIF neurons and Poisson inputs is simulated. Independent and correlated Poisson processes are modeled, generating spike trains for analysis. The counting process, a stochastic process producing the number of events within a time period, is analyzed using the vector counting process. Mean and covariance of spiking events are estimated for both SNN and Poisson processes, facilitating comparison of statistical properties after standardization by subtracting the mean and scaling by the standard deviation to account for temporal dependencies.
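A minimal sketch of the centering and standardization applied to sampled counting processes (the rate, bin width, and duration are placeholders):

import numpy as np

rng = np.random.default_rng(7)
rate, dt, T = 20.0, 0.001, 5.0                 # Hz, bin width (s), duration (s)
t = np.arange(1, int(T / dt) + 1) * dt
spikes = rng.random((100, len(t))) < rate * dt # 100 Bernoulli-thinned sample paths
N = spikes.cumsum(axis=1)                      # vector counting process N(t)
z = (N - rate * t) / np.sqrt(rate * t)         # standardized: ~ mean 0, variance 1
print(z[:, -1].mean().round(3), z[:, -1].var().round(3))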
Results
Fig. 1 shows the simulated spiking dynamics of two neurons over time. The standardized counts indicate variability aligned with Poisson statistical properties. While the mean counts show a consistent trend, the variance reflects the stochastic nature of neural activity. The centered plot's standard deviation equals the square root of rate times time. The standardized plot's standard deviation equals 1, serving as a comparison template; plots start at 200 milliseconds to avoid biases when rescaling the initially small counts.
The covariance matrix quantifies relationships between neurons at given times and activity levels. Comparing the SNN to the modeled Poisson processes reveals notable differences in covariance structures, with the SNN demonstrating greater inter-unit correlation.

Discussion
This study establishes a framework for analyzing the statistical properties of neural network activity, enabling researchers to gain insights into the dynamics of spiking networks. Understanding these aspects is crucial for examining how neural networks respond to stimuli and adapt to changing environments.
The findings highlight the importance of inter-unit dependencies in neural data, with the proposed estimators effectively capturing these dynamics. Future research should broaden parameter exploration and apply the estimators to complex models and real-world data, including comparisons between inhomogeneous Poisson processes with time-varying rates, temporal dependencies, and non-Poisson processes of SNNs.




Figure 1. Two 3D plots compare the activity of Neuron 1 and Neuron 2 over time. The left plot shows counts centered based on the theoretical expectations, while the right plot shows counts standardized by both the theoretical expectations and theoretical standard deviations, with multiple lines representing each of 100 samples.
Acknowledgements
N/A
References
1. Brown, E. N., Kass, R. E., & Mitra, P. P. (2004). Multiple neural spike train data analysis: state-of-the-art and future challenges. Nature Neuroscience, 7(5), 456-461. doi: 10.1038/nn1254
2. Cox, D. R., & Isham, V. (1980). Point processes. Chapman and Hall.
3. Stimberg, M., Brette, R., & Goodman, D. F. M. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8:e47314. doi: 10.7554/eLife.47314



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P070: Neurospheroids as building blocks for 3D brain-like structures
Sunday July 6, 2025 17:20 - 19:20 CEST
P070 Neurospheroids as building blocks for 3D brain-like structures

Ilaria Donati della Lunga*,1, Francesca Callegari1, Fabio Poggio1, Letizia Cerutti1, 2, Mattia Pesce2, Giovanni Lobello1, Alessandro Simi3, Mariateresa Tedesco1, Paolo Massobrio1,4, Martina Brofiga1,2,5
 
1Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genova, Genova, Italy
2Neurofacility, Istituto Italiano di Tecnologia (IIT), Genova, Italy
3 Central RNA Laboratory, Istituto Italiano di Tecnologia (IIT), Genova, Italy
4National Institute for Nuclear Physics (INFN), Genova, Italy
5ScreenNeuroPharm, Sanremo, Italy


* Email: ilaria.donatidellalunga@edu.unige.it
Introduction. Conventional in vitro neuronal networks have provided insights into brain function and disease mechanisms [1] but often neglect important in vivo properties such as three-dimensionality (3D) and heterogeneity, i.e., the coexistence of different neuronal types. To address these limitations, we aimed to develop 3D cortical (C) and hippocampal (H) cell aggregates known as neurospheroids (NSs): their coupling allowed us to generate homogeneous (CC, HH) or heterogeneous (CH) assembloids (ASs). This study aims to prove that these models enhance the reproducibility, viability, and biological significance of neuronal cultures, while exhibiting in vivo-like electrophysiological patterns characterized by brain waves.



Methods. We employed a multi-modal approach: structural and mechanical properties were assessed via immunostaining and atomic force microscopy, while functional activity was evaluated by calcium imaging and electrophysiological recordings with Micro-Electrode Arrays. To detect neuronal activity and synchronization, fluorescence traces were analyzed using Schmitt trigger and SPIKE-synchronization algorithms. From the electrophysiological activity, we identified the typical brain waves observed in vivo. Spectral analysis was performed using the wavelet transform to assess oscillatory patterns, while the functional E/I ratio metric [2] was used to validate the physiological relevance of the models.
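As an aside on the event-detection step, a Schmitt trigger marks an onset when the trace crosses a high threshold and an offset when it falls back below a low one; the thresholds and synthetic trace below are placeholders.

import numpy as np

rng = np.random.default_rng(8)
f = rng.normal(0, 0.1, 5000)
f[1000:1100] += 1.0                            # one synthetic calcium transient

def schmitt_events(x, hi=0.5, lo=0.2):
    events, active, start = [], False, 0
    for i, v in enumerate(x):
        if not active and v > hi:
            active, start = True, i            # onset: high threshold crossed
        elif active and v < lo:
            events.append((start, i))          # offset: fell below low threshold
            active = False
    return events

print(schmitt_events(f))                       # roughly [(1000, 1100)]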

Results. Morphological analysis revealed faster geometric expansion and higher cell proliferation in H than in C. Stiffness values matched in vivo conditions [3], and immunostaining confirmed physiological composition and organization [4]. We developed homogeneous (CC, HH) and heterogeneous (CH) ASs by coupling pairs of NSs, ensuring physical connections while preserving structural segregation. Calcium dynamics revealed functional intra- and inter-module communication in ASs. Moreover, spectral analysis showed the generation of typical brain waves in our 3D models, with CH displaying different dynamics at DIV 18, marking a transition phase. The excitation/inhibition ratio matched physiological conditions [2].

Discussion.Our findings showed that the developed NSs and ASs enhance physiological relevance by replicating key aspects of brain organization and function. The integration of cortical and hippocampal regions within ASs enables the study of modular and heterogeneous network dynamics. Functional analyses confirm the emergence of complex oscillatory patterns, reflectingin vivo-like network behavior. The ability of ASs to maintain structural segregation while ensuring functional connection makes them a valuable tool for investigating fundamental neurobiological mechanisms, modelling neurodegenerative diseases and testing therapeutic interventions.





Acknowledgements
This work was supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS(PE0000006) - A Multiscale integrated approach to the study of nervous system in health and disease (DN. 1553 11.10.2022).
References
1. https://doi.org/10.1152/jn.00575.2016
2. https://doi.org/10.1038/s41598-020-65500-4
3. https://doi.org/10.1007/s11831-019-09352-w
4. https://doi.org/10.1016/j.cmet.2011.08.016
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P071: The Role of Descending Inferior Colliculus Projections to the Cochlear Nucleus in the Hyperactivity Underlying Tinnitus
Sunday July 6, 2025 17:20 - 19:20 CEST
P071 The Role of Descending Inferior Colliculus Projections to the Cochlear Nucleus in the Hyperactivity Underlying Tinnitus

Katherine Doxey*1, Timothy Balmer2, Sharon Crook1



1School of Mathematical and Statistical Sciences, Arizona State University, Tempe, United States
2School of Life Sciences, Arizona State University, Tempe, United States


*Email: kedoxey@asu.edu


Introduction

Tinnitus is the perception of a sound without the presence of auditory stimuli. Over 2.5 million veterans are currently receiving disability benefits for a tinnitus diagnosis [1] and the likelihood of veterans screening positive for posttraumatic stress disorder (PTSD) increases with severity of tinnitus [2]. We focus on tinnitus from high-frequency hearing loss that is associated with exposure to loud noise and is perpetuated by neuronal hyperactivity in the dorsal cochlear nucleus (DCN). In this study, we test the hypothesis that descending projections from the inferior colliculus (IC) cause hyperexcitability of frequencies that are no longer encoded by the bottom-up sensory signals after damage to the cochlea [3].

Methods
We implement a network model of central auditory processing that consists of 200 fusiform cells that receive tonotopic excitatory input from 200 spiral ganglion neurons (SGN) and lateral inhibitory input from 200 interneurons. We implement the descending IC projections with 200 cells that provide excitatory input to the fusiform cells. Auditory input is modeled as a depolarization of SGN neurons and hearing loss is modeled as reduced depolarization of SGN neurons at the highest frequency range. Each cell is an Izhikevich model neuron with regular spiking dynamics [4]. We characterize the dynamics of the network by applying a pure tone stimulus and simulating either normal hearing or hearing loss.
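For readers unfamiliar with the neuron model, the regular-spiking update from [4] that the network builds on can be sketched in a few lines of Python. This is a single-cell sketch with an illustrative tonic drive, not the authors' 600-cell DCN network or their pure-tone stimulus model.

import numpy as np

# Regular-spiking Izhikevich neuron (a, b, c, d from [4]).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.5                                  # ms

def step(v, u, I):
    # Two half-steps for v improve numerical stability (as in [4]).
    for _ in range(2):
        v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                         # spike: reset v, bump recovery u
        return c, u + d, True
    return v, u, False

v, u, spikes = -65.0, b * -65.0, []
for i in range(2000):                     # 1 s of simulated time
    v, u, fired = step(v, u, I=10.0 if i > 400 else 0.0)  # tonic drive (illustrative)
    if fired:
        spikes.append(i * dt)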
Results
Without descending IC projections, we confirm that loss of auditory nerve input at the high frequency range produces aberrant excitation at adjacent frequencies of the tonotopic map, i.e. tinnitus. With descending IC projections, we demonstrate that the signal to noise ratio increases as well as the hyperexcitability of the adjacent frequencies.
Discussion
A significant barrier to the treatment of tinnitus is the lack of knowledge on the source of the hyperexcitability; understanding the interactions between the DCN and IC in the central auditory pathway is essential to the development of physiology-based treatment to target the appropriate circuit elements. Our model shows that the descending IC projections result in hyperexcitability of high frequencies that are not encoded after hearing loss. To better understand these mechanisms, future work will involve extending the DCN model network to include narrowband and wideband inhibitors that contribute to processing pure tone, broadband, and notch noise stimuli.




Acknowledgements
This research is supported by DARPA YFA.
References
1.Annual Benefits Report 2021- Veterans Benefits Administration Reports. https://www.benefits.va.gov/REPORTS/abr/
2.Prewitt, A., Harker, G., Gilbert, T. A., et al. (2021). Mental Health Symptoms Among Veteran VA Users by Tinnitus Severity:A Population-based Survey. Military Medicine, 186(Suppl 1), 167–175.
3.Gerken, G. M. (1996). Central tinnitus and lateral inhibition: An auditory brainstem model. Hearing Research, 97(1), 75–83.
4.Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569–1572.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P072: Partial models for learning in nonstationary environments
Sunday July 6, 2025 17:20 - 19:20 CEST
P072 Partial models for learning in nonstationary environments

Christopher R. Dunne*1 and Dhruva V. Raman1

1Department of Informatics, University of Sussex, Falmer, United Kingdom

*Email: C.Dunne@sussex.ac.uk


Introduction
The computational goal of learning is often presented as optimality on a given task, subject to constraints. However, the notion of optimality makes strong assumptions: by definition, changes to the task structure will render an optimal agent suboptimal. This is problematic in ethological settings where an agent lacks the time or data to accurately model a task's latent states.
We present a novel design principle (Hetlearn) for learning in nonstationary environments, inspired by the Drosophila mushroom body. It makes specific predictions on what should be inferred from the environment, as compared to Bayesian inference. We show that Hetlearn outperforms an optimal Bayesian agent and better matches human and macaque behavioural data.
Methods
We consider the task of learning from reward prediction errors (RPEs) in which an animal updates a valence based on RPEs (Fig. 1 E). Critically, the degree to which the RPE changes the valence is modulated by a learning rate. To set an adaptive learning rate, Hetlearn employs parallel sublearners with heterogeneous fixed assumptions about the environment (varied fixed learning rates). Ensemble predictions employ a weighted vote with weights dependent on recent sublearner performance. This allows rapid adaptation to unpredictable environmental changes without explicitly estimating complex latent variables. We compare Hetlearn against an advanced Bayesian agent [1] and to behavioural data from humans and macaques [2, 3, 4].
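A minimal Python sketch of this scheme, under our reading of the abstract: parallel delta-rule sublearners with fixed learning rates, and an ensemble vote whose weights follow a discounted record of each sublearner's squared prediction error. The parameter values and the exponential weighting are illustrative assumptions, not the authors' implementation.

import numpy as np

rates = np.array([0.01, 0.05, 0.2, 0.8])       # heterogeneous fixed learning rates
values = np.zeros_like(rates)                  # per-sublearner valence estimate
perf = np.zeros_like(rates)                    # discounted squared RPE
weights = np.ones_like(rates) / rates.size     # vote weights
gamma, beta = 0.9, 5.0                         # performance discount, vote sharpness

def hetlearn_step(reward):
    global weights
    prediction = weights @ values              # weighted ensemble vote
    rpe = reward - values                      # per-sublearner reward prediction error
    perf[:] = gamma * perf + (1 - gamma) * rpe**2
    weights = np.exp(-beta * perf)             # favour recently accurate sublearners
    weights /= weights.sum()
    values[:] += rates * rpe                   # delta-rule update at each fixed rate
    return prediction

# Nonstationary environment: the reward mean jumps halfway through.
rng = np.random.default_rng(0)
for t in range(400):
    mean = 1.0 if t < 200 else -1.0
    hetlearn_step(mean + 0.3 * rng.standard_normal())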
Results
Hetlearn outcompetes a Bayesian agent [1] on reward learning in nonstationary environments (Fig. 1 A-D). It is also algorithmically simpler; it builds a partial generative model and does not track complex environmental statistics. Nonetheless, it aligns with behavioural data from humans and macaques as well as with previous models [2, 4] (Fig. 1 F-G). This is notable because qualitatively different models (Bayes optimal vs suboptimal) previously provided the best respective fit to these two datasets [2, 3]. As such, Hetlearn offers a unified learning principle for seemingly disparate strategies. Finally, Hetlearn is robust to model misspecification; its parameters can vary by an order of magnitude without performance decline.
Discussion
Hetlearn outcompetes [1] in part because it exploits a bottleneck in the learning process. An optimal learner needs to infer multiple quantities that impact a single bounded parameter: the learning rate. Conversely, Hetlearn tracks the recent performance of parallel learners with heterogeneous learning rates. In effect, it trades optimal performance in a stationary environment for generalisability across environments. This results in superior performance in unpredictably changing environments or those with limited time or data, which are the precise conditions in which animals outperform artificial neural networks. Crucially, Hetlearn generates new, testable predictions on what should be inferred from the environment in these regimes.



Figure 1. (A) Environments with varying statistics. (B, C) Learning rate tracking by Hetlearn and Bayesian agent [1]. (D) Hetlearn has lower mean squared error (MSE) across environments. (E) Reward prediction error (RPE) task. Bayesian agent explicitly tracks complex latent states (volatility and stochasticity) that Hetlearn tracks only implicitly. (F, G) Hetlearn matches human [2] and macaque [3, 4] data.
Acknowledgements
This research was supported by the Leverhulme Doctoral Scholarship programme be.AI – Biomimetic Embodied Artificial Intelligence at the University of Sussex.
References
[1]https://doi.org/10.1038/s41467-021-26731-9
[2]https://doi.org/10.1038/nn1954
[3]https://doi.org/10.1038/nn.3918
[4]https://doi.org/10.1016/j.neuron.2017.03.044
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P073: Simulations using realistic 3D reconstructions of astrocyte endfeet reveal how cell shape alters diffusion in Alzheimer’s disease
Sunday July 6, 2025 17:20 - 19:20 CEST
P073 Simulations using realistic 3D reconstructions of astrocyte endfeet reveal how cell shape alters diffusion in Alzheimer’s disease

Florian Dupeuble*1, Chris Salmon2,5, Hugues Berry1, Keith Murai2, Kaleem Siddiqi3,4, Alexandra L Schober2, J. Benjamin Kacerovsky2, Tabish A. Syed2,5, Rachel Fagen2, Tatiana Tibuleac2, Amy Zhou2, Audrey Denizot1

1AIStroSight, INRIA, Université Claude Bernard Lyon 1, Villeurbanne, France
2Research Institute of the McGill University Health Centre, McGill University, Montréal, Canada
3School of Computer Science, McGill University, Montréal, Canada
4MILA - Québec AI Institute, Montreal, Canada
5Centre for Intelligent Machines, School of Computer Science, McGill University, Montreal, Canada





*Email: florian.dupeuble@inria.fr
Introduction

Astrocytes are glial cells involved in numerous brain functions, such as blood flow regulation, toxic waste clearance, or nutrient uptake [1]. They display specialized protrusions, called endfeet, that cover the majority of blood vessels and are suspected to mediate neurovascular coupling.
In Alzheimer’s Disease (AD), astrocytes undergo morphological changes [2]. However, whether endfoot morphology is altered and the functional implications of such ultrastructural changes remain poorly understood to date.

Methods
To study the impact of endfoot shape on astrocyte function, we developed a model of diffusion within high-resolution 3D reconstructions of astrocyte endfeet from WT and AD mice, derived from electron microscopy. 3D manifold tetrahedral endfoot meshes were obtained using the Blender and TetWild software. Simulations of calcium diffusion were performed using FEniCS, a finite-element-method Python library.
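A minimal sketch of such a diffusion simulation with the legacy FEniCS interface, assuming a backward-Euler time step and zero-flux boundaries; the mesh file name, diffusion coefficient, time step, and initial condition below are hypothetical placeholders, not the study's values.

from fenics import *  # legacy FEniCS (dolfin) interface

# Backward-Euler diffusion on a tetrahedral mesh with zero-flux boundaries.
# "endfoot.xml" stands in for a Blender/TetWild-reconstructed endfoot mesh.
mesh = Mesh("endfoot.xml")
V = FunctionSpace(mesh, "P", 1)

D = Constant(0.22)       # diffusion coefficient (assumed value)
dt = 0.1                 # time step (assumed value)
u_n = interpolate(Expression("x[0] < 0.5 ? 1.0 : 0.0", degree=1), V)  # initial bolus

u, w = TrialFunction(V), TestFunction(V)
a = u * w * dx + dt * D * dot(grad(u), grad(w)) * dx   # implicit step
L = u_n * w * dx                                       # no BCs -> zero flux

u_sol = Function(V)
for _ in range(100):
    solve(a == L, u_sol)
    u_n.assign(u_sol)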
Results
We observe strong differences between the diffusional properties of AD and WT endfeet. While WT endfeet rapidly display a homogeneous calcium concentration, calcium in AD endfeet appears highly compartmentalized. Simulations accounting for the complex morphology of the endoplasmic reticulum (ER) suggest that it contributes to increased calcium concentration heterogeneity in endfeet, in particular in AD.
Discussion
Our preliminary results suggest that the morphological changes undergone by endfeet in AD impact local diffusion, leading to calcium compartmentalization, which could strongly affect local calcium signaling. Future work will be critical to decipher the functional link between endfoot shape, local calcium signaling, and the neurovascular uncoupling observed in AD [3]. This work provides new insights into the basic mechanisms governing endfoot dysfunction in AD.




Acknowledgements

References
1.https://doi.org/10.1146/annurev-neuro-091922-031205
2.https://doi.org/10.1016/j.coph.2015.09.011
3.https://doi.org/10.1093/brain/awac174

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P074: Hybridizing Machine Reinforcement Learning with Neuromimetic Navigation Systems
Sunday July 6, 2025 17:20 - 19:20 CEST
P074 Hybridizing Machine Reinforcement Learning with Neuromimetic Navigation Systems

Christopher Earl3, Moshe Tannenbaum4, Haroon Anwar1, Hananel Hazan4, Samuel Neymotin1,2
1Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
2Department of Psychiatry, NYU School of Medicine, New York, NY, USA.
3Department of Computer Science, University of Massachusetts, Amherst, MA, USA.
4Allen Discovery Center, Tufts University, Boston, MA, USA.
Introduction

Animal brains are capable of remembering explored locations and complex pathways to efficiently reach goals. Many neuroscience studies have investigated the cellular and circuit basis of spatial representations in the Hippocampal (HPC) and Entorhinal Cortex (EC); however, mechanisms that enable this complex navigation remain a mystery. In computer sciences, Q-Learning (QL) is a Reinforcement Learning (RL) algorithm that facilitates associations between contexts, actions, and long-term consequences. In this study, we develop a bio-inspired neuronal network model which integrates cell types from the mammalian EC and HPC hybridized with QL to simulate how the brain could learn to navigate new environments.
Methods
We used the BindsNET platform [1] to model Grid Cells (GC), Place Cells (PC), and motor control cells to drive agent actions. Our model is a Spiking Neuronal Network (SNN) with leaky integrate-and-fire (LIF) neurons, and organized to mimic GCs and PCs found in the EC and HPC (Fig 1). Reward-Modulated Spike Time Dependent Plasticity (RM-STDP) [2,3] applied to synapses activating motor control cells facilitates learning. The RM-STDP mechanism receives rewards from a Q-Table, helping the agent associate actions with long-term consequences. The agent is tasked with navigating a maze and learning a path to a goal (Fig 2). Feedback is given only at the goal, requiring the agent to associate actions with long-term outcomes to solve the maze.
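The tabular Q-learning component can be sketched as follows. The state and action counts match the 5x5 maze of Fig 2; the learning parameters and the way the TD error is broadcast to RM-STDP synapses are illustrative assumptions rather than the BindsNET implementation.

import numpy as np

# Tabular Q-learning on a 5x5 maze; feedback (r = 1) is given only at the goal.
n_states, n_actions = 25, 4            # 5x5 grid; {up, down, left, right}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def q_update(s, a, r, s_next):
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error
    return td_error                    # hypothetically broadcast as the RM-STDP reward

def act(s, rng):
    # Epsilon-greedy selection; in the hybrid model, actions instead come
    # from the motor-control cells of the SNN.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(Q[s].argmax())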
Results
Trained models successfully and consistently navigated randomly generated mazes. GC populations encoded distinct physical locations into unique neural encodings, enabling the agent to distinguish between them. This lets the agent remember previously visited areas and associate them with actions. Combined with QL, long-term consequences of actions could also be retained, allowing the model to learn long paths to the goal with sparse environmental feedback.

Certain cells in the reservoir population fired only when the agent was in a specific location of the maze, suggesting these cells naturally developed PC-like characteristics. When GCs were re-oriented in a new maze, the PCs would remap, similar to behavior observed in biology [4].
Discussion
We designed an SNN model that mimics the mammalian brain’s spatial representation system, and integrated it with QL to solve a maze task. Our model forms the basis of a functional navigation system by effectively associating actions with long-term consequences. While the QL component is not biologically-plausible, we believe higher order brain areas could provide similar computational capabilities. In future work, we aim to implement QL as a SNN. Results also suggest an explanation for the emergence of PC in the HPC due to upstream GC activity in the EC. Moreover, GC spatial representations are likely generalizable outside of a maze. Future research could utilize our model’s GC-PC architecture to navigate more complex environments.



Figure 1. Fig 1: Diagram of bio-inspired SNN, and its relationship to QL. Bio-inspired SNN generates an action, feedback from the environment is fed into a datastructure called a ‘Q-Table’, and updates from this table modulate RM-STDP synapses in the SNN. Fig 2: Example 5x5 maze environment. Green dot represents start, blue dot the goal, yellow dot the agent position, and red dots the optimal path.
Acknowledgements
Research supported by ARL Cooperative Agreement W911NF-22-2-0139 and ARL/ORAU Fellowships
References
[1] BINDSNET: A machine learning-oriented spiking neural networks library in Python. Front Neuroinform 2018, 12:89
[2] Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front Comput Neurosci 2022, 16:1017284
[3] Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022, 17(5):e0265808
[4] Remapping revisited: how the hippocampus represents different spaces. Nat Rev Neurosci 2024, 25(6):428-448


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P075: Sequential dynamical invariants in winnerless competition neural networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P075 Sequential dynamical invariants in winnerless competition neural networks

Irene Elices*1, Pablo Varona1
1 Grupo de Neurocomputación Biológica, Dept. de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, 28049, Madrid, Spain.

*Email: irene.elices@uam.es
Introduction

Generating neural sequences is fundamental for behavior and cognition, which require robustness of sequential order and flexibility in its time intervals to adapt effectively. Studying cyclic sequences provides key insights into constraints limiting flexibility and shaping sequence intervals. Previously, we identified such constraints as robust cycle-by-cycle relationships between specific time intervals, i.e., dynamical invariants, in bursting central pattern generators [1]. However, their presence in computational models remains largely unexplored. Here, we examine dynamical invariants in a winnerless competition network model that generates chaotic activity while sustaining robust sequences among active neurons.



Methods
We analyzed sequence interval relationships in a Lotka-Volterra neural network that displays chaotic heteroclinic dynamics arising from its asymmetric connectivity [2,3]. Variables in these generalized Lotka-Volterra differential equations represent the instantaneous spike rate of the neurons. For analysis, we selected the most active neuron as a cycle reference, detecting sequential events in other neurons using activation thresholds. Cycle-by-cycle intervals were defined as the time intervals between activation and subsequent deactivation events, including those between distinct neurons. The analysis included variability measures, correlation analysis, and PCA to uncover robust relationships between interval timings.
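For concreteness, a generalized Lotka-Volterra rate network of this kind can be sketched as below; the connectivity and growth values are illustrative, and regimes with heteroclinic chaos require the asymmetric parameter choices discussed in [2,3].

import numpy as np
from scipy.integrate import solve_ivp

# Generalized Lotka-Volterra rates: da_i/dt = a_i * (sigma_i - sum_j rho_ij a_j).
N = 6
rng = np.random.default_rng(1)
rho = 1.0 + 0.6 * rng.random((N, N))       # asymmetric inhibitory couplings
np.fill_diagonal(rho, 1.0)                 # self-limitation
sigma = np.ones(N)                         # stimulus-dependent growth rates

def glv(t, a):
    return a * (sigma - rho @ a)

sol = solve_ivp(glv, (0, 500), 0.1 + 0.01 * rng.random(N), max_step=0.1)
# Threshold crossings of sol.y give the activation/deactivation events whose
# cycle-by-cycle intervals are then analyzed for dynamical invariants.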

Results
Despite the chaotic dynamics, which can be related to exploration tasks in motor and cognitive activity [2-4], we observed robust dynamical invariants between specific time intervals that added to the activation phase locks in active neurons to provide coordination between cells. The dynamical invariants represent constraints to the variability present in the chaotic activity and can underlie an emergent control mechanism. This is the first time that sequential dynamical invariants are reported in heteroclinic dynamics.

Discussion
The presence of dynamical invariants remains largely unexplored in computational models, with only a few studies addressing simplified circuits, such as minimal CPG circuit building blocks [5]. The main challenge in studying dynamical invariants in computational models is the lack of variability in individual model neurons and in network dynamics. However, a winnerless competition network model generates chaotic spatiotemporal activation patterns, thus overcoming the mentioned variability challenge. Our work analyzes for the first time the presence of dynamical invariants among the activation intervals. Results suggest that these robust cycle-by-cycle relationships are part of the sequence coordination mechanisms of the heteroclinic dynamics.




Acknowledgements
Work funded by PID2021-122347NB-I00, PID2024-155923NB-I00, and CPP2023-010818 (MCIN/AEI and ERDF- “A way of making Europe”).
References
[1]https://doi.org/10.1038/s41598-019-44953-2
[2]https://doi.org/10.1063/1.1498155
[3]https://doi.org/10.1103/PhysRevE.71.061909
[4]https://doi.org/10.1007/s11571-023-09987-3

[5]https://doi.org/10.1016/j.neucom.2024.127378
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P076: A NEST-based framework for the parallel simulation of networks of compartmental models with customizable subcellular dynamics
Sunday July 6, 2025 17:20 - 19:20 CEST
P076 A NEST-based framework for the parallel simulation of networks of compartmental models with customizable subcellular dynamics

Leander Ewert1, Christophe Blaszyck2, Jakob Jordan5, Charl Linssen1,3, Pooja Babu1,3, Abigail Morrison1,2, Willem A.M. Wybo4

1Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
2Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
3Simulation and Data Laboratory Neuroscience, Jülich Supercomputer Centre, Institute for Advanced Simulation, Jülich Research Centre, 52425 Jülich, Germany
4Peter Grünberg Institut (PGI-15), Jülich Research Centre, 52425 Jülich, Germany
5Department of Physiology, University of Bern, Bern, Switzerland

*Email: l.ewert@fz-juelich.de

Introduction

The brain is a massively parallel computer. In the human brain, 86 billion neurons convert synaptic inputs into action potential (AP) output. Moreover, even at the subcellular level, computations proceed in a massively parallel fashion. Approximately 7,000 synapses per neuron are supported by complex signaling networks within dendritic compartments. These signaling networks can themselves be understood as nanoscale computers that convert synaptic input, backpropagating APs, and local voltage and concentration signals into weight dynamics that support learning and memory. It is thus natural to use the parallelization and vectorization capabilities of modern supercomputers to simulate the brain in a massively parallel fashion.
Methods
The NEural Simulation Tool (NEST) [1] is the reference tool for the massively parallel simulation of spiking network models, as it has been optimized to efficiently communicate spikes across MPI processes [2]. Moreover, these capabilities introduce little overhead for the user, as the distribution of neurons across MPI processes is taken care of by NEST itself. However, so far NEST has had limited options for simulating subcellular processes as part of the network, essentially forcing users to develop custom C++ code. We have extended the scope of the NESTML modelling language [3] to support multi-compartment models, with dendrites featuring user-specified dynamical processes (Fig 1A-C).
Results
These user-specified dynamics are compiled into efficient NEST models through a C++ code generator, in such a way that the vectorization capabilities of modern CPUs are optimally leveraged. This allows for a deeper level of parallelization, next to the network parallelization across MPI processes, allowing individual CPUs to integrate up to eight compartments in parallel and decreasing single neuron runtimes accordingly. The compartmental engine furthermore leverages the Hines algorithm [4] to achieve stable and efficient integration of the system as a whole. Together, this results in single-neuron speedups compared to the field-standard NEURON simulator [5] of up to a factor of four to five (Fig 1D).
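For intuition on the integration scheme: on an unbranched cable, the implicit voltage step reduces to a tridiagonal solve, for which the Hines algorithm [4] coincides with the classical Thomas algorithm sketched below (branched trees additionally need the Hines node ordering). This pure-Python sketch is illustrative, not the NEST C++ code.

import numpy as np

def thomas(lower, diag, upper, d):
    # O(n) solve of a tridiagonal system A x = d (forward sweep + back substitution).
    n = len(d)
    c, y = np.zeros(n), np.zeros(n)
    c[0], y[0] = upper[0] / diag[0], d[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        y[i] = (d[i] - lower[i] * y[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = y[-1]
    for i in range(n - 2, -1, -1):
        x[i] = y[i] - c[i] * x[i + 1]
    return x

# Diagonally dominant test system, as arises from implicit cable discretization.
n = 5
print(thomas(-np.ones(n), 4 * np.ones(n), -np.ones(n), np.ones(n)))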
Discussion
Thus, we enable the simulation of large-scale networks where individual neurons have user-specified dynamical processes, representing (i) voltage-dependent ion channels, (ii) synaptic receptors that may be subject to a-priori arbitrary plasticity processes, or (iii) slow processes describing molecular signaling or ion concentration dynamics. Conducting such simulations has historically been challenging, since simulators specific to this purpose were lacking. With the present work, we facilitate the creation and efficient distributed simulation of such networks, thus supporting the investigation of the role of dendritic processes in network-level computations involving learning and memory.




Figure 1. Figure 1. (A) NESTML-defined subcellular mechanisms (left) are compiled into an efficient NEST model. User-defined dendritic layouts (middle) are then embedded in NEST network simulations (right). (B) NESTML code defining dendritic calcium dynamics induces BAC firing [6] in a two-compartment model [7] (C). (D) Speedup of NEST compared to NEURON (bottom) for two dendritic layouts (left vs right).
Acknowledgements
The authors gratefully acknowledge funding from the HelmHoltz POF IV, Program 2 Topic 3.

References
[1] 10.4249/scholarpedia.1430
[2] 10.3389/fninf.2014.00078
[3] 10.3389/fninf.2018.00050
[4] 10.1017/CBO9780511541612
[5] 10.1016/0020-7101(84)90008-4
[6] 10.1038/18686
[7] 10.48550/arXiv.2311.0607



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P077: An Intrinsic Dimension Estimator for Neural Manifolds
Sunday July 6, 2025 17:20 - 19:20 CEST
P077 An Intrinsic Dimension Estimator for Neural Manifolds

Jacopo Fadanni*1, Rosalba Pacelli2, Alberto Zucchetta2, Pietro Rotondo3, Michele Allegra1,4

1Physics and Astronomy Department, University of Padova, Padova, Italy
2Istituto Nazionale di Fisica Nucleare, Sezione di Padova, Padova, Italy
3Department of Mathematical, Physical and Computer Sciences, University of Parma, Parma, Italy
4Padova Neuroscience Center, University of Padova, Padova, Italy

*Email: jacopo.fadanni@unipd.it


Introduction
Recent technical breakthroughs have enabled a rapid surge in the number of neurons that can be simultaneously recorded [1,2], calling for the development of robust methods to investigate neural activity at a population level.
In this context, it is becoming increasingly important to characterize the neural activity manifold, the set of configurations visited by the network within the Euclidean space defined by the instantaneous firing rates of all neurons [3]. A key parameter of the manifold geometry is its intrinsic dimension (ID), the number of coordinates needed to describe the manifold. While several studies suggested that the ID may be typically low, contrasting findings have disputed this statement, leading to a wide debate [1,2,3,4].
Methods
In this study we present a variant of the Full Correlation Integral (FCI), an ID estimator that was shown to be particularly robust under undersampling and high dimensionality, improving over the classical Correlation Dimension estimator [5]. Our variant overcomes the limitation of standard FCI in the presence of curvature effects by estimating the true ID locally, as the peak in the distribution of local estimates. Crucially, local estimates are restricted to approximately flat neighborhoods, as determined by a suitable local parameter, which allows us to avoid overestimation. Our procedure yields a robust estimator for the typically challenging situations encountered with neural manifolds.
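As a baseline for comparison, the classical correlation-integral slope that FCI improves on can be computed as follows; this sketch is the Grassberger-Procaccia estimate, not the authors' local FCI variant.

import numpy as np

# Correlation-dimension estimate: the slope of log C(r) vs log r, where C(r)
# is the fraction of point pairs closer than r.
def correlation_dimension(X, r_grid):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = d[np.triu_indices(len(X), k=1)]                  # unique pair distances
    C = np.array([(d < r).mean() for r in r_grid])       # correlation integral
    slope, _ = np.polyfit(np.log(r_grid), np.log(C), 1)
    return slope

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
ring = np.column_stack([np.cos(theta), np.sin(theta)])   # a circle: true ID = 1
print(correlation_dimension(ring, np.geomspace(0.05, 0.5, 10)))   # close to 1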
Results
We tested the reliability of our estimator in two significantly challenging cases. First, we used it to characterize neural manifolds of RNNs performing simple tasks [6], where strong curvature effects generally lead to overestimates. Second, we used it on a benchmark dataset including non-linearly embedded high-dimensional neural data, where all other methods yield underestimates [7]. In Figure 1 we show a comparison between our method and other available methods for the RNN and for the high-dimensional neural data. Linear methods overestimate the ID in the case of curved manifolds, while nonlinear methods underestimate the ID in the case of high-dimensional manifolds. In both situations, our method performed well.

Discussion
Proposing a robust estimator for the ID, our work adds a relevant tool to the open debate about the dimensionality of neural manifolds.
The intrinsic properties of the FCI estimator make it robust to undersampling and high dimensionality, avoiding so-called 'curse of dimensionality' effects. Our local variant makes it robust also for curved manifolds where the ID and the embedding dimension strongly differ. Limitations of our method arise only in extremely non-uniformly sampled manifolds, where the conditions for the applicability of the FCI are unfulfilled [5].
Our method is an important step forward in current research on neural manifolds, and it is thus of interest to the computational neuroscience community at large.





Figure 1. Left: an example of network activity projected onto the first 3 PCs; ID_FCI = 2.1, ID_PA = 7, MLE = 3.6, ID_TwoNN = 7.1. Right: comparison between different ID estimators in the case of high-dimensional manifolds linearly embedded [7]. Our method performs well across all dimensionalities.
Acknowledgements
This work was supported by PRIN grant 2022HSKLK9, CUP C53D23000740006, “Unveiling the role of low dimensional activity manifolds in biological and artificial neural networks”

References
[1] https://doi.org/10.1038/s41586-019-1346-5
[2] https://doi.org/10.1126/science.aav7893
[3] https://doi.org/10.1016/j.conb.2021.08.002
[4] https://doi.org/10.1038/s41593-019-0460-x
[5] https://doi.org/10.1038/s41598-019-53549-9
[6] https://doi.org/10.1038/s41593-018-0310-2
[7] https://doi.org/10.1371/journal.pcbi.1008591



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P078: Single neuron representational drift in CA1 can be explained by aligning stable low dimensional manifolds
Sunday July 6, 2025 17:20 - 19:20 CEST
P078 Single neuron representational drift in CA1 can be explained by aligning stable low dimensional manifolds

Elena Faillace*1, Mary Ann Go1, Juan Álvaro Gallego1, Simon Schultz1

1Centre for Neurotechnology and Department of Bioengineering, Imperial College London, UK

*Email: elena.faillace20@imperial.ac.uk

Hippocampal place cells are believed to form a cognitive map that supports spatial navigation. However, their spatial tuning has been observed to ‘remap’, i.e. the representation drifts over time, even in the same environment [1]. This raises the question of how a robust and consistent experience is maintained despite continual remapping at the single-cell level. Furthermore, it remains unclear whether this drift is coordinated across neurons, and how tuning curve profiles evolve. Here, we propose a population-level approach to identify a stable representation of environments and provide a framework to predict the activity of remapped tuning curves.


We performed two-photon calcium imaging to record the activity of hundreds of neurons in CA1 of head-fixed mice during a running task (Fig. 1a,b) [2]. Mice expressing GCaMP6s were habituated to a circular track for 7-9 days, followed by 3 days of recordings. All environments had the same circular structure but differed in the visual cues along the walls. During imaging, mice were exposed to two familiar environments, one novel environment, and one familiar environment with inverted order of the symbols on the walls. Neurons were longitudinally registered across sessions using CaImAn.


We used linear dimensionality reduction techniques to find session-specific manifolds that spanned the coordinated activity of CA1 cells (Fig. 1c). Using a combination of PCA and canonical correlation analysis (CCA) [3,4], we were able to align these session-specific manifolds (Fig. 1d) across days, environments, and even mice, achieving robust decoding of the animal's position along the track (Fig. 1h). Moreover, using this aligned manifold, we could predict the remapping of single neuron tuning curves (Fig. 1e,f,g), even for those excluded when computing the alignment procedure.
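A minimal sketch of the PCA-then-CCA alignment in the spirit of [3,4]; the data arrays and dimensionalities below are random placeholders for longitudinally registered CA1 activity.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

# Project each session's rates onto its own PCs, then align the two latent
# spaces with CCA. X_day1/X_day2 are (time x neurons) placeholders.
rng = np.random.default_rng(0)
X_day1 = rng.standard_normal((1000, 120))
X_day2 = rng.standard_normal((1000, 90))

k = 10
Z1 = PCA(n_components=k).fit_transform(X_day1)   # session-specific manifold
Z2 = PCA(n_components=k).fit_transform(X_day2)

A1, A2 = CCA(n_components=k).fit(Z1, Z2).transform(Z1, Z2)
# A position decoder trained on A1 can now be applied to A2; projecting single
# neurons through the alignment predicts their remapped tuning curves.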


This work supports the perspective that neural manifolds serve as a stable basis for neural encoding [3,4]. We present a framework in which representational drift, traditionally viewed as unstructured, can be interpreted as a coordinated adaptation at a population level, enabling the prediction of tuning curve profiles for ‘unseen’ neurons. Importantly, we did not need to categorise or select neurons based on their functional classes (e.g., place cells), thereby acknowledging their collective contribution to a preserved manifold space.



Figure 1. (a,b): schematic of experiment set-up and Ca2+ imaging, previously presented in [2]. (c): PCA of the average firing rates from different sessions concatenated. (d): PCA space after each recording has been projected to a common PC space (alignment). (e,f): example of tuning curves before and after alignment, and their correlation and L2 norm (g). (h): Same as (d), colour coded by angular position.
Acknowledgements

References
[1]https://doi.org/10.1038/nn.3329
[2]https://doi.org/10.3389/fncel.2021.618658
[3]https://doi.org/10.1038/s41593-019-0555-4
[4]https://doi.org/10.1038/s41586-023-06714-0


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P079: Characterizing optimal communication in the human brain
Sunday July 6, 2025 17:20 - 19:20 CEST
P079 Characterizing optimal communication in the human brain

Kayson Fakhar*1,2, Fatemeh Hadaeghi2, Caio Seguin3, Alessandra Griffa4, Shrey Dixit2,5, Kenza Fliou2,6, Arnaud Messé2, Gorka Zamora-López7,8, Bratislav Misic9, Claus Hilgetag2,10


1MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.
2Institute of Computational Neuroscience, University Medical Center Eppendorf-Hamburg, Hamburg University, Hamburg Center of Neuroscience, Germany.
3Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA.
4Leenaards Memory Center, Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Montpaisible 16, 1011 Lausanne, Switzerland
5Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
6Sorbonne University, Paris, France.
7Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain.
8Department of Information and Communication Technologies, Pompeu Fabra University, Barcelona, Spain.
9McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, Canada.
10Department of Health Sciences, Boston University, Boston, MA, USA.
*Email: kayson.fakhar@mrc-cbu.cam.ac.uk
Introduction

Efficient communication has been shown to be a key characteristic of the organization of the human brain (Chen et al., 2013). Indeed, it biases the wiring economy of brain networks in its favour, with expensive long-range shortcuts allocated among network hubs (van den Heuvel et al., 2012). However, communication efficiency is often defined through specific signalling models, such as routing along shortest paths, broadcasting via parallel pathways, or diffusive random-walk dynamics, which omit important biological aspects of brain dynamics, including conductance delays, oscillations, and inhibitory interactions (Seguin et al., 2023). As a result, a more general framework is needed to characterize optimal signal transmission within a given brain network and to assess whether actual brain communication is truly efficient.


Methods
Here, we introduce a model-agnostic framework based on multi-site virtual lesions in large-scale neural mass models. Our approach takes a game-theoretical perspective: each brain region seeks to maximize its influence over others, subject to constraints from the underlying network structure and local dynamics. This perspective yields a mathematically rigorous definition of optimal communication for any model of local dynamics on any network structure. We used linear, nonlinear, and oscillatory neural mass models and compared the resulting optimal influence patterns with those derived from abstract models of signalling, i.e., routing, navigation, broadcasting, and diffusion.
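One generic way to operationalize such an influence game is a Shapley-style decomposition over multi-site lesions, sketched below with a placeholder influence function; this illustrates the permutation-sampling logic only and is not the study's neural mass simulations.

import numpy as np

rng = np.random.default_rng(0)
n_regions = 8

def influence(active):
    # Placeholder for "influence exerted when only the regions in `active`
    # are intact", i.e., one run of the lesioned neural mass model.
    return np.sqrt(len(active))

def shapley_influence(n, n_perm=2000):
    # Sample orderings; add regions one at a time and credit each region
    # with its marginal contribution, averaged over orderings.
    phi = np.zeros(n)
    for _ in range(n_perm):
        active, prev = set(), influence(set())
        for r in rng.permutation(n):
            active.add(r)
            cur = influence(active)
            phi[r] += cur - prev
            prev = cur
    return phi / n_perm

print(shapley_influence(n_regions))   # symmetric placeholder -> equal values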


Results
Our results are as follows: First, we found that the broadcasting regime has the closest resemblance to the optimal communication patterns derived from game theory. Second, although the underlying structural connection weight reliably predicts the efficiency of communication between regions, it fails to capture the true influence of weakly connected hub regions. In other words, hubs harness their rich connectivity to broadcast their signal over multiple pathways when they lack a reliable direct connection to their targets. Further comparisons with functional connectivity (fMRI-based correlations) and cortico-cortical evoked potentials reveal two additional insights: (i) functional connectivity is a poor indicator of actual information exchange; and (ii) brain communication is likely to take place close to optimal levels.


Discussion
Altogether, this work provides a rigorous, versatile framework for characterizing optimal brain communication, identifies the most influential regions in the network, and offers further evidence supporting efficient signalling in the brain.



Acknowledgements

This work is in part funded by the German Research Foundation (DFG)-SFB 936-178316478-A1; TRR169-A2; SPP 2041/GO 2888/2-2 and the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under grant TWCF-2022-30510.
References
Chen, Y., Wang, S., Hilgetag, C. C., & Zhou, C. (2013). Trade-off between multiple constraints enables simultaneous formation of modules and hubs in neural systems. PLoS Comput. Biol., 9(3), e1002937.
Seguin, C., Sporns, O., & Zalesky, A. (2023). Brain network communication: Concepts, models and applications. Nat. Rev. Neurosci., 24(9), 557–574.
van den Heuvel, M. P., Kahn, R. S., Goñi, J., & Sporns, O. (2012). High-cost, high-capacity backbone for global brain communication. Proc. Natl. Acad. Sci. U. S. A., 109(28), 11372–11377.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P080: A Representation Learning approach captures clinical effects of slow subthalamic beta activity drifts
Sunday July 6, 2025 17:20 - 19:20 CEST
P080 A Representation Learning approach captures clinical effects of slow subthalamic beta activity drifts

Salvatore Falciglia*1,2, Laura Caffi1,2,3,4, Claudio Baiata3,4, Chiara Palmisano3,4, Ioannis U. Isaias3,4, Alberto Mazzoni1,2

1The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
2Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
3University Hospital Würzburg and Julius Maximilian University of Würzburg, Würzburg, Germany
4Parkinson Institute Milan, ASST G. Pini-CTO, Milan, Italy

*Email: salvatore.falciglia@santannapisa.it

Introduction

Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a mainstay treatment for drug-resistant Parkinson's disease (PD) [1]. Adaptive DBS (aDBS) dynamically adjusts stimulation according to the beta (12-30 Hz) power of the STN local field potentials (LFPs) to match the patient's clinical status [2]. Today, aDBS control depends on accurate determination of pathological beta power thresholds [3]. Notably, on the timescale of days to months, STN beta power shows irregular temporal drifts affecting the long-term efficacy of the aDBS treatment. Here we aim at characterizing these drifts and their clinical effects with a multimodal study, integrating neural and non-neural data streams.

Methods
We conducted home monitoring of patients with PD, focusing on periods of rest and gait activity. Multimodal data were collected, including STN LFPs from chronically implanted DBS electrodes, wearable inertial sensor recordings, and patient-reported diaries. A low-dimensional feature space was derived by integrating the acquired signals through Representation Learning techniques [4]. Leveraging LAURA, our transformer-based framework for predicting the long-term evolution of subthalamic beta power under aDBS therapy [5], we present a multimodal approach where neural data are paired with kinematic data and labelled according to the patient’s clinical status during the monitored activity.
Results
We observed that STN beta power distributions show large, irregular, non-linear fluctuations over several days. Consequently, patients spend a significant portion of time in suboptimal stimulation states. A fully informative description of STN LFP dynamics is achieved by integrating neural, kinematic, and clinical data into a low-dimensional feature-based representation. Latent patterns of STN activity correlate with clinical outcomes as well as with motor and non-motor daily activities, motivating further work on the interpretability of this low-dimensional space. This might support clinically effective recalibration of aDBS parameters on a daily basis.
Discussion
Our study advances the understanding of slow timescales of pathological activity in PD patients implanted with DBS. We developed a comprehensive deep learning framework that integrates neural data with longitudinal clinical information, enabling a more precise characterization of patient status. This will enable personalized control strategies for stimulation parameters (Fig. 1) and enhance the clinician-in-the-loop paradigm by improving patient status assessment and automating aspects of neuromodulation to prevent suboptimal stimulations due to beta power drifts. Ultimately, this work paves the way for novel long-term neuromodulation strategies with potential applications to neurological disorders beyond PD [6].




Figure 1. Block diagram of aDBS as a closed-loop control system. The control loop operates on two separate timescales. In the short-term, the modulation changes with fluctuations in beta power (solid box). In the long-term, the parameters of the fast aDBS algorithm are updated based on the expected drifts of daily beta distributions combined with the neurologist’s clinical assessments (dashed box).
Acknowledgements
The authors declare that financial support was received for the research. The European Union - Next-Generation EU - NRRP M6C2 - Investment 2.1: projects IMAD23ALM MAD, Fit4MedRob, and BRIEF. Fondazione Pezzoli per la Malattia di Parkinson. The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 424778381 - TRR 295.

References
1.https://doi.org/10.1007/s00221-020-05834-7
2.https://doi.org/10.1088/1741-2552/ac3267
3.https://doi.org/10.3390/bioengineering11100990
4.https://doi.org/10.1109/TPAMI.2013.50
5.https://doi.org/10.1101/2024.11.25.24317759
6.https://doi.org/10.3389/fnhum.2024.1320806




Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P081: Unsupervised Dynamical Learning in Recurrent Neural Networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P081 Unsupervised Dynamical Learning in Recurrent Neural Networks

Luca Falorsi*1,2, Maurizo Mattia2, Cristiano Capone2

1PhD program in Mathematics, Sapienza Univ. of Rome, Piazzale Aldo Moro 5, Rome, Italy
2Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanit\`a, Viale Regina Elena 299, Rome, Italy

*Email: luca.falorsi@gmail.com


Introduction
Humans and other animals rapidly adapt their behavior, indicating that the brain can dynamically reconfigure its internal representations in response to changing contexts. We introduce a framework grounded in predictive coding theory [1] that integrates reservoir computing [2] and latent variable models, in which a recurrent neural network learns to reproduce sequences while structuring a latent state-space without direct contextual labels, unlike standard approaches that rely on explicit context vectors [3]. We achieve this by redefining the readout mechanism of an echo state network (ESN) [2] as a latent variable model that adapts via gain modulation to track and reproduce the ongoing in-context sequence.
Methods
An ESN processes sequence examples from a related set of tasks, extracting high-dimensional, nonlinear temporal features. In the first learning phase, we train an encoder network, acquiring a low-dimensional latent space from reservoir activity elicited by varying inputs. Synaptic weights W are optimized offline to map reservoir responses into the latent space. One simple and effective solution is to use principal component analysis (PCA).
When presented with a novel sequence associated to a new context, the latent projections are linearly recombined using gain variables g. These gain variables represent latent features of the current context, dynamically adapting to minimize the (time-discounted) prediction error.
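A minimal sketch of the two phases, with illustrative sizes: reservoir activity is projected through a fixed PCA readout W learned offline, and gain variables g are then adapted online by a gradient step on the prediction error (here, tracking the input itself; the full scheme adapts to a novel in-context sequence).

import numpy as np

rng = np.random.default_rng(0)
N, k, T = 300, 10, 500
W_res = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)   # echo-state scaling
w_in = rng.standard_normal(N)

def run_reservoir(u_seq):
    x, X = np.zeros(N), []
    for u in u_seq:
        x = np.tanh(W_res @ x + w_in * u)
        X.append(x.copy())
    return np.array(X)

u = np.sin(np.linspace(0, 20, T))
X = run_reservoir(u)

# Phase 1 (offline): PCA of reservoir activity defines the fixed readout W.
Xc = X - X.mean(0)
W = np.linalg.svd(Xc, full_matrices=False)[2][:k]        # (k, N) principal axes

# Phase 2 (online): gains g recombine the latent projections, adapted by
# gradient descent on the prediction error (nudging phase).
g, eta = np.zeros(k), 0.05
for x, target in zip(Xc, u):
    z = W @ x
    g += eta * (target - g @ z) * z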
Results
We evaluate our architecture on datasets of periodic trajectories, including testing its ability to trace triangles with different orientations (Fig. 1). The encoder is trained offline using PCA on three predefined orientations and tested on previously unseen ones. Our results show that the network generalizes well across the task family, accurately reproducing unseen sequences. When presented with a novel sequence, the readout dynamically adapts in-context, adjusting gain parameters to optimally recombine the principal components based on prediction error feedback (nudging phase). After the gain parameters stabilize, feedback is gradually removed, and the network autonomously reproduces the sequence (closed-loop phase).
Discussion
The proposed framework decomposes the readout mechanism in a recurrent neural network into fixed synaptic components shared across a task family and a dynamic component that adapts in response to contextual feedback. During online adaptation, the network behaves as a gain-modulated reservoir, where gain variables adjust in response to prediction errors [4]. This aligns with biological evidence that top-down dendritic inputs modulate neuronal gain, shaping context-dependent responses [5]. Our approach offers insights into motor control, suggesting that gain modulation enables the flexible recombination of movement primitives [6]—akin to muscle synergies, which organize motor behaviors through structured activation patterns [7].



Figure 1. Figure 1: A Trajectory of network output during the dynamical adaptation phase on novel trajectories. B Principal components (PC) of the learned gain parameters g. The architecture infers the underlying latent task geometry, correctly representing the 120° rotation symmetry. C Mean square reconstruction error (MSE) for closed loop phase. Dashed lines represent standard deviation over 10 trials.
Acknowledgements
LF acknowledges support by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing and Sapienza University of Rome (AR12419078A2D6F9).
MM and CC acknowledge support from the Italian National Recovery and Resilience Plan (PNRR), M4C2, funded by the European Union–NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS-Italy”)
References
1.https://doi.org/10.1098/rstb.2008.0300

2.https://doi.org/10.1126/science.1091277

3.https://doi.org/10.1103/PhysRevLett.125.088103


4.https://doi.org/10.48550/arXiv.2404.07150

5.https://doi.org/10.1093/cercor/bhh065

6.https://doi.org/10.1038/s41593-018-0276-0

7.https://doi.org/10.1038/nn1010
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P082: Temporal Dynamics of Inter-Spike Intervals in Neural Populations
Sunday July 6, 2025 17:20 - 19:20 CEST
P082 Temporal Dynamics of Inter-Spike Intervals in Neural Populations

Luca Falorsi*1,2, Gianni v. Vinci2, Maurizio Mattia2

1PhD program in Mathematics, Sapienza Univ. of Rome, Piazzale Aldo Moro 5, Rome, Italy
2Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Viale Regina Elena 299, Rome, Italy

*Email: luca.falorsi@gmail.com


Introduction

The study of inter-spike interval (ISI) distributions in neuronal populations plays a crucial role in linking theoretical models with experimental data [1, 2]. As an experimentally accessible measure, ISI distributions provide critical insights into how neurons code and process information [3–5]. However, characterizing these distributions in populations of spiking neurons far from equilibrium remains an open issue. In this work, we develop a population density framework [6–8] to study the joint dynamics of the time from the last spike (τ) and the membrane potential (v) in homogeneous networks of integrate-and-fire neurons.


Methods

We model the network dynamics using a population density approach, where a joint probability distribution describes the fraction of neurons with membrane potential (v) and elapsed time (τ) since their last spike. This distribution evolves according to a two-dimensional Fokker-Planck partial differential equation (PDE), allowing us to systematically analyze how single-neuron ISI distributions change over time, including nonstationary conditions driven by external inputs or network interactions. To further characterize ISI statistics, we derive a hierarchy of one-dimensional PDEs describing the evolution of ISI moments and analytically study first-order perturbations from the stationary state, providing first-order corrections to renewal theory.
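The densities obtained from the PDE can be cross-checked against direct Monte Carlo sampling, as in the sketch below: simulate many LIF neurons, track the time since each neuron's last spike, and pool sampled ISIs by time bin. All parameter values are illustrative; this is the check, not the Fokker-Planck solver itself.

import numpy as np

rng = np.random.default_rng(0)
n, dt = 5_000, 1e-3                   # neurons; time step (units of tau_m)
mu, sigma, v_th, v_r = 0.9, 0.5, 1.0, 0.0

v = rng.uniform(v_r, v_th, n)         # membrane potentials
tau = np.zeros(n)                     # time since each neuron's last spike
isis = []

for _ in range(int(10 / dt)):
    # Euler-Maruyama step of the white-noise-driven LIF dynamics.
    v += dt * (mu - v) + sigma * np.sqrt(dt) * rng.standard_normal(n)
    tau += dt
    fired = v >= v_th
    isis.extend(tau[fired])           # pool by time bin to estimate P(ISI, t)
    v[fired], tau[fired] = v_r, 0.0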


Results

As a first step, we analytically solve the relaxation dynamics towards the steady state for an uncoupled population of neurons, obtaining an explicit expression for the time-dependent ISI distribution. We then show, through numerical simulations, that the introduced equation correctly captures the time evolution of the ISI distribution, even when the population significantly deviates from its stationary state, such as in the presence of limit cycles or time-varying external stimuli (Fig. 1). Additionally, by self-consistently incorporating the sampled empirical firing rate, the resulting stochastic Fokker-Planck equation describes finite-size fluctuations. Spiking network simulations show excellent agreement with the numerical integration of the PDE.


Discussion

We connect our novel population density approach to the Spike Response Model (SRM) [10], demonstrating that marginalizing over v recovers the Refractory Density Method (RDM) [11]. However, the marginal equation remains unclosed, and both SRM and RDM rely on a quasi-renewal approximation based on the stationary ISI distribution.
In conclusion, we developed an analytic framework to characterize ISI distributions in nonstationary regimes. Our approach, validated through simulations, bridges theoretical models with experimental observations. Furthermore, this work paves the way for analytically studying synaptic plasticity mechanisms that depend on the timing of the last spike, such as spike-timing-dependent plasticity.




Figure 1. ISI dynamics in an excitatory limit cycle (same parameters as [9]). Comparing spiking neural network simulations (SNN) with the Fokker-Planck equation (FP) and its stochastic version (SFP). Time is measured in units of the membrane time constant τ_m = 20 ms. A Phase-dependent ISI distribution. B Trajectory of the firing rate and the first moment of the ISI. C Time-averaged ISI distribution.
Acknowledgements
LF acknowledges support by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing and Sapienza University of Rome (AR12419078A2D6F9).


MM and GV acknowledge support from the Italian National Recovery and Resilience Plan (PNRR), M4C2, funded by the European Union–NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS-Italy”)


References
1.https://doi.org/10.1016/s0006-3495(64)86768-0
2.https://doi.org/10.2307/3214232
3.https://doi.org/10.1523/JNEUROSCI.13-01-00334.1993
4.https://doi.org/10.1103/PhysRevLett.67.656
5.https://doi.org/10.1523/JNEUROSCI.18-10-03870.1998
6.https://doi.org/10.1162/089976699300016179
7.https://doi.org/10.1162/089976600300015673
8.https://doi.org/10.1103/PhysRevE.66.051917
9.https://doi.org/10.1103/PhysRevLett.130.097402
10.https://doi.org/10.1103/PhysRevE.51.738
11.https://doi.org/10.1016/j.conb.2019.08.003
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P083: Network Dynamics and Emergence of Synchronisation in A Population of KNDy Neurons
Sunday July 6, 2025 17:20 - 19:20 CEST
P083 Network Dynamics and Emergence of Synchronisation in A Population of KNDy Neurons

Saeed Farjami*1,2, Margaritis Voliotis1,2, Krasimira Tsaneva-Atanasova1,2,3

1Department of Mathematics and Statistics, University of Exeter, Exeter, United Kingdom
2Living Systems Institute, University of Exeter, Exeter, United Kingdom
3EPSRC Hub for Quantitative Modelling in Healthcare, University of Exeter, Exeter, United Kingdom

*Email: s.farjami@exeter.ac.uk


Introduction
Regulation of the reproductive axis critically depends on the gonadotropin-releasing hormone (GnRH) pulse generator. A neuron population in hypothalamic arcuate nucleus co-expressing kisspeptin, neurokinin B and dynorphin (KNDy) plays a key role in generating and maintaining pulsatile GnRH release [1]. While previous research has characterised electrophysiological properties and firing patterns of single KNDy neurons [2], mechanisms governing their network dynamics, particularly the processes underlying synchronisation and burst generation, remain incompletely understood. Recent studies [3,4] have explored how network interactions contribute to the emergence of synchronised activity, but many aspects of the regulatory mechanisms remain elusive.

Methods
We have recently developed a biophysically realistic Hodgkin-Huxley-type model of a single KNDy neuron that incorporates comprehensive electrophysiological properties and calcium dynamics [2]. In this study, we refine this model to better capture experimentally observed features such as the current-frequency response. Building on this, we construct a computational model of a biologically realistic KNDy neuron network, incorporating both fast glutamate-mediated synaptic coupling and slower neuromodulatory interactions via neurokinin B (NKB) and dynorphin (Fig. 1). This fast-slow timescale coupling allows us to investigate the complex interplay between fast and slow synaptic dynamics in regulating network behaviour.
Results
We explore how network structure and neuronal interactions give rise to emergent bursting and synchronisation. Specifically, we assess the impact of connectivity patterns, functional heterogeneity, and glutamate signalling, as well as the distinct roles of NKB and dynorphin in shaping network dynamics. Our results reveal how different signalling pathways contribute to the initiation, maintenance, and termination of both ‘miniature’ and full synchronisation events. In particular, we show how glutamate, acting on a fast timescale, might play a crucial role in triggering synchronisation, whereas slower neuropeptide-mediated interactions via NKB and dynorphin contribute to the propagation and termination of these events.
Discussion
Our findings provide novel insights into the collective behaviour of KNDy neurons, bridging the gap between single-cell dynamics and network-level emergent dynamics. This work, building on previous studies, advances our understanding of how KNDy neuron networks generate and regulate GnRH pulsatile activity. Furthermore, our results offer testable hypotheses for experimental studies, guiding future research using state-of-the-art neurobiological techniques to validate computational predictions. In the long term, understanding KNDy network dynamics could inform the development of treatments for reproductive disorders linked to GnRH pulse generator dysfunction.




Figure 1. Figure 1: A schematic description of a network structure of KNDy neurons and their cell-cell interactions either through glutamate neurotransmitter or neurokinin B (NKB) and dynorphin neuropeptides (A) and feedback mechanisms among these agents (B) giving rise to GnRH pulses in GnRH neurons which in return dictate other hormonal pulsatility.
Acknowledgements
Gratefully, we acknowledge BBSRC for financial support of this study via grants BB/W005883/1 and BB/S019979/1.
References
[1]https://doi.org/10.1210/en.2010-0022.
[2]https://doi.org/10.7554/eLife.96691.4.
[3]https://doi.org/10.1016/j.celrep.2022.111914.
[4]https://doi.org/10.1371/journal.pcbi.1011820.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P084: Gamma Oscillation in the Basal Ganglia: Interplay Between Local Inhibition and Beta Synchronization
Sunday July 6, 2025 17:20 - 19:20 CEST
P084 Gamma Oscillation in the Basal Ganglia: Interplay Between Local Inhibition and Beta Synchronization

Federico Fattorini*1,2, Mahboubeh Ahmadipour1,2, Enrico Cataldo3, Alberto Mazzoni1,2, Nicolò Meneghetti1,2


1The Biorobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
2Department of Excellence for Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy

3Department of Physics, University of Pisa, Pisa, Italy



*Email: federico.fattorini@santannapisa.it
Introduction

Basal ganglia (BG) gamma oscillations (30-100 Hz) have been proposed as valuable biomarkers for guiding adaptive deep brain stimulation in Parkinson’s disease (PD) [1], offering a reliable alternative to beta oscillations (10-30 Hz). However, the origins of gamma oscillations in these structures remain poorly understood. Using a validated spiking network model of the BG [2], we identified striatal and pallidal sources of gamma oscillations. We found that their generation relied on self-inhibitory feedback within these populations and was strongly influenced by interactions with pathological beta oscillations. Our findings provide new insights into the generation of BG gamma oscillations and their role in PD pathology.


Methods
The BG model (Fig. 1A) included approximately 14,000 neurons divided into six populations: D1 and D2 medium spiny neurons, fast-spiking neurons, the prototypic (GPe-TI) and arkypallidal (GPe-TA) populations of the external globus pallidus, and the subthalamic nucleus. We utilized non-linear integrate-and-fire neurons with population-specific parameters. The transition from healthy to Parkinsonian conditions was simulated with a dopamine depletion parameter that increased the input to D2. The origins of gamma oscillations were explored by selectively disconnecting model projections and isolating nuclei that exhibited gamma activity. Interactions with pathological beta oscillations were analyzed by studying phase-frequency and phase-amplitude coupling.
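Phase-amplitude coupling between a beta phase and a gamma envelope is commonly quantified with a Hilbert-transform mean-vector-length index, sketched below on synthetic data; this generic metric is an assumption for illustration, not necessarily the exact measure used in the study.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, beta=(13, 30), gamma=(60, 90)):
    # Mean-vector-length coupling between beta phase and gamma amplitude.
    phase = np.angle(hilbert(bandpass(x, *beta, fs)))
    amp = np.abs(hilbert(bandpass(x, *gamma, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()

# Synthetic LFP: gamma amplitude modulated by beta phase -> nonzero index.
fs = 1000
t = np.arange(0, 10, 1 / fs)
beta_phase = 2 * np.pi * 20 * t
lfp = np.sin(beta_phase) + (1 + 0.5 * np.sin(beta_phase)) * 0.3 * np.sin(2 * np.pi * 75 * t)
print(pac_mvl(lfp, fs))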
Results
We identified two distinct gamma oscillations in our model (Fig. 1B): high-frequency (≈100 Hz) gamma in GPe-TI and slower (≈70 Hz) ones in D2 medium spiny neurons. While GPe-TI gamma oscillations were prominent in healthy and pathological states, D2 oscillations emerged under dopamine-depleted conditions. Both rhythms required self-inhibition within the corresponding nuclei to be generated. However, this mechanism alone could not account for all gamma dynamics. Beta oscillations, generated by the model under pathological conditions, affected GPe-TI gamma frequency via phase-frequency coupling and amplified D2 gamma activity through phase-amplitude coupling. Both interactions were mediated by beta-induced modulation of spiking activity.

Discussion
By employing a computational model of the BG, we offered a comprehensive explanation of gamma rhythmogenesis in these structures, identifying two sources: D2 and GPe-TI. Our results were consistent with experimental findings from both rat [3] and human local field potentials [4] and aligned with the results of other computational models [5]. We also clarified how these rhythms were generated through self-inhibition within these nuclei and how they interacted with pathological beta synchronization. Our insights into the mechanism behind gamma generation in the BG represent a crucial step toward advancing our understanding of PD and improving the potential of gamma oscillations as biomarkers for adaptive deep brain stimulation.





Figure 1. A) Computational model of the basal ganglia: FSN (striatal spiking interneurons), D1/D2 (medium spiny neurons with D1 and D2 dopamine receptors), GPe-TA/TI (arkypallidal/prototypic populations of the globus pallidus externa), and STN (subthalamic nucleus). B) Power spectral densities (PSDs) of GPe-TI (top) and D2 (bottom) activities under healthy and Parkinsonian (PD) conditions.
Acknowledgements
This work was supported by the Italian Ministry of Research, in the context of the project NRRP “Fit4MedRob-Fit for Medical Robotics” Grant (# PNC0000007).
References



[1] https://doi.org/10.1038/s41591-024-03196-z
[2] https://doi.org/10.1371/journal.pcbi.1010645
[3] https://doi.org/10.1111/cns.14241
[4] https://doi.org/10.1016/j.expneurol.2012.07.005
[5] https://doi.org/10.1523/JNEUROSCI.0419-23.2023

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P085: Biological validation of a computational model of nitric oxide dynamics by emulating the nitric oxide diffusion experiment in the endothelium
Sunday July 6, 2025 17:20 - 19:20 CEST
P085 Biological validation of a computational model of nitric oxide dynamics by emulating the nitric oxide diffusion experiment in the endothelium

Pablo Fernández-López1, Ylermi Cabrera-León1, Patricio García Báez1,2, Scott McElroy3, Salvador Dura-Bernal3 and Carmen Paz Suárez-Araujo*1
1Instituto Universitario de Cibernética, Empresa y Sociedad, Universidad de Las Palmas de Gran Canaria, Parque Científico Tecnológico, Campus Universitario de Tafira, Las Palmas de Gran Canaria, 35017, Canary Islands, Spain.
2Departamento de Ingeniería Informática y de Sistemas, Universidad de La Laguna, Camino San Francisco de Paula, 19, Escuela Superior de Ingeniería y Tecnología, San Cristóbal de La Laguna, 38200, Canary Islands, Spain.
3State University of New York (SUNY) Downstate Health Sciences University, 450 Clarkson Avenue, Brooklyn, NY, USA 11203.



*Email: carmenpaz.suarez@ulpgc.es

Understanding how the brain works, how it is structured and how it computes is one of the goals of computational neuroscience. An essential step in this direction is to understand the cellular communication that enables the transition from nerve cells to cognition.
It is now accepted that the links between neurons are not only established by synaptic connections, but also by the confluence of different cellular signals that affect global brain activity, with the underlying mechanism being the diffusion of neuroactive substances into the extracellular space (ECS). One of these substances is the free radical gas nitric oxide (NO), which, in turn, determines a new type of information transmission: volume transmission (VT). VT is a complex form of short- and long-distance communication in which the ECS acts not only as a microenvironment separating nerve cells, but also as an information channel [1, 2]. NO is a signaling molecule that is synthesized in a number of tissues by NO synthases and has the ability to regulate its own production. It is lipid soluble, membrane permeable and has a high diffusivity in both aqueous and lipid environments.
In the absence of definitive experimental data to understand how NO functions as a neuronal signalling molecule, we have developed a computational model of NO diffusion based on non-negative and compartmental dynamical systems and transport phenomena [3].
The proposed model has been validated in the biological environment, specifically in the endothelium. In this work, the biological validation is approached by reproducing the NO diffusion experiment in the aorta reported in [4]. We implement our model with two compartments, using real measurements of NO synthesis and diffusion processes in the endothelial cell and in the smooth muscle cells of the aorta at a distance of 100 ± 2 µm between them. A fitting procedure to the observed NO dynamics was executed, and hypotheses related to the different processes in the NO dynamics were proposed.
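To make the structure concrete, a minimal two-compartment sketch with self-regulated synthesis, Fickian exchange and first-order decay is shown below; the rate constants are arbitrary placeholders, not the values fitted to the aorta measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal two-compartment sketch of NO synthesis, diffusion and decay;
# the rate constants below are illustrative placeholders, not the
# fitted values of the validated model.
k_syn, k_diff, k_dec, K_inh = 1.0, 0.5, 0.2, 2.0

def no_dynamics(t, c):
    c1, c2 = c                               # endothelium, smooth muscle
    synthesis = k_syn / (1.0 + c1 / K_inh)   # self-regulated production
    flux = k_diff * (c1 - c2)                # Fickian exchange
    return [synthesis - flux - k_dec * c1,
            flux - k_dec * c2]

sol = solve_ivp(no_dynamics, (0.0, 50.0), [0.0, 0.0], dense_output=True)
print("steady-state NO concentrations:", sol.y[:, -1])
```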
Our results provide evidence that the compartmental model of NO diffusion has allowed the design of a computational framework [5] to study and determine the dynamics of synthesis, diffusion and self-regulation of NO in the brain and in artificial environments. We have also shown that this model is very powerful because it allows the incorporation of all the biological features and existing constraints on NO release and diffusion, and on the environment where the NO diffusion processes take place.
Finally, it has been shown that our model is an important tool for designing and interpreting biological experiments on the underlying processes of NO dynamics and NO behaviour, and on their impact on brain structure and function as well as on artificial neural systems.





Acknowledgements
This work has been funded by the Consejería de Vicepresidencia 1ª y de O. P., Inf., T. y M. del Cabildo de GC under Grant Nº “23/2021”, as well as by the ‘Marie Curie Chair’ under Grant Nº “38/2023”, and ‘Marie Curie Chair’ under Grant Nº “CGC/2024/9655”.The latter was funded by the Consejeria de Vicepresidencia 1ª y de Gobierno de O. P. e Inf., Arq. y V. del Cabildo de GC.
References
[1] https://doi.org/10.1177/107385849700300113.
[2] https://doi.org/10.1016/j.neuroscience.2004.06.077.
[3] https://doi.org/10.1007/978-3-319-26555-1_59.
[4] https://doi.org/10.1006/bbrc.1993.1914.
[5] https://doi.org/10.1063/1.1291268.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P086: Synergistic short-term synaptic plasticity mechanisms for working memory
Sunday July 6, 2025 17:20 - 19:20 CEST
P086 Synergistic short-term synaptic plasticity mechanisms for working memory

Florian Fiebig*1, Nikolaos Chrysanthidis1, Anders Lansner1,2, Pawel Herman1,3

1KTH Royal Institute of Technology, Dept of Computational Science and Technology, Stockholm, Sweden
2Stockholm University, Department of Mathematics, Stockholm, Sweden
3Digital Futures, KTH Royal Institute of Technology
*Email: fiebig@kth.se




Introduction

Working memory (WM) is essential for almost every cognitive task. The neural and synaptic mechanisms supporting the rapid encoding and maintenance of memories in diverse tasks are the subject of ongoing debate. The traditional view of WM as stationary persistent firing of selective neuronal populations has given way to newer ideas about mechanisms that support a more dynamic maintenance of multiple items and may also tolerate activity disruption. Computational WM models based on different biologically plausible synaptic and neural plasticity mechanisms have been proposed but not combined systematically. Monolithic models (WM function explained by one particular mechanism) are theoretically appealing but offer only narrow explanations.
Methods
In this study we evaluate the interactions between three commonly used classes of plasticity: Intrinsic excitability (postsynaptic, increasing the excitability of spiking neurons), synaptic facilitation/augmentation (presynaptic, potentiating outgoing synapses of spiking neurons) and Hebbian plasticity (pre-post-synaptic, potentiating recurrent synapses driven by correlations), see Fig.1. Combinations of these mechanisms are systematically tested in a spiking neural network model on a broad suite of tasks or functional motifs deemed principally important for WM operation, such as one-shot encoding, free and cued recall, delay maintenance and updating. In our evaluation we focus on operational task performance and biological plausibility.
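As an example of the presynaptic class, one common formulation of the Tsodyks-Markram facilitation/depression update [1] is sketched below; parameter values and the exact update ordering vary across the literature, so this is an illustrative variant rather than the study's implementation.

```python
import numpy as np

# One common formulation of the Tsodyks-Markram short-term plasticity
# update, applied at presynaptic spike times; parameters are illustrative.
U, tau_rec, tau_fac = 0.2, 200.0, 600.0   # baseline release, recovery, facilitation (ms)

def tm_efficacies(spike_times, A=1.0):
    """Return the effective synaptic efficacy at each presynaptic spike."""
    x, u, last_t = 1.0, 0.0, None          # resources, utilization
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resource recovery to 1
            u = u * np.exp(-dt / tau_fac)                  # facilitation decay to 0
        u = u + U * (1.0 - u)       # spike-triggered increase in release probability
        efficacies.append(A * u * x)
        x = x * (1.0 - u)           # depression: resources consumed by release
        last_t = t
    return efficacies

print(tm_efficacies([0.0, 50.0, 100.0, 150.0]))
```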
Results
We show that previously proposed short-term plasticity mechanisms may not necessarily be competing explanations, but instead yield interesting functional interactions on a wide set of WM tasks and enhance the biological plausibility of spiking neural network models. Our results indicate that a composite model, combining several commonly proposed plasticity mechanisms for WM function, is superior to more reductionist variants. Importantly, we attribute the observable differences to the principal nature of specific types of plasticity. For example, we find a previously undescribed synergistic function of Hebbian plasticity that supports the rapid updating of multi-item WM sets through rapidly learned inhibition.
Discussion
Our study suggests that commonly used forms of plasticity proposed for the buffering of WM information besides persistent activity are eminently compatible, and yield synergies that improve function and biological plausibility in a modular spiking neural network model. Combinations enable a more holistic model of WM responsive to broader task demands than what can be achieved with more reductionist models. Conversely, the targeted ablation of specific plasticity components reveals that different mechanisms are differentially important to specific aspects of WM function, advancing the search for more capable, robust and flexible models accounting for new experimental evidence of bursty and activity-silent multi-item maintenance.




Figure 1. Fig. 1 - Plasticity Combinations. The Augmentation plasticity model is implemented using the well-known Tsodyks-Markram mechanism [1]. The Bayesian Confidence Propagation Neural Network (BCPNN) learning rule implements intrinsic plasticity, as well as Hebbian plasticity [2]. These 3 components can be simulated separately or together, yielding 7 scenarios to simulate and study.
Acknowledgements
We would like to thank the Swedish Research Council (VR) grants: 2018-05360 and 2016-05871, Digital Futures and Swedish e-science Research Center (SeRC) for their support.
References
[1] Tsodyks, M., Pawelzik, K., & Markram, H. (1998). Neural networks with dynamic synapses. Neural Computation, 10(4), 821–835.
[2] Tully, P. J., Hennig, M. H., & Lansner, A. (2014). Synaptic and nonsynaptic plasticity approximating probabilistic inference. Frontiers in Synaptic Neuroscience, 6, 8.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P087: Structured Inhibition and Excitation in HVC: A Biophysical Approach to Song Motor Sequences
Sunday July 6, 2025 17:20 - 19:20 CEST
P087 Structured Inhibition and Excitation in HVC: A Biophysical Approach to Song Motor Sequences

Fatima H. Fneich*1, Joseph Bakarji2, Arij Daou1

1Biomedical Engineering Program, American University of Beirut, Lebanon
2Department of Mechanical Engineering, American University of Beirut, Lebanon

*Email: fhf07@mail.aub.edu

Introduction
Stereotyped neural sequences occur in the brain [1], yet the neurophysiological mechanisms underlying their generation remain unclear. Birdsong is a prominent model to study such behavior, as juvenile songbirds learn from tutors and later produce stereotyped song patterns. The premotor nucleus HVC coordinates motor and auditory activity for learned vocalizations. HVC consists of three neural populations with distinct in vitro and in vivo electrophysiological responses [2,3]. Existing models explain HVC's network using intrinsic circuitry, extrinsic feedback, or both. Here, we develop a physiologically realistic neural network model incorporating the three classes of HVC neurons based on pharmacologically identified ion channels and synaptic currents.
Methods
We developed a conductance-based Hodgkin-Huxley-type model of HVC neurons and connected them via biologically realistic synaptic currents. The network was structured as a feedforward chain of microcircuits encoding sub-syllabic song segments, interacting through structured feedback inhibition [4]. Simulations were performed using MATLAB's ode45 solver, incorporating key ionic currents, including T-type Ca²⁺, Ca²⁺-dependent K⁺, A-type K⁺, and hyperpolarization-activated inward current. Parameters were adjusted to replicate in vivo-like activity. The model reproduces sequential propagation of neural activity, highlighting intrinsic neuronal properties and synaptic interactions essential for song production.
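A classic Hodgkin-Huxley single compartment, the scaffold on which the HVC model builds by adding T-type Ca²⁺, Ca²⁺-dependent K⁺, A-type K⁺ and h currents, can be integrated as follows; scipy's solve_ivp stands in for the MATLAB ode45 solver used in the study, and the parameters are textbook squid-axon values rather than the HVC fits.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic Hodgkin-Huxley skeleton (textbook squid-axon parameters);
# the HVC model adds further currents on top of this kind of scaffold.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm2, mS/cm2
ENa, EK, EL, Iapp = 50.0, -77.0, -54.4, 10.0    # mV, uA/cm2

def rates(V):
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

def hh(t, y):
    V, n, m, h = y
    an, bn, am, bm, ah, bh = rates(V)
    INa = gNa * m**3 * h * (V - ENa)
    IK = gK * n**4 * (V - EK)
    IL = gL * (V - EL)
    return [(Iapp - INa - IK - IL) / C,
            an * (1 - n) - bn * n,
            am * (1 - m) - bm * m,
            ah * (1 - h) - bh * h]

sol = solve_ivp(hh, (0.0, 100.0), [-65.0, 0.317, 0.052, 0.596], max_step=0.05)
print("peak V (mV):", sol.y[0].max())
```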
Results
The model reproduced in vivo activity patterns of HVC neuron classes. HVCRA neurons exhibited sparse, time-locked bursts, each lasting ~10 ms. HVCX neurons generated 1-4 bursts, typically following inhibitory rebound, while HVCINT neurons displayed tonic activity interspersed with bursts. Sequential propagation was maintained through structured inhibition and excitation, with synaptic conductance tuned to match dual intracellular recordings. The model accurately captured burst timing, spike shapes, and firing dynamics observed in experimental recordings, confirming its ability to simulate biologically realistic song-related neural activity.
Discussion
Our model provides a biophysically realistic representation of sequence generation in HVC, emphasizing the role of intrinsic properties and synaptic connectivity. The structured inhibition from HVCINT neurons ensured precise burst timing in HVCRA and HVCX neurons, supporting stable propagation. Key ionic currents, including T-type Ca²⁺ and A-type K⁺, regulated burst initiation and duration. These findings refine existing models by incorporating experimentally observed biophysical details. This work offers new insights into the neural basis of motor sequence learning and could inform studies of other stereotyped behaviors.





Acknowledgements
This work was supported by the University Research Board (URB) and the Medical Practice Plan (MPP) grants at the American University of Beirut.

References
[1] https://doi.org/10.1038/nature09514
[2] https://doi.org/10.1038/nature00974
[3] https://doi.org/10.1152/jn.00162.2013
[4] https://doi.org/10.7554/eLife.105526.1



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P088: Amyloid-Induced Network Resilience and Collapse in Alzheimer’s Disease: Insights from Computational Modeling
Sunday July 6, 2025 17:20 - 19:20 CEST
P088 Amyloid-Induced Network Resilience and Collapse in Alzheimer’s Disease: Insights from Computational Modeling

Ediline L. F. Nguessap*1, Fernando Fagundes Ferreira1

1Department of Physics, University of São Paulo, Ribeirao Preto, Brazil

*Email: fonela@usp.br

Introduction

Alzheimer's disease (AD) is characterized by progressive synaptic loss, neuronal dysfunction, and network disintegration due to amyloid-beta accumulation [1,2,3,4,5]. While experimental studies identify amyloid-induced connectivity changes, the role of network resilience (the ability of the brain to maintain function despite synaptic loss) remains poorly understood. Most computational models of AD focus either on static network properties (graph-theory-based approaches) or on single-neuron dynamics [6], neglecting the interplay between progressive structural collapse and functional neuronal activity. Here, we model a small-world neuronal network and investigate its structural resilience and dynamical response to amyloid-driven synapse loss.

Methods
We construct a small-world neuronal network with synaptic weights evolving under amyloid-induced weakening. We track network resilience using key metrics: the largest strongly connected component (LSCC) as a measure of global connectivity [7,8], and global efficiency, clustering coefficient, and shortest path length to quantify functional resilience. To study functional neuronal activity, we simulate a network of Izhikevich neurons with synaptic coupling, observing how firing rates and synchronization evolve before, during, and after LSCC collapse. We further refine our model by removing isolated neurons and reducing background input when the LSCC collapses, to ensure biological realism.
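A minimal sketch of the resilience-tracking step, using networkx on an illustrative directed small-world graph (the study's construction and weakening schedule differ), might look like this:

```python
import random
import numpy as np
import networkx as nx

# Track LSCC size and global efficiency as synapses (edges) are removed;
# graph size and removal schedule are illustrative placeholders.
random.seed(1)
G = nx.watts_strogatz_graph(200, k=8, p=0.1).to_directed()  # small-world synapses

edges = list(G.edges())
random.shuffle(edges)

for frac in np.linspace(0.0, 0.9, 10):
    H = G.copy()
    H.remove_edges_from(edges[: int(frac * len(edges))])    # amyloid-driven loss
    lscc = max(nx.strongly_connected_components(H), key=len)
    eff = nx.global_efficiency(H.to_undirected())
    print(f"synaptic loss {frac:.1f}: LSCC size {len(lscc)}, efficiency {eff:.3f}")
```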
Results
Our simulations reveal a critical amyloid threshold (~75% synaptic loss) beyond which LSCC rapidly collapses, marking the transition from a functionally connected to a fragmented network. Small-world networks exhibit greater resilience than random ones, with LSCC persisting longer due to local clustering and efficient communication pathways. Global efficiency remains stable early on but drops sharply with LSCC collapse, while clustering initially increases (compensatory rewiring) before declining, indicating widespread disconnection. Neuronal firing desynchronizes post-collapse, aligning with cognitive dysfunction in AD, and removing isolated neurons accelerates activity decline, mimicking cortical atrophy.
Discussion
Our findings suggest that network topology plays a crucial role in Alzheimer's resilience. As the LSCC shrinks past a critical threshold, functional decline accelerates, aligning with AD progression. Neurons remain active but lose synchronization, suggesting that cortical regions stay active in late AD stages but fail to coordinate information transfer. Biologically inspired modifications (removing isolated neurons, reducing background input) enhance realism by preventing unrealistic activity after connectivity loss. This suggests that network vulnerability could serve as an AD biomarker. Future work should explore synaptic plasticity, tau pathology, and patient data (EEG, fMRI) for further improvement.



Acknowledgements
FFF is supported by Brazilian National Council for Scientific and Technological Development (CNPq) 316664/2021-9. ELFN is supported by Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES).
References




[1] https://doi.org/10.1016/j.neurobiolaging.2005.10.017
[2] https://doi.org/10.1212/WNL.00000000000004012
[3] https://doi.org/10.1371/journal.pone.0196402
[4] https://doi.org/10.1089/ars.2023.0010
[5] https://doi.org/10.3389/fnbeh.2014.00106
[6] https://doi.org/10.1097/NEN.0b013e31824f1c1a
[7] https://doi.org/10.1016/j.amc.2021.126372
[8] https://doi.org/10.1038/s41598-019-42977-6
[9] https://doi.org/10.1038/nrn2575



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P089: Parameter identifiability in model-based inference for neurodegenerative diseases: noninvasive stimulation
Sunday July 6, 2025 17:20 - 19:20 CEST
P089 Parameter identifiability in model-based inference for neurodegenerative diseases: noninvasive stimulation

Jan Fousek*¹


¹ Central European Institute of Technology (CEITEC), Masaryk University, Brno, Czech Republic


*Email: jan.fousek@ceitec.muni.cz
Introduction

Tracking the trajectories of progression of patients with neurodegenerative diseases remains a challenging task. While employing connectome-based models can improve the performance of machine-learning-based classification [1], the identifiability of relevant parameters can be challenging when using only data features derived from spontaneous (resting-state) data [2]. Here, in the context of Alzheimer's disease (AD), we explore an alternative approach based on perturbations, namely the response to single-pulse transcranial magnetic stimulation recorded by EEG.

Methods
First, a whole-brain model using a normative human connectome was set up together with an EEG forward solution in order to replicate the TMS evoked potential (TEP) [3] following precuneus stimulation [4]. Next, to define the trajectory of AD in parameter space, we used a previously established trajectory of AD progression, capturing how the evolving spatial profile of the proteinopathy is reflected in altered model parameters [5]. Using simulation-based inference, we then attempted to recover the parameters from synthetic data simulated along the AD progression trajectory, and assessed the shrinkage of the posterior distributions and the precision of the point estimates.
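The two diagnostics can be computed directly from samples, as in the sketch below; prior_samples and posterior_samples are placeholder names for the output of the simulation-based-inference pipeline.

```python
import numpy as np

# Posterior shrinkage and point-estimate error, computed from samples;
# the stand-in sample arrays below mimic the inference output.
def shrinkage(prior_samples, posterior_samples):
    """1 - posterior variance / prior variance (1 = fully constrained)."""
    return 1.0 - np.var(posterior_samples) / np.var(prior_samples)

def point_error(posterior_samples, ground_truth):
    """Distance of the posterior mean from the ground-truth parameter."""
    return np.abs(np.mean(posterior_samples) - ground_truth)

rng = np.random.default_rng(2)
prior_samples = rng.uniform(0.0, 2.0, 10000)
posterior_samples = rng.normal(1.05, 0.05, 10000)   # stand-in for inferred samples
print(shrinkage(prior_samples, posterior_samples),
      point_error(posterior_samples, ground_truth=1.0))
```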
Results
The model successfully reproduced the TEP patterns found in the empirical data. Along the progression trajectory, the model parameters remained identifiable, showing significant shrinkage of the posterior distribution with respect to the prior and a small distance of the posterior means from the ground truth. Additionally, while we observed some correlation between the estimated parameters (hinting at a certain degree of degeneracy), it did not impact the performance of the inference.
Discussion
Here we demonstrate that the brain response to noninvasive stimulation is informative enough to allow effective parameter inference in connectome-based models. The workflow can be easily adapted to different data features derived from the TEPs, as well as to different stimulation targets. As a natural next step, this approach will be benchmarked and validated on individual-subject data from empirical datasets.




Acknowledgements
Jan Fousek receives funding from the European Union’s Horizon Europe research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101130827.
References
[1] https://doi.org/10.1002/trc2.12303
[2] https://doi.org/10.1088/2632-2153/ad6230
[3] https://doi.org/10.3389/fninf.2013.00010
[4] https://doi.org/10.1016/j.clinph.2024.09.007
[5] https://doi.org/10.1523/ENEURO.0345-23.2023
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P090: Identifying Manifold Degeneracy and Estimating Confidence for Parameters of Compartmental Neuron Models with Hodgkin-Huxley Type Conductances
Sunday July 6, 2025 17:20 - 19:20 CEST
P090 Identifying Manifold Degeneracy and Estimating Confidence for Parameters of Compartmental Neuron Models with Hodgkin-Huxley Type Conductances

Saina Namazifard1, Anwar Khaddaj2, Matthias Heinkenschloss2, Fabrizio Gabbiani*1

1Department of Neuroscience, Baylor College of Medicine, Houston, USA
2Department of Computation Applied Mathematics & Operations Research, Rice University, Houston, USA
*Email: gabbiani@bcm.edu

Introduction

Much work has been devoted to fitting the biophysical properties of neurons in compartmental models with Hodgkin-Huxley type conductances. Yet little is known about how reliable model parameters are, or about their possible degeneracy. For example, when characterizing a membrane conductance through voltage-clamp (VC) experiments, one would like to know whether the data will constrain the parameters and how reliable their estimates are. Similarly, when studying the responses of a neuron with multiple conductances in current clamp (CC), one would like to know how robust the model is to changes in peak conductances. Such degeneracy is linked to biological robustness [1] and is key in understanding the constraints posed by conductance distributions on dendritic computation [2].

Methods
A one-compartment model with Hodgkin-Huxley (HH) type conductances was used. We studied synthetic and experimental VC data of the H-type conductance (gH) that is widely expressed in neuronal dendrites. We also studied the original HH model in VC and CC. Finally, we considered a stomatogastric ganglion (STG) neuron model in CC. The ordinary differential equation solutions, parameters, and their sensitivities were simultaneously estimated using collocation methods and automatic differentiation. This allowed us to solve the non-linear least squares (NLLS) problem associated with each model. Iterative tracing of the parameter degeneracy manifold was performed based on the singular value decomposition (SVD) of the NLLS residual Jacobian.
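The idea behind the SVD-based identifiability analysis can be sketched as follows; J is a placeholder for the residual Jacobian produced by the collocation machinery, and the ranking heuristic shown is a simplified stand-in for the subset selection algorithm of [3].

```python
import numpy as np

# Flag the least identifiable parameters: the smallest singular directions
# of the residual Jacobian span near-flat directions of the objective.
def least_identifiable(J, names, k=2):
    """Rank parameters by their weight in the k smallest singular directions."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    weights = np.sum(Vt[-k:, :] ** 2, axis=0)   # energy in near-flat directions
    order = np.argsort(weights)[::-1]
    return [(names[i], round(float(weights[i]), 3)) for i in order[:k]]

rng = np.random.default_rng(3)
J = rng.standard_normal((200, 7))
J[:, 1] = J[:, 0] + 1e-6 * rng.standard_normal(200)   # two nearly degenerate columns
print(least_identifiable(J, ["gL", "EL", "EH", "g1", "g2", "g3", "g4"]))
```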
Results & Discussion
We identified parameter degeneracy using an SVD-based subset selection algorithm [3] applied to the objective function Jacobian. In the gH model in VC, the 2 least identifiable parameters were the reversal potentials of the leak (gL) and gH conductances, EL and EH. EH was constrained by tail current experiments. This left a 1-dimensional (1-D) non-linear solution manifold for the remaining 7 parameters: gL, EH, and peak gH at 5 VC values. In the HH model in VC, 3 parameters were least identifiable: EK, gNa and EL. The HH model in CC exhibited approximate parameter degeneracy with a 1-D solution manifold. Similar results were obtained for the STG model. The role of EL in degeneracy was unexpected. Our results generalize to multi-compartment models.




Acknowledgements
Supported by NIH grant R01 NS130917.

References
1. Marom, S., & Marder, E. (2023). A biophysical perspective on the resilience of neuronal excitability across timescales. Nature Reviews Neuroscience, 24, 640–652. https://doi.org/10.1038/s41583-023-00730-9
2. Dewell, R. B., Zhu, Y., Eisenbrandt, M., Morse, R., & Gabbiani, F. (2022). Contrast polarity-specific mapping improves efficiency of neuronal computation for collision detection. eLife, 11:e79772. https://doi.org/10.7554/eLife.79772
3. Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press. https://epubs.siam.org/doi/book/10.1137/1.9781421407944
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P091: Dynamic causal modelling (DCM) for effective connectivity in MEG High-Gamma-Activity: a data-driven Markov-Chain Monte Carlo approach
Sunday July 6, 2025 17:20 - 19:20 CEST
P091 Dynamic causal modelling (DCM) for effective connectivity in MEG High-Gamma-Activity: a data-driven Markov-Chain Monte Carlo approach
P. García-Rodríguez, M. Gilson, J.-D. Lemaréchal, A. Brovelli


CNRS UMR 7289 - Aix Marseille Université, Institut de Neurosciences de la Timone, Campus Santé Timone, Marseille, France


Email: pedro.garcia-rodriguez@univ-amu.fr

Introduction


Model inversion in DCM traditionally considers the application of Bayesian variational schemes, i.e., quadratic approximations in the vicinity of minima in the parameter space [1]. On the other hand, more general Markov Chain Monte Carlo (MCMC) methods opt for an intensive use of random numbers to sample posterior probability distributions. The successful application of either approach depends heavily on the correct choice of prior distributions.

Methods

Here we propose an automated workflow combining MCMC with more conventional gradient-descent (GD) optimization techniques. Following the bi-linear model [2], a simpler DCM is considered, with a matrix A for effective connectivity and a matrix C for sensory driving inputs. Alpha and Gamma functions for the input profiles complete the modeling scenario.



The model's parameters are estimated in three parts. Firstly, the matrix A is initialized from a Gaussian distribution with null mean and variance given by observer-specific or group-level Granger causality (GC) computed from the data. Next, GD algorithms implement a constrained, bounded optimization to keep input parameters within plausible (positive) intervals. The adequacy of the parameter values found is further tested through a Levenberg-Marquardt GD form. Finally, an MCMC Bayesian scheme incorporates the covariance of the observation noise in a multivariate Gaussian likelihood model. A generative model is thus completed with the parameters' prior distributions based on the GD optimizations mentioned above. Normal or Log-Normal distributions are alternatively used, the latter to ensure positive values after sampling when needed.
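A minimal random-walk Metropolis sketch of the final stage is shown below, with a toy identity forward model standing in for the bi-linear DCM; the noise covariance and prior settings are placeholders for the empirically estimated quantities described above.

```python
import numpy as np

# Random-walk Metropolis with a multivariate Gaussian likelihood; 'predict'
# is a toy stand-in for the DCM forward model, and the prior would be
# centred on the GD estimate in the actual workflow.
rng = np.random.default_rng(5)
data = np.array([1.0, -0.5])
noise_cov_inv = np.eye(2)                  # from pre-stimulus noise estimation
prior_mean, prior_var = np.zeros(2), 4.0
predict = lambda theta: theta              # placeholder forward model

def log_post(theta):
    r = data - predict(theta)
    return (-0.5 * r @ noise_cov_inv @ r
            - 0.5 * np.sum((theta - prior_mean) ** 2) / prior_var)

theta, lp, chain = prior_mean.copy(), log_post(prior_mean), []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

print("posterior mean:", np.mean(chain[5000:], axis=0))   # discard burn-in
```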


Results



The approach is applied to high-gamma activity (HGA) induced responses during visuomotor transformation tasks executed by 8 subjects, as reported in [3]. The methods were applied to hundreds of trials for each subject, providing a handy data-driven DCM framework to evaluate the plausibility of various model configurations. Observation noise is empirically estimated from the pre-stimulus periods in the original trials. The model inversion pipeline tends to support the most realistic model configuration tested, with an apparent relation between the estimated effective connectivity A and the GC matrix (Fig. 1).




Discussion
Comparison of prior and posterior distributions can help distinguish informative from non-informative parameters. Initialization of matrix A with structural connectivity instead of GC was also tested.





Figure 1. A DCM for high-gamma-activity (HGA). First column: brain regions and model configurations tested (top) and corresponding Granger-Causality (GC) matrix (bottom). Second column: model predictions compared to experimental HGA profiles (top) and relation between GC and the estimated effective connectivity matrix A (bottom).
Acknowledgements
A.B. and P.G-R were supported by EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project).
References


[1] Zeidman P, Friston K, Parr T. (2023). A primer on Variational Laplace (VL). Neuroimage, 279:120310. doi: 10.1016/j.neuroimage.2023.120310.
[2] Chen CC, Kiebel SJ, Friston KJ (2008). Dynamic causal modelling of induced responses. Neuroimage, 41(4):1293-1312. doi: 10.1016/j.neuroimage.2008.03.026. PMID: 18485744.


[3] Brovelli A., Chicharro D., Badier J-M., Wang H., Jirsa V. (2015). Characterization of Cortical Networks and Corticocortical Functional Connectivity Mediating Arbitrary Visuomotor Mapping. J. Neuroscience 35(37):12643-12658. doi: 10.1523/JNEUROSCI.4892-14.2015.

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P092: Reversal of Wave Direction in Unidirectionally Coupled Oscillator Chains
Sunday July 6, 2025 17:20 - 19:20 CEST
P092 Reversal of Wave Direction in Unidirectionally Coupled Oscillator Chains

Richard Gast*1, Guy Elisha2, Sara A. Solla3,4, Neelesh A. Patankar5

1Department of Neuroscience, The Scripps Research Institute, San Diego, US
2Brain Mind Institute, EPFL, Lausanne, Switzerland
3Department of Neuroscience, Northwestern University, Evanston, US
4Department of Physics and Astronomy, Northwestern University, Evanston, US
5Department of Mechanical Engineering, Northwestern University, Evanston, US

*Email: rgast@scripps.edu

Introduction: Chains of coupled oscillators have been used to model animal behavior such as crawling, swimming, and peristalsis [1]. In such chains, phase lags between adjacent oscillators yield a propagating wave, which can either be anterograde (from proximal to distal) or retrograde (from distal to proximal). Switches in the direction of wave propagation have been related to increased flexibility, but also to pathology in biological systems. In Drosophila larvae, for example, switches in wave propagation are required for crawling, which has been achieved in a coupled oscillator chain model by applying an extrinsic input to distinct ends of the chain [2].



Methods: In this work, we explore a different, novel mechanism for reversing the wave propagation direction in a chain of unidirectionally coupled limit cycle oscillators. Instead of requiring tuned coupling or precisely timed local inputs, changes in the global extrinsic drive to the chain of oscillators suffice to control the direction of wave propagation. To this end, we consider a chain of unidirectionally coupled Wilson-Cowan (WC) oscillators [3]. The system is driven by SE and SI, which are extrinsic inputs globally applied to all excitatory and inhibitory populations in the chain, respectively.
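A minimal sketch of such a chain is given below; the sigmoid parameters follow the classic oscillatory regime of the WC model [3], while the coupling strength and chain length are illustrative choices.

```python
import numpy as np

# Unidirectional chain of Wilson-Cowan oscillators under global drives
# SE and SI; coupling k and chain length n are illustrative.
def S(x, a, theta):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate(SE, SI, n=10, k=2.0, T=200.0, dt=0.01, tau=1.0):
    E, I = np.zeros(n), np.zeros(n)
    trace = []
    for _ in range(int(T / dt)):
        drive = np.concatenate(([0.0], k * E[:-1]))   # proximal-to-distal only
        dE = (-E + S(16 * E - 12 * I + drive + SE, 1.3, 4.0)) / tau
        dI = (-I + S(15 * E - 3 * I + SI, 2.0, 3.7)) / tau
        E, I = E + dt * dE, I + dt * dI
        trace.append(E.copy())
    return np.array(trace)

act = simulate(SE=1.25, SI=0.0)
# the sign of the phase lag between adjacent E traces gives the wave direction
```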


Results: Combining numerical simulations and bifurcation analysis, we show that waves can propagate in anterograde or retrograde directions in the unidirectional chain of WC oscillators, despite uniform coupling and extrinsic input strengths across the chain [4]. We find that the direction of propagation is controlled by a disparity between the intrinsic frequency of the proximal oscillator and that of the more distal oscillators in the chain (see figures in [4]). The transition between these two behaviors finds explanation in the proximity of the chain's operational regime to a homoclinic bifurcation point, where small changes in the input translate to strong shifts in the oscillation period.

Discussion: Lastly, we discuss wave propagation in the context of phase oscillator networks. We describe a direct relationship between the intrinsic frequency differences between the proximal and distal chain elements, and the phase shift parameter of a phase coupling function [4]. This way, we analytically extend our numerical results to a more general phase oscillator model. Our work emphasizes the functional role that the existence of a homoclinic bifurcation plays for activity propagation in neural systems. The ability of this mechanism to operate on time scales as fast as the neural activity itself suggests that it could dynamically emerge in a variety of biological systems.





Acknowledgements
This work was funded by the National Institutes of Health (NIDDK Grant No. DK079902 and No. DK117824) and the National Science Foundation (OAC Grant No. 1931372).
References
[1] Kopell, N., & Ermentrout, G. B. (2003). The Handbook of Brain Theory and Neural Networks.
[2] Gjorgjieva, J., Berni, J., Evers, J. F., & Eglen, S. J. (2013). Frontiers in Computational Neuroscience, 7, 24.
[3] Wilson, H. R., & Cowan, J. D. (1972). Biophysical Journal, 12(1), 1-24.
[4] Elisha, G., Gast, R., Halder, S., Solla, S. A., Kahrilas, P. J., Pandolfino, J. E., & Patankar, N. A. (2025). Physical Review Letters, 134(5), 058401.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P093: A method for generalizing validated single neuron models to arbitrary dendritic tree morphologies
Sunday July 6, 2025 17:20 - 19:20 CEST
P093 A method for generalizing validated single neuron models to arbitrary dendritic tree morphologies

Naining Ge, Linus Manubens-Gil*, Hanchuan Peng*

Institute for Brain and Intelligence, Southeast University, Nanjing, China

* Email: linus.ma.gi@gmail.com

* Email: h@braintell.org
Introduction

Single neuron models are vital for probing neuronal excitability, yet their electrophysiological properties remain tightly coupled to individual morphologies, as in databases like the Allen Cell Types [1], hindering structure-function studies. Current frameworks, such as evolutionary algorithms linking morphology to electrical parameters [2] and compartment-specific adaptations based on input resistance [3], lack scalability, raising questions about robustness when applied to the variability observed across thousands of neurons.


Methods
We introduced a method to adjust single neuron models using morphological features and to validate their generalizability. We tested whether adjusting membrane conductance proportionally to dendritic surface ratios in thousands of single neuron morphologies enables robust generalization of electrophysiological features across morphologies. We validated generalization via two simulation phases: (1) each Allen-fitted model, and (2) each generalized model adapted to the remaining same-species morphologies. We compared electrophysiological features from the Allen-fitted models and from simulation phases (1) and (2) against experimental data. We then used an MLP to further refine parameters using morphological features.
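The core adjustment can be sketched as a conservation of total dendritic conductance (specific conductance times surface area) when transferring a model to a new morphology; names and values below are illustrative placeholders.

```python
# Surface-ratio adjustment: total dendritic passive conductance is
# conserved when transferring a fitted model to a new morphology.
def generalize_conductances(g_fitted, area_fitted, area_target):
    """Scale specific membrane conductances (S/cm^2) so that the total
    conductance g * area is preserved on the target morphology."""
    ratio = area_fitted / area_target
    return {name: g * ratio for name, g in g_fitted.items()}

# illustrative fitted values and dendritic surface areas (um^2)
g_model = {"g_pas": 3e-5, "gbar_Ih": 1e-4}
print(generalize_conductances(g_model, area_fitted=5200.0, area_target=7400.0))
```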

Results
Total dendritic surface area emerged as a decisive morphological feature that correlates with various experimentally measured electrophysiological features (e.g., rheobase, frequency-intensity slope). Generalization using the method proposed by Arnaudon et al. [3] led to artifactual firing properties in a large subset of the tested morphologies. When we generalized models by normalizing total dendritic passive conductance, models showed responses within experimental ranges, demonstrating good biological fidelity. MLP-based prediction reached a 15% mean absolute error in the prediction of model parameter sets.

Discussion
Our results suggest a promising path towards generalization of validated single neuron models to arbitrary morphologies within a defined electrophysiological cell type. By adapting existing validated models to a broad range of single neuron morphologies, our method offers a framework for large-scale studies of structure-function relationships in neurons and establishes a foundation for optimization of multi-scale neural networks.





Acknowledgements
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 32350410413 awarded to LMG.
References
[1] https://doi.org/10.1038/s41467-017-02718-3
[2] https://doi.org/10.1016/j.patter.2023.100855

[3] https://doi.org/10.1016/j.isci.2023.108222
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P094: Two coupled networks with a tricritical phase boundary between awake, unconscious and dead states capture cortical spontaneous activity patterns
Sunday July 6, 2025 17:20 - 19:20 CEST
P094 Two coupled networks with a tricritical phase boundary between awake, unconscious and dead states capture cortical spontaneous activity patterns


Maryam Ghorbani1,2*, Negar Jalili Mallak3, Mayank R. Mehta4,5,6
1Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
2Rayan Center for Neuroscience and Behavior, Ferdowsi University of Mashhad, Mashhad, Iran
3School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ
4UCLA, W.M. Keck Center for Neurophysics, Department of Physics and Astronomy, Los Angeles
5UCLA, Department of Neurology, Los Angeles, CA, United States of America
6UCLA, Department of Electrical and Computer Engineering, Los Angeles
*Email: maryamgh@um.ac.ir



Introduction
A major goal in systems neuroscience is to develop biophysical yet minimal theories that can explain diverse aspects of in vivo data accurately to reveal the underlying mechanisms. Under a variety of conditions, cortical activity shows spontaneous Up- and Down-state (UDS) fluctuations (1, 2). They are synchronous across vast neural ensembles, yet quite noisy, with highly variable amplitudes and durations (3). Here we tested the hypothesis that this complex pattern can be captured by just two weakly coupled, noiseless, excitatory-inhibitory (E-I) networks.
Methods
The model consisted of two mean-field E-I networks, with recurrent, long-range excitatory connections. The LFP and single unit responses were measured from various parts of the parietal and frontal cortices of 8 naturally resting rats using tetrodes. Parietal cortical LFP in anesthetized mice was measured from 116 animals from the deeper parts of the neocortex. The animals were anesthetized using urethane only once during this recording session.
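A minimal sketch of the model class, two rate-based E-I networks coupled by long-range excitation and slowed by an adaptation variable, is given below; the functional forms and constants are illustrative, not the fitted model.

```python
import numpy as np

# Two weakly coupled mean-field E-I networks with slow adaptation; the two
# free parameters are the adaptation strength g_a and the inter-network
# coupling c. All functional forms and constants are illustrative.
def simulate(g_a=1.0, c=0.1, T=20000.0, dt=0.5):
    E, I, a = np.full(2, 0.1), np.zeros(2), np.zeros(2)
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))   # rate nonlinearity
    trace = []
    for _ in range(int(T / dt)):
        cross = c * E[::-1]                      # long-range E-to-E coupling
        E += dt / 10.0 * (-E + f(1.6 * E - 1.2 * I - g_a * a + cross + 0.3))
        I += dt / 5.0 * (-I + f(1.5 * E - 0.4))
        a += dt / 500.0 * (E - a)                # slow activity-dependent adaptation
        trace.append(E.copy())
    return np.array(trace)

uds = simulate()
# with suitable parameters, E alternates between high (Up) and low (Down) rates
```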
Results
The model could reproduce recently observed periodic versus highly variable UDS in strongly versus weakly coupled organoids, respectively. The same model could quantitatively capture the differential patterns of UDS in vivo during anesthesia and natural NREM sleep. Further, by varying just two free parameters, the strength of adaptation and of the recurrent connection between the two networks, we made 18 quantitative predictions about the complex properties of UDS. These not only matched experimental data in vivo, but could also reproduce and explain the systematic differences across electrodes and animals.
Discussion
The model revealed that the cortex remains close to the awake-UDS phase boundary in all the sleep sessions, but near the awake-UDS-dead tricritical phase boundary during anesthesia. Thus, just two weakly coupled mean-field networks, with only two biophysical parameters, can accurately capture cortical spontaneous activity patterns under a variety of conditions. This has several applications, from understanding stimulus response variability, to anesthesia and cortical state transitions between awake, asleep and unconscious states.





Acknowledgements
None
References

1. https://doi.org/10.1523/JNEUROSCI.13-08-03252.1993
2. https://doi.org/10.1523/JNEUROSCI.19-11-04595.1999
3. https://doi.org/10.1523/JNEUROSCI.0279-06.2006


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P095: Network Complexity For Whole-Brain Dynamics Estimated From fMRI Data
Sunday July 6, 2025 17:20 - 19:20 CEST
P095 Network Complexity For Whole-Brain Dynamics Estimated From fMRI Data

Matthieu Gilson*1, Gorka Zamora-López2

1Faculty of Medicine, Aix-Marseille University, Marseille, France
2Center for Brain and Cognition, University Pompeu Fabra, Barcelona, Spain

*Email: matthieu.gilson@univ-amu.fr

Introduction


The study of complex networks has shown fast growth in the past decades. In particular, the study of the brain as a network has benefited from the increasing availability of datasets, such as magnetic resonance imaging (MRI). This has generated invaluable insights about cognition, with subjects performing tasks in the scanner, as well as about its alterations, to gain a better understanding of neuropathologies.

Methods
Here we review our recent work on the estimation of effective connectivity (EC) at the whole-brain level [1]. In a nutshell, a network model can be optimized to reproduce and characterize the subject- and task-specific dynamics. This EC is further constrained by the anatomy (via the network topology), yielding a signature of the brain dynamics. Instead of using EC directly as a biomarker, we have recently switched to a network-oriented analysis based on the estimated model, after fitting to data.
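For the linear (multivariate Ornstein-Uhlenbeck) variant of such network models, the forward step of the fit, predicting covariances from a candidate EC, reduces to a Lyapunov equation, as sketched below with illustrative values; the full optimization in [1] iterates this prediction against empirical zero-lag and time-lagged fMRI covariances.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

# Forward step for a linear (OU) network model: predict covariances from
# a candidate EC matrix; the EC weights and noise covariance here are
# illustrative placeholders.
n = 4
EC = np.array([[0.0, 0.3, 0.0, 0.0],
               [0.0, 0.0, 0.4, 0.0],
               [0.0, 0.0, 0.0, 0.2],
               [0.1, 0.0, 0.0, 0.0]])   # directed weights
J = -np.eye(n) + EC                     # Jacobian with unit leakage timescale
Sigma = np.eye(n)                       # input noise covariance

Q0 = solve_continuous_lyapunov(J, -Sigma)   # solves J Q0 + Q0 J^T = -Sigma
Q1 = Q0 @ expm(J.T * 1.0)                   # lag-1 covariance, compared to data
print("model zero-lag covariance:\n", np.round(Q0, 3))
```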


Results

In a recent application, we showed how our model-based approach uncovers differences between subjects with disorders of consciousness, from coma (UWS) to minimally conscious (MCI) and controls (awake) [2]. We find that the discrimination across patient types (and controls) can be quantitatively related to measuring whether the modeled stimulation response affects the whole network of brain regions. These results can further be interpreted in terms of over-segregation for UWS just after the stimulation, but more importantly a lack of integration in the sense of propagation of the response to the whole network late after the stimulation. In other words, we obtain personalized and interpretable biomarkers based on the brain dynamics.




Discussion

This framework can be used to quantify network complexity based on in-silico stimulation of a network model whose dynamics are estimated from ongoing data (i.e., without experimental stimulation). We will also relate this approach to recent work based on the statistical physics of out-of-equilibrium dynamical systems (related to time reversibility), which can also be interpreted in terms of network complexity [3].





Acknowledgements
MG received support from the French government under the France 2030 investment plan, under the agreement Chaire de Professeur Junior (ANR-22-CPJ2-0020-01) and as part of the Initiative d’Excellence d’Aix-Marseille Université – A*MIDEX (AMX-22-CPJ-01).

References

[1] https://doi.org/10.1162/netn_a_00117
[2] https://doi.org/10.1002/hbm.26386
[3] https://doi.org/10.1103/PhysRevE.107.024121
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P096: Coding and information processing with firing threshold adaptation near criticality in E-I networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P096 Coding and information processing with firing threshold adaptation near criticality in E-I networks

Mauricio Girardi-Schappo*1, Leonard Maler2, André Longtin3


1Departamento de Física, Universidade Federal de Santa Catarina, Florianopolis SC 88040-900, Brazil

2Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa ON K1H 8M5, Canada

3Department of Physics, University of Ottawa, Ottawa, ON K1N 6N5, Canada


*Email: girardi.s@gmail.com


Introduction
The brain can encode information in output firing rates of neuronal populations or in spike patterns. Weak inputs have limited impact on output rates, which challenges rate coding as a sole explanatory mechanism for sensory processing. Spike patterns contribute to perception and memory via sparse, combinatorial codes, enhancing memory capacity and information transmission [1, 2]. Here, we compare these two forms of coding in a neural network with and without threshold adaptation of excitatory neurons, including or excluding inhibitory neurons. This extends our previous study and assesses the impact of inhibition on the coding properties of adaptive networks.
Methods
We model a recurrent excitatory network incorporating an inhibitory population of neurons, which, in line with biological evidence, acts as a stochastic process independent of immediate excitatory spikes [3-5]. Networks with and without threshold adaptation are compared using measures of pattern coding, rate coding, and mutual information [6]. We examine whether threshold adaptation maintains its functional advantages when weakly coupled inhibitory inputs are introduced. The results are analyzed in the light of self-organized (quasi-)criticality [7], and a new theory for near-critical information processing is proposed.
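A single-unit caricature of the adaptation mechanism is sketched below: the firing threshold jumps after each spike and recovers on the slow timescale discussed in the Results; all values are illustrative.

```python
import numpy as np

# Single-unit caricature of firing-threshold adaptation: the threshold
# jumps by d_theta after each spike and relaxes back on tau_theta
# (in the 100-1000 ms range); parameter values are illustrative.
rng = np.random.default_rng(4)
T, dt = 5000, 1.0                  # duration and step (ms)
tau_theta, d_theta = 300.0, 2.0    # recovery timescale (ms) and jump size
theta0, theta = 1.0, 1.0
h = 0.8                            # weak constant input
spikes = []

for _ in range(int(T / dt)):
    p_spike = 1.0 / (1.0 + np.exp(-(h - theta) / 0.1))   # soft threshold
    spike = rng.uniform() < p_spike
    theta += dt * (theta0 - theta) / tau_theta + d_theta * spike
    spikes.append(spike)

print("rate (Hz):", 1000.0 * np.mean(spikes))
```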
Results
In the limit of weak inhibition, threshold adaptation maintains its ability to enhance coding of weak inputs via firing rate variance. Adaptive networks facilitate a smooth transition from pattern to rate coding, optimizing both coding strategies. This dynamic is lost in non-adaptive networks, which require stronger inputs for pattern coding. Constant-threshold networks rely on supercritical states for pattern coding, whereas adaptation allows robust coding through near-critical dynamics. The threshold recovery timescale of 100 ms to 1000 ms is found to favor the pattern coding of weak inputs, matching experimental observations in dentate gyrus neurons [5]. However, the dynamic range of adaptive networks matches the subcritical regime of constant-threshold networks, contrary to what would be expected from the theory of self-organized criticality alone.
Discussion

Threshold adaptation is a biologically relevant mechanism that enhances weak stimulus processing by pattern coding, while keeping the capacity to perform rate coding of strong inputs. The optimal recovery timescale aligns with observations in the hippocampus and other brain regions. Adaptation improves information transmission, feature selectivity, and neural synchrony [8], supporting its role in sensory discrimination and memory tasks. Our findings reinforce the idea that weakly coupled inhibition does not disrupt threshold adaptation’s advantages, suggesting it is a robust coding mechanism across diverse neural circuits.



Acknowledgements
The authors thank financial support through NSERC grants BCPIR/493076-2017 and RGPIN/06204-2014 and the University of Ottawa’s Research Chair in Neurophysics under Grant No. 123917. M.G.-S. thanks financial support from Fundacao de Amparo a Pesquisa e Inovacao do Estado de Santa Catarina (FAPESC), Edital 21/2024 (grant n. 2024TR002507).
References
1. https://doi.org/10.1016/j.conb.2004.07.007
2. https://doi.org/10.1523/JNEUROSCI.3773-10.2011
3. https://doi.org/10.1038/s41583-019-0260-z
4. https://doi.org/10.1152/jn.00811.2015
5. https://doi.org/10.1101/2022.03.07.483263
6. https://doi.org/10.1007/s10827-007-0033-y
7. https://doi.org/10.1016/j.chaos.2022.111877
8. https://doi.org/10.1523/JNEUROSCI.4906-04.2005
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P097: Towards Optimized tACS Protocols: Combining Experimental Data and Spiking Neural Networks with STDP
Sunday July 6, 2025 17:20 - 19:20 CEST
P097 Towards Optimized tACS Protocols: Combining Experimental Data and Spiking Neural Networks with STDP

Camille Godin*1, Jean-Philippe Thivierge1,2

1School of Psychology, University of Ottawa, Ottawa, Canada.
2Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada


*Email: cgodi104@uottawa.ca
Introduction: Abnormal neuronal synchrony is linked to various pathological conditions, often manifesting as either excessive [1] or reduced oscillatory activity [2]. Thus, modulating brain oscillations through transcranial electric stimulation (TES) could help restore healthy activity. However, TES outcomes remain inconsistent [3], emphasizing the need for a deeper understanding of its interaction with neural dynamics. Transcranial alternating current stimulation (tACS), a form of oscillatory TES, allows for diverse waveforms, yet sinusoidal stimulation remains the predominant choice in both experimental and clinical settings. Optimizing stimulation parameters could improve efficacy and reduce variability in outcomes, making TES a more reliable tool.

Methods: We modeled a spiking neural network (SNN) of 1000 excitatory-inhibitory Izhikevich neurons with sparse, recurrent connectivity. We first aimed to replicate neural patterns observed in experimental data [4], where local field potential (LFP) signals were recorded from area V4 of a macaque monkey receiving sinusoidal tACS at 5, 10, 20 and 40 Hz, and SHAM. We tuned the model to match the SHAM condition (Fig 1. A), characterized by noisy delta oscillations, and then introduced external inputs to mimic experimental protocols. Next, we implemented spike-timing-dependent plasticity (STDP) on excitatory connections (Fig 1. B) and used the model to explore the effects of alternative stimulation waveforms and frequencies.
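The alternative stimulation waveforms can be generated with standard scipy.signal routines and injected as a common input current, as sketched below; amplitude and frequency are illustrative placeholders.

```python
import numpy as np
from scipy.signal import square, sawtooth

# Stimulation waveforms injected as a common input current to all
# neurons of the SNN; amplitude and frequency are illustrative.
fs, dur, f, amp = 1000.0, 2.0, 10.0, 3.0     # sample rate (Hz), s, Hz, a.u.
t = np.arange(0.0, dur, 1.0 / fs)

waveforms = {
    "sine": amp * np.sin(2 * np.pi * f * t),
    "square": amp * square(2 * np.pi * f * t),
    "neg_sawtooth": amp * sawtooth(2 * np.pi * f * t, width=0.0),  # falling ramp
}
# each waveform sample is then added to the per-neuron input I of the
# Izhikevich update, e.g. v += dt*(0.04*v**2 + 5*v + 140 - u + I + stim[k])
```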
Results: We performed a series of simulations using a baseline model tuned to SHAM (~3 Hz). The 40 Hz stimulation produced the largest relative increase in power at its respective frequency compared to SHAM (Fig 1. A). Both square and negative sawtooth waves consistently outperformed sinusoidal stimulation in increasing delta-gamma broadband power (Fig 1. C). When tracking the evolution of outward excitatory synaptic connections, it appears that square waves near 10 Hz induce the strongest synaptic changes between pre- and post-simulation, relative to the other tested shapes (Fig 1. C). Notably, the STDP model captured the harmonics observed in experimental data more accurately than the non-plastic model.
Discussion: These findings highlight the relevance of Izhikevich-based SNNs with STDP for optimizing tACS protocols and improving their therapeutic potential. While sinusoidal waveforms remain the standard in tACS, our results suggest that square and negative sawtooth waves may be more effective at enhancing low-frequency synchronous activity in populations oscillating within the delta-theta range. Additionally, square waves around 10 Hz induced stronger connectivity changes than other frequencies, aligning with experimental protocols to induce plasticity [5]. We argue that exploring diverse stimulation parameters is crucial to maximize the effectiveness of tACS for sustained network modifications and long-term effects on neural dynamics.



Figure 1. Fig 1. A) Left: SHAM condition in experiments and simulations. Right: Normalized relative power increase at four tACS frequencies. B) STDP integration in the SNN on excitatory connections, with weight distribution changes (black dot = centroid). C) Left: Changes in broadband power between baseline and inputs (no STDP). Right: Post-stimulation centroid relative to baseline, shifts across inputs.
Acknowledgements
C. C. Pack, P. Vieira and M. R. Krause.
References
1. https://doi.org/10.1016/j.clinph.2018.11.013
2. https://doi.org/10.2147/NDT.S425506
3. https://doi.org/10.1371/journal.pbio.3001973
4. https://doi.org/10.1073/pnas.1815958116

5. https://doi.org/10.3389/fncir.2023.1124221
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P098: DelGrad: Exact event-based gradients in spiking networks for training delays and weights
Sunday July 6, 2025 17:20 - 19:20 CEST
P098 DelGrad: Exact event-based gradients in spiking networks for training delays and weights

Julian Göltz*+1,2, Jimmy Weber*3, Laura Kriener*3,2,
Sebastian Billaudelle3,1, Peter Lake1, Johannes Schemmel1,
Melika Payvand$3, Mihai A. Petrovici$2

1Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
2Department of Physiology, University of Bern, Bern, Switzerland
3Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland

*Shared first authorship; $Shared senior authorship
+Email: julian.goeltz@kip.uni-heidelberg.de
Introduction
Spiking neural networks (SNNs) inherently rely on the timing of signals for representing and processing information. Incorporating trainable transmission delays, alongside synaptic weights, is crucial for shaping these temporal dynamics. While recent methods have shown the benefits of training delays and weights in terms of accuracy and memory efficiency, they rely on discrete time, approximate gradients, and full access to internal variables like membrane potentials [1]. This limits their precision, efficiency, and suitability for neuromorphic hardware due to increased memory requirements and I/O bandwidth demands.



Methods
To alleviate these issues, and building on prior work on exact gradients in SNNs [2], we propose an analytical approach for calculating exact gradients of the loss with respect to both synaptic weights and delays in an event-based fashion. The inclusion of delays emerges naturally within our proposed formalism, enriching the model's parameter search space with a temporal dimension (Fig. 1a). Our algorithm is purely based on the timing of individual spikes and does not require access to other variables such as membrane potentials.
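To illustrate the flavor of exact event-based gradients with respect to delays (as a didactic toy, not the actual DelGrad derivation for LIF dynamics), consider a simplified neuron whose membrane potential is a sum of linear ramps starting at the delayed input spike times; the output spike time then has a closed form, and so do its derivatives.

```python
import numpy as np

# Toy exact spike-time gradients: with V(t) = sum_i w_i * (t - (t_i + d_i))
# for t past all arrivals, the threshold crossing time t_out is closed-form.
# This is a didactic stand-in, not the DelGrad formalism itself.
def spike_time_and_grads(t_in, d, w, theta):
    arrival = t_in + d                 # delayed input spike times
    W = w.sum()
    t_out = (theta + np.dot(w, arrival)) / W   # V(t_out) = theta
    dt_dd = w / W                      # exact gradient w.r.t. each delay
    dt_dw = (arrival - t_out) / W      # exact gradient w.r.t. each weight
    return t_out, dt_dd, dt_dw

t_in = np.array([0.0, 1.0, 2.0])
d = np.array([0.5, 0.5, 0.5])
w = np.array([0.6, 0.8, 0.4])
print(spike_time_and_grads(t_in, d, w, theta=3.0))
```

In this toy, the gradient with respect to a delay is simply that input's relative weight, illustrating how spike times alone can carry all the information needed for the parameter update.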


Results
We investigate the impact of delays on accuracy and parameter efficiency both in ideal and hardware-aware simulations on the Yin-Yang classification task [3]. Furthermore, while previous work on learnable delays in SNNs has been mostly confined to software simulations, we demonstrate the functionality and benefits of our approach on the BrainScaleS-2 neuromorphic platform [4, Fig. 1b], successfully training on-chip-delays, and showing a good correspondence to our hardware-aware simulations (Fig. 1c,d).




Discussion
DelGrad presents an event-based framework for gradient-based co-training of weight and delay parameters, without any approximations. For the first time, we experimentally demonstrate the memory efficiency and accuracy benefits of adding delays to SNNs on noisy mixed-signal hardware. Additionally, these experiments also reveal the potential of delays for stabilizing networks against noise. DelGrad opens a new way for training SNNs with delays on neuromorphic hardware, which results in fewer required parameters, higher accuracy and ease of hardware training.




Figure 1. a Information flow in an SNN, effect of weights w and delays d on the membrane potential of a neuron, and raster plot of the activity. b Photo of the neuromorphic chip BrainScaleS-2. c Comparison of networks without (blue) and with (orange) delays, showing the benefit of delays. d Our hardware-aware simulation can be used effectively as a proxy for hardware emulation, and confirms these benefits.
Acknowledgements
This work was funded by the Manfred Stärk Foundation, the EC Horizon 2020 Framework Programme under grant agreement 945539 (HBP) and Horizon Europe grant agreement 101147319 (EBRAINS 2.0), the DFG under Germany's Excellence Strategy EXC 2181/1-390900948 (STRUCTURES Excellence Cluster), SNSF Starting Grant Project UNITE (TMSGI2-211461), and the VolkswagenStiftung under grant number 9C840.
References
[1] I. Hammouamri, et al. doi: 10.48550/arXiv.2306.17670.
[2] J. Göltz, et al. doi: 10.1038/s42256-021-00388-x.
[3] L. Kriener, et al. doi: 10.1145/3517343.3517380.
[4] C. Pehle, et al. doi: 10.3389/fnins.2022.795876.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P099: Critical Slowing Down and the Hierarchy of Neural Timescales: A Unified Framework
Sunday July 6, 2025 17:20 - 19:20 CEST
P099 Critical Slowing Down and the Hierarchy of Neural Timescales: A Unified Framework

Leonardo L. Gollo*1,2

1Brain Networks and Modelling Laboratory and The Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia.
2Instituto de Física Interdisciplinar y Sistemas Complejos, IFISC (UIB-CSIC), Palma de Mallorca, Spain

*Email: leonardo@ifisc.uib-csic.es

Introduction
Research on brain criticality often focuses on identifying phase transitions, typically assuming that brain dynamics can be described by a single control parameter [1]. However, this approach overlooks the inherent heterogeneity of neural systems. At the neuronal level, diversity in excitability gives rise to multiple percolations and transitions, leading to complex dynamical behaviors [2]. At the macroscopic level, this heterogeneity enables the brain to operate across a broad hierarchy of timescales [3], ranging from rapid neural responses to external stimuli to slower cognitive processes [4,5]. A critical open question is how the framework of brain criticality, which emphasizes phase transitions, can be reconciled with the observed hierarchy of neural timescales.
Methods
We employed a theoretical framework integrating nonlinear dynamics and criticality theory to analyze the relationship between hierarchical timescales and proximity to criticality. Specifically, we examined the role of critical slowing down, a phenomenon in which systems near a phase transition exhibit prolonged recovery times following perturbations. Using existing empirical findings on functional brain hierarchy and criticality [6,7,8], we evaluated how regions with slower timescales align with the principles of critical slowing down. Additionally, we explored how this framework supports a balance between sensitivity and stability in neural information processing [9].
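The core intuition can be reproduced in a few lines: for a linear relaxation dx/dt = -eps * x, the recovery time after a perturbation scales as 1/eps and diverges as the critical point eps -> 0 is approached, mapping distance from criticality onto intrinsic timescale (a didactic sketch, not the analysis pipeline).

```python
import numpy as np

# Critical slowing down in the simplest setting: recovery time of a
# linear relaxation diverges as the distance to criticality eps -> 0.
def recovery_time(eps, x0=1.0, dt=0.01, thresh=np.exp(-1.0)):
    """Time for a perturbation x0 to decay to x0/e under dx/dt = -eps*x."""
    x, t = x0, 0.0
    while x > thresh * x0:
        x += dt * (-eps * x)
        t += dt
    return t

for eps in [1.0, 0.3, 0.1, 0.03]:
    print(f"distance to criticality {eps}: recovery time {recovery_time(eps):.1f}")
```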
Results
Our analysis indicates that brain regions are not uniformly critical but instead positioned at varying distances from criticality. Regions with slower timescales tend to be situated closer to the critical point due to critical slowing down, while regions with faster dynamics operate in subcritical regimes. This spatiotemporal organization supports a structured coexistence of critical and subcritical dynamics, which enhances both sensitivity to external stimuli and reliable internal processing. Furthermore, this framework naturally gives rise to a hierarchy of timescales, and the coexistence of critical and subcritical dynamics enables a balance between flexibility and robustness, allowing neural systems to dynamically regulate information flow and cognitive processes [9].
Discussion
By integrating brain criticality and hierarchical timescales, our findings offer a novel perspective on neural dynamics. Instead of a uniform critical state, we propose that brain regions exist on a criticality continuum, shaped by their functional roles and temporal properties. This unified framework provides a nonlinear dynamics explanation for the brain’s timescale-based hierarchy, shedding light on its neurophysiological mechanisms. By bridging criticality and hierarchical organization, this work advances our understanding of the fundamental principles governing brain dynamics, offering a foundation for future investigations into neural computation and cognition.



Acknowledgements
We thank our colleagues and collaborators for their insightful discussions and feedback, which have enriched the development of this work. This work was supported by the Australian Research Council (ARC) Future Fellowship (FT200100942), the Ramón y Cajal Fellowship (RYC2022-035106-I), and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M/10.13039/501100011033.
References
1. https://doi.org/10.1016/j.pneurobio.2017.07.002
2. https://doi.org/10.7717/peerj.1912
3. https://doi.org/10.1371/journal.pcbi.1000209
4. https://doi.org/10.1098/rstb.2014.0165
5. https://doi.org/10.1523/JNEUROSCI.1699-24.2024
6. https://doi.org/10.1073/pnas.2208998120
7. https://doi.org/10.1371/journal.pcbi.1010919
8. https://doi.org/10.1103/PhysRevX.14.031021
9. https://doi.org/10.1098/rsif.2017.0207
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P100: Dynamic Range Revisited: Novel Methods for Accurate Characterization of Complex Response Functions
Sunday July 6, 2025 17:20 - 19:20 CEST
P100 Dynamic Range Revisited: Novel Methods for Accurate Characterization of Complex Response Functions

Jenna Richardson1, Filipe V. Torres2, Mauro Copelli2, Leonardo L. Gollo*1,3

1Brain Networks and Modelling Laboratory and The Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia.
2Departamento de Física, Centro de Ciências Exatas e da Natureza, Universidade Federal de Pernambuco, Recife, Brazil.
3Instituto de Física Interdisciplinar y Sistemas Complejos, IFISC (UIB-CSIC), Palma de Mallorca, Spain.

*Email: leonardo@ifisc.uib-csic.es

Introduction

The neuronal response function maps external stimuli to neural activity, with dynamic range quantifying the input levels producing distinguishable responses. Traditional sigmoidal response functions exhibit minimal firing rate changes at low and high inputs, with marked shifts at intermediate levels, making the 10%-90% response range a reliable dynamic range measure [1]. However, complex response functions [2-11]—such as double-sigmoid or multi-sigmoidal profiles with plateaus—challenge conventional calculations, often overestimating dynamic range. To address this, we propose a classification of response function complexity and introduce alternative dynamic range definitions for accurate quantification.
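For reference, the conventional measure referred to here is Delta = 10 log10(s90/s10), where s10 and s90 are the stimulus intensities evoking 10% and 90% of the response range. The sketch below (with illustrative response functions, not the retinal model of the Methods) computes this measure and shows how the plateau of a double-sigmoid response inflates it.

import numpy as np

def dynamic_range(stimulus, response, lo=0.10, hi=0.90):
    r_min, r_max = response.min(), response.max()
    s_lo = stimulus[np.argmax(response >= r_min + lo * (r_max - r_min))]   # 10% crossing
    s_hi = stimulus[np.argmax(response >= r_min + hi * (r_max - r_min))]   # 90% crossing
    return 10.0 * np.log10(s_hi / s_lo)

s = np.logspace(-3, 3, 1000)            # stimulus intensity
simple = s / (s + 0.1)                  # single sigmoid (Hill-like)
double = (0.5 / (1 + np.exp(-(np.log10(s) + 1.5) / 0.2))
          + 0.5 / (1 + np.exp(-(np.log10(s) - 1.5) / 0.2)))   # plateau in between

print(f"simple sigmoid: Delta = {dynamic_range(s, simple):.1f} dB")
print(f"double sigmoid: Delta = {dynamic_range(s, double):.1f} dB (inflated by the plateau)")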
Methods
We analyzed a set of previously published empirical and computational studies featuring both simple and complex response functions. Additionally, we examined a neuronal model of a mouse retinal ganglion cell with a detailed dendritic structure, capable of generating both simple-sigmoid and complex response profiles. The model incorporated two dynamical elements that modulate energy consumption, either reducing or increasing neuronal activity, leading to the emergence of double-sigmoid response functions. To refine dynamic range estimation, we developed four alternative definitions that selectively consider only discernible response variations while excluding plateaus. These methods were evaluated by comparing their performance with the conventional definition across a range of response functions.
Results
Our findings confirm that the conventional 10%-90% dynamic range definition is effective for simple response functions but often inflates the estimated range for complex profiles due to the inclusion of plateau regions. In contrast, our proposed alternative definitions successfully differentiate meaningful response regions from indistinguishable input levels. Each method produced results that aligned with conventional calculations for simple response functions while offering a more precise generalization for complex cases. Moreover, the neuronal model demonstrated that specific modifications in dendritic dynamics can induce complex response profiles, reinforcing the necessity of improved measurement approaches.
Discussion
Our study reveals the limitations of traditional dynamic range definitions in capturing neuronal response diversity. The proposed classification and alternative calculations reduce arbitrary assumptions, enhancing accuracy across neuronal systems. These methods are generalizable beyond neuroscience, applicable to fields with complex, nonlinear dynamics. Freely available computational tools promote adoption and refinement. By improving dynamic range estimation, this work enhances our understanding of complex response functions.



Acknowledgements
This work was supported by the Australian Research Council (ARC) Future Fellowship (FT200100942), the Ramón y Cajal Fellowship (RYC2022-035106-I), and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M/10.13039/501100011033.
References
1. https://doi.org/10.1371/journal.pcbi.1000402
2. https://doi.org/10.1016/S0896-6273(02)01046-2
3. https://doi.org/10.1103/PhysRevE.85.011911
4. https://doi.org/10.1016/S0378-5955(02)00293-9
5. https://doi.org/10.1021/ja209850j
6. https://doi.org/10.1006/bbrc.1999.1375
7. https://doi.org/10.1103/PhysRevE.85.040902
8. https://doi.org/10.1038/srep03222
9. https://doi.org/10.1038/s41598-023-34454-8
10. https://doi.org/10.1073/pnas.0904784106
11. https://doi.org/10.1007/978-1-4419-0194-1_10
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P101: How random connectivity shapes fluctuations of finite-size neural populations
Sunday July 6, 2025 17:20 - 19:20 CEST
P101 How random connectivity shapes fluctuations of finite-size neural populations

Nils E. Greven*1, 2, Jonas Ranft3, Tilo Schwalger1,2

1Department of Mathematics, Technische Universität Berlin, Berlin, Germany
2Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
3Institut de Biologie de l’ENS, Ecole normale supérieure, PSL University, CNRS, Paris.

*Email: greven@math.tu-berlin.de
Introduction

A fundamental problem in computational neuroscience is to understand the variability of neural population dynamics and their response to stimuli in the brain [1,2]. Mean-field models have proven useful to study the mechanisms underlying neural variability in spiking neural networks; however, previous models that describe fluctuations typically assume either infinitely large network sizes N [3] or all-to-all connectivity [4], assumptions that seem unrealistic for cortical populations. To gain insight into the case of both finite network size and non-full connectivity together, we derive here a nonlinear stochastic mean-field model for a network of spiking Poisson neurons with quenched random connectivity.

Methods
We treat the quenched disorder of the connectivity by an annealed approximation [3] that leads to a simpler fully connected network with additional independent noise in the neurons. This annealed network enables a reduction to a low-dimensional closed system of coupled Langevin equations (MF2) for the mean and variance of the neuronal membrane potentials. We compare the theory of this mesoscopic model to simulations of the underlying microscopic model. An additional comparison to previous mesoscopic models (MF1), which neglected the recurrent noise effect caused by quenched disorder, allows us to investigate and analytically understand the effects of taking quenched random connectivity and finite network size into account.
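Since the MF2 equations themselves are not reproduced in this abstract, the sketch below only illustrates the kind of numerical scheme such a model calls for: Euler-Maruyama integration of a generic pair of coupled Langevin equations for a population mean m(t) and variance v(t), with finite-size noise scaling as 1/sqrt(N). All functional forms and parameters are placeholders, not the derived MF2 terms.

import numpy as np

def f_mean(m, v): return -m + 1.5 * np.tanh(m) + 0.1 * v   # placeholder drift
def f_var(m, v):  return -2.0 * v + 0.5 * (1.0 + m**2)     # placeholder drift

N, dt, T = 1000, 1e-3, 5.0
rng = np.random.default_rng(1)
m, v = 0.1, 0.5
for _ in range(int(T / dt)):
    dW = rng.standard_normal() * np.sqrt(dt)
    m += f_mean(m, v) * dt + np.sqrt(max(v, 0.0) / N) * dW   # finite-size noise ~ 1/sqrt(N)
    v += f_var(m, v) * dt
print(f"quasi-stationary values: m = {m:.3f}, v = {v:.3f}")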

Results
The novel mesoscopic model MF2 describes the fluctuations and nonlinearities of finite-size neuronal populations well and outperforms MF1. This effect can be analytically understood as a softening of the effective nonlinearity of the population transfer function (Fig. 1A). The mesoscopic theory predicts a large effect of the connection probability (Fig. 1B) and stimulus strength on the variance of the population firing rate (Fig. 1C, D) that MF1 cannot sufficiently explain.

Discussion
In conclusion, our mesoscopic theory elucidates how disordered connectivity shapes nonlinear dynamics and fluctuations of neural populations at the mesoscopic scale, and showcases a useful mean-field method for treating non-full connectivity in finite-size spiking neural networks. In the work presented here, we investigated the effect of quenched randomness on finite networks of Poisson neurons. As an extension, we can analyze the annealed approximation for networks of integrate-and-fire neurons with reset.




Figure 1. A) The population transfer function F for MF2 (blue throughout) is flatter than for MF1 (yellow throughout), resulting in different fixed points (intersections with the black line). B) MF2 captures the dependence of the variance of the population firing rate r on the connection probability p, whereas MF1 is p-independent. C, D) The variance of r for different external drives μ differs markedly between MF1 and MF2 and across network sizes.
Acknowledgements
We are grateful to Jakob Stubenrauch for useful comments on the manuscript.
References
[1] M. M. Churchland, et al., Nat. Neurosci. 13, 369 (2010).

[2] G. Hennequin, Y. Ahmadian, D. B. Rubin, M. Lengyel, K. D. Miller, Neuron 98, 846 (2018).

[3] N. Brunel, V. Hakim, Neural Comput. 11, 1621 (1999).

[4] T. Schwalger, M. Deger, W. Gerstner, PLoS Comput. Biol. 13, e1005507 (2017).
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P102: Spontaneous emergence of slow ramping prior to decision states in a brain-constrained model of fronto-temporal cortical areas
Sunday July 6, 2025 17:20 - 19:20 CEST
P102 Spontaneous emergence of slow ramping prior to decision states in a brain-constrained model of fronto-temporal cortical areas

Nick Griffin1*, Aaron Schurger2,3, Max Garagnani1,4*
1Department of Computing, Goldsmiths - University of London, London (UK)
2Department of Psychology, Crean College of Health and Behavioral Sciences, Chapman University, Orange, CA (USA)
3Institute for Interdisciplinary Brain and Behavioral Sciences, Chapman University, Orange, CA (USA)
4 Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin (Germany)
* Corresponding authors; emails: ngrif003@gold.ac.uk, M.Garagnani@gold.ac.uk

Introduction
An ongoing debate exists between two prevailing interpretations of the pre-movement ramping neural signal known as the readiness potential (RP): the “early-” and “late-decision” accounts [1]. The former holds that the RP reflects planning and preparation for movement – a decision outcome. The latter holds that it is pre-decisional, emerging because a commitment is made only after activity reaches a threshold. We used a fully brain-constrained neural-network model of six human frontotemporal areas to investigate this issue and the cortical mechanisms underlying the emergence of the RP and spontaneous decisions to act, extending the previous study that developed this neural architecture [2].
Methods
The network was trained via neurobiologically realistic learning mechanisms to induce the formation of distributed perception-action cell assembly (CA) circuits. To replicate the experimental settings used to trigger the spontaneous emergence of volitional actions, we repeatedly reset its activity (trial start) and collected the resulting “WTs” (“wait times”: time steps elapsed between trial start and first spontaneous CA ignition) in the absence of external stimulation, with neural activity driven only by uniform white noise. We then compared model and human data at both the “behavioural” (WT distribution) and “neural activity” (RP index) level, where the simulated RP was defined simply as the total firing activity within the network’s model neurons.
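As a minimal illustration of this wait-time logic (a sketch under our own assumptions, not the six-area network), the following leaky accumulator is driven purely by noise, and a trial's WT is the first time its activity crosses an ignition threshold:

import numpy as np

def simulate_wt(threshold=1.0, leak=0.5, noise=0.45, dt=1e-3, t_max=30.0, rng=None):
    """One trial: integrate noise-driven activity until 'CA ignition'."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        x += -leak * x * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:          # ignition: the spontaneous decision to act
            return t
    return np.nan                   # no ignition within this trial

rng = np.random.default_rng(42)
wts = np.array([simulate_wt(rng=rng) for _ in range(200)])
print(f"median WT = {np.nanmedian(wts):.2f} s, ignition rate = {np.mean(np.isfinite(wts)):.2f}")

Threshold-crossing models of this kind naturally produce skewed WT distributions, in line with the late, stochastic account examined here.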
Results
We found that, for select values of the parameters, the simulated WT distribution was statistically indistinguishable from the experimentally measured one. This result was replicated in eight out of ten repeated experiments, the variability being attributed to the noise inherently present in the network. We also found that the simulated RP, displaying the characteristic non-linear buildup, could be fitted to the experimental RP, with a mean square error that was minimal for the parameter set that produced the best-fitting simulated WT distribution. Finally, and importantly, individual trials also revealed sub-threshold fluctuations in CA activity insufficient by themselves for full ignition.
Discussion
We used a six-area, deep, brain-constrained model of frontotemporal cortical areas to simulate neural and behavioural indexes of the spontaneous emergence of simple decisions to act. The noise-driven spontaneous reverberation of activity within CA circuits and their subsequent ignition were taken as model correlates of the emergence of “free” volitional action intentions and conscious decisions to move, respectively. Replicating both behavioural and brain indexes of spontaneous voluntary movements, the present computational architecture and simulation results offer a neuro-mechanistic explanation for the emergence of endogenous decisions to act in the human brain, providing further support for a late, stochastic account of the RP.



Acknowledgements
None.
References
1. Schurger, A., Hu, P. ‘Ben’, Pak, J., & Roskies, A. L. (2021). What Is the Readiness Potential? Trends in Cognitive Sciences, 25(7), 558–570. https://doi.org/10.1016/j.tics.2021.04.001
2. Garagnani, M., & Pulvermüller, F. (2013). Neuronal correlates of decisions to speak and act: Spontaneous emergence and dynamic topographies in a computational model of frontal and temporal areas. Brain and Language, 127(1), 75–85. https://doi.org/10.1016/j.bandl.2013.02.001
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P103: Developmental Changes in Circuit Dynamics during Hippocampal Memory Recall
Sunday July 6, 2025 17:20 - 19:20 CEST
P103 Developmental Changes in Circuit Dynamics during Hippocampal Memory Recall

Inês Guerreiro*1, Laurenz Muessig2, Thomas Wills2, Francesca Cacucci1

1Neuroscience, Physiology and Pharmacology, University College London, London, UK
2Cell and Developmental Biology, University College London, London, UK


*Email: ines.completo@gmail.com

Introduction

Replay of spike sequences during sharp wave ripples (SPW-Rs) in the hippocampus during sleep is believed to aid memory transfer from the hippocampus to the neocortex. The generation of SPW-Rs has been widely studied, but most studies focus on adult rats. Since hippocampal memory develops late in development [1,2,3], understanding the developmental changes in circuit dynamics during recall is key to uncovering how memory processing mechanisms mature over time.

Previous studies show that coordinated sequence replay emerges during development and that plasticity between co-firing cells has a higher threshold in pups than in adults [4].
Here, we investigate the mechanistic differences in replay in pups and adults.
Methods
We examined the development of hippocampal activity using LFP and single neuron recordings from the hippocampal CA1 area during post-run sleep in rats. Rats with ages ranging from postnatal days (P)17 to 6 months old were used in our analysis.
We first analysed differences in mean firing rates between interneurons and pyramidal cells during sleep in both pups and adults to assess developmental changes in activity patterns during replay. Next, we examined the firing patterns of identified interneurons during sharp wave ripples. By doing so, we can classify the interneuron subtypes recorded and examine their potential contributions to replay events.
Results
Preliminary results show that during post-run sleep, excitatory and inhibitory neurons in pups have higher firing rates than in adults. This contrasts with run trials, where the firing frequency of inhibitory neurons is lower in pups. Significant variability in interneuron spiking activity was also observed during both run and sleep, emphasizing the diversity of inhibitory interneurons in the CA1 region. Once the subclasses of interneurons and their behaviour during SPW-Rs are identified, one can develop a canonical model to examine how the CA1 circuit in pups and adults modulates sequence replay during SPW-Rs.


Discussion
Different types of interneurons participate in SPW-Rs and are recruited differently during replay events [5, 6]. Given their essential role in SPW-R generation, replay, and memory processing, understanding how inhibitory neuron activity differs between pups and adults during run and sleep trials is crucial. These developmental differences in interneuron dynamics may influence memory consolidation processes. This work aims to reveal how the CA1 microcircuit regulates the replay of temporally ordered memory patterns throughout development and to clarify the distinct roles of various inhibitory interneuron types in this process.





Acknowledgements
We acknowledge funding from the Wellcome Trust Senior Research Fellowship 220886/Z/20/Z (T.W.), and the European Research Council Consolidator Award DEVMEM (F.C.).
References
1. doi: 10.1038/nn0717-1033a. PMID: 27428652; PMCID: PMC5003643.
2. doi: 10.1126/science.1188224. PMID: 20558720; PMCID: PMC3543985.
3. doi: 10.1126/science.1188210. PMID: 20558721
4. doi: 10.1016/j.cub.2019.01.005
5. doi: 10.1523/JNEUROSCI.19-01-00274.1999
6. doi: 10.1523/JNEUROSCI.3962-09.2010. PMID: 20427657; PMCID: PMC3763476
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P104: AnalySim new features: Jupyter notebook versioning and CSV browser
Sunday July 6, 2025 17:20 - 19:20 CEST
P104 AnalySim new features: Jupyter notebook versioning and CSV browser

Uday Biswas1, Anca Doloc-Mihu2, Cengiz Günay*2


1Computer Science & Engineering, B Tech, National Institute of Technology, Rourkela, India
2Dept. Information Technology, Georgia Gwinnett College, Lawrenceville, Georgia, USA


*Email: cgunay@ggc.edu
Introduction

In this poster, we present updates on the development of the AnalySim science gateway for data sharing and analysis. An alpha-testing version of the gateway is currently hosted at https://analysim.tech, supported by the NSF-funded ACCESS advanced computing and data resource. The AnalySim gateway is open-source software whose source code is hosted at https://github.com/soft-eng-practicum/AnalySim. AnalySim aims to help with data sharing, data hosting for publications, interactive visualizations, collaborative research, and crowdsourced analysis. Special support is planned for datasets with many changing parameters and recorded measurements, such as those produced by neuronal parameter-search studies with large numbers of simulations. However, AnalySim is not limited to this type of data and allows running custom analysis code in interactive notebooks. Along with JavaScript notebooks provided via ObservableHQ.com, we recently added support for Jupyter notebooks using Python and the JupyterLite library.
Methods & Results
AnalySim has been a participant in the International Neuroinformatics Coordinating Facility (INCF) Google Summer of Code (GSoC) program since 2021. Participation in GSoC 2024 improved the user interface and added major new functionality. Parts of the user interface were given a more consistent visual style, and new pages and screens were added to support new functionality. In the backend, several changes were made: (1) to implement Jupyter notebooks; (2) to move from Azure to ACCESS infrastructure; (3) to move from blob storage to a PostgreSQL database; and (4) to enable versioning of each of multiple notebooks in one project and to select a default notebook as the project description. We are currently looking for testers of the gateway and soliciting feedback on the design, current features, and the future vision. In this poster, we will review existing features and introduce new ones from the ongoing development as part of GSoC 2025.
Discussion
AnalySim is developed with the vision of offering features on an interactive web platform that improves the visibility of one's research and helps the paper-review process by allowing others' analyses to be reproduced. In addition, it aims to foster collaborative research by providing access to others' public datasets and analyses, creating opportunities to ask novel questions, to guide one's research, and to start new collaborations or join existing teams. It aims to be a “social scientific environment”, where one can fork or clone existing projects to customize them, and tag or follow researchers and projects. In addition, one can filter datasets, duplicate analyses and improve them, and then publish findings via interactive visualizations. In summary, AnalySim aims to be a GitHub-like tool specialized for scientific problems, especially when datasets are large and complex, as in parameter search.


Acknowledgements
We thank INCF and GSoC for supporting AnalySim. This work used Jetstream2 at Indiana University through allocation BIO220033 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.
References
N/A
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P105: Reciprocity-Controlled Recurrent Neural Networks: Why More Feedback Isn't Always Better
Sunday July 6, 2025 17:20 - 19:20 CEST
P105 Reciprocity-Controlled Recurrent Neural Networks: Why More Feedback Isn't Always Better

Fatemeh Hadaeghi1*, Kayson Fakhar1,2, Claus C. Hilgetag1,3



1Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg University, Hamburg Center of Neuroscience, Germany.
2MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.
3Department of Health Sciences, Boston University, Boston, MA, USA.

*Email: f.hadaeghi@uke.de





Introduction
Cortical architectures are hierarchically organized and richly reciprocal; yet their connections exhibit microstructural and functional asymmetries: forward connections are primarily driving, backward connections driving and modulatory, with both showing laminar specificity. Despite this reciprocity, theoretical and experimental studies highlight a systematic avoidance of strong directed loops — an organizational principle captured by the no-strong-loop hypothesis — especially in sensory systems [1]. While such an organization may primarily prevent runaway excitation and maintain stability, its role in neural computation remains unclear. Here, we show that reciprocity fundamentally limits the computational capacity of recurrent neural networks.
Methods
We recently introduced efficient Network Reciprocity Control (NRC) algorithms designed to steer asymmetry and reciprocity in binary and weighted networks while preserving key structural properties [2]. In this work, we apply these algorithms to modulate reciprocity in recurrent neural networks (RNNs) within the reservoir computing (RC) framework [3]. We explore both binary and weighted connectivity in the reservoir layer, spanning random and biologically inspired architectures, including modular and small-world networks. We assess the computational capacity of these models by evaluating memory capacity (MC) and the quality of their internal representations, as measured by the kernel rank (KR) metric [4].
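The sketch below illustrates the two ingredients named here with generic stand-ins (a random directed reservoir rather than the NRC-controlled networks of [2]): the link reciprocity of a binary graph, and the standard delay-line memory capacity measure from reservoir computing.

import numpy as np

rng = np.random.default_rng(0)
N, p = 200, 0.1
A = (rng.random((N, N)) < p).astype(float)   # binary directed connectivity
np.fill_diagonal(A, 0.0)

# Link reciprocity: fraction of directed edges that are reciprocated.
print(f"link reciprocity = {(A * A.T).sum() / A.sum():.3f} (random graph: ~p = {p})")

# Echo-state reservoir on this graph, spectral radius scaled below 1.
W = A * rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

T, washout, k_max = 2000, 200, 30
u = rng.uniform(-1.0, 1.0, T)
X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Memory capacity: sum over delays k of the squared correlation between
# u(t - k) and a linear readout trained to recall it.
MC = 0.0
for k in range(1, k_max + 1):
    Xk, yk = X[washout:], u[washout - k:T - k]
    w = np.linalg.lstsq(Xk, yk, rcond=None)[0]
    MC += np.corrcoef(Xk @ w, yk)[0, 1] ** 2
print(f"memory capacity ~ {MC:.1f} (delays 1..{k_max}, N = {N})")

Rewiring A to raise the fraction of reciprocated links while holding density fixed, which is what the NRC algorithms do in a controlled way, is the manipulation whose effect on MC the study quantifies.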
Results
Our results show that increasing feedback — via reciprocity — degrades key computational properties of recurrent neural networks, including memory capacity and representation diversity. Across all experiments, increasing link reciprocity consistently reduced memory capacity and kernel quality, with particularly pronounced and linear declines in sparse networks. When weights, sampled from a log-normal distribution, were assigned to binary networks, stronger weights amplified these reciprocity-driven impairments. Furthermore, enforcing “strength” reciprocity (reciprocity in connection weights) caused an exponential degradation of memory and representation quality. These effects were robust across network sizes and connection densities.
Discussion

Our study explores how structural (link) and weighted (strength) reciprocity limit the computational capacity of recurrent neural networks, explaining the underrepresentation of strong reciprocal connections in cortical circuits. Across various network architectures, we show that increasing reciprocity reduces memory capacity and kernel rank, both of which are essential for complex dynamics and internal representations. This effect persists, and often worsens, for log-normal weight heterogeneities. While higher weight variability boosts performance, it does not mitigate reciprocity’s effects. Beyond neuroscience, our findings influence the initialization and training of artificial RNNs, and the design of neuromorphic architectures.



Acknowledgements

Funding of this work is gratefully acknowledged: F.H: DFG TRR169-A2, K.F: German Research Foundation (DFG)-SFB 936-178316478-A1; TRR169-A2; SPP 2041/GO 2888/2-2 and the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under grant TWCF-2022-30510. C.H: SFB 936-178316478-A1; TRR169-A2; SFB 1461/A4; SPP 1212 2041/HI 1286/7-1, the Human Brain Project, EU (SGA2, SGA3).
References


[1] https://doi.org/10.1038/34584
[2] https://doi.org/10.1101/2024.11.24.625064
[3] https://doi.org/10.1126/science.1091277
[4] https://doi.org/10.1016/j.neunet.2007.04.017
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P106: Computational and Experimental Insights into Hippocampal Slice Spiking under Extracellular Stimulation
Sunday July 6, 2025 17:20 - 19:20 CEST
P106 Computational and Experimental Insights into Hippocampal Slice Spiking under Extracellular Stimulation

Sarah Hamdi Cherif*1,2,3, Mathilde Wullen4, Steven Le Cam2, Valentine Bouet4, Jean-Marie Billard4, Jérémie Gaidamour3, Laure Buhry1, Radu Ranta2

1Université de Lorraine, CNRS, LORIA, France
2Université de Lorraine, CNRS, CRAN, France
3Université de Lorraine, CNRS, IECL, France
4 Normandie Univ, UNICAEN, CHU Caen, INSERM, CYCERON, COMETE, France

*Email: sarah.hamdi-cherif@loria.fr

Introduction

Synaptic plasticity and neuronal excitability in the hippocampus (HC) are altered in schizophrenia [1]. Multi-electrode array (MEA) recordings following a long-term potentiation (LTP) protocol revealed local field potential (LFP) variations along physiological pathways and high-frequency (HF) activity near the stimulation site [2]. To understand the effect of extracellular stimulation (ES) and explore its relationship with synaptic activity and spike generation, we combined electrophysiological recordings and computational modelling. We applied ES to hippocampal slices around the Schaffer collaterals while recording signals near CA3 pyramidal cell bodies, and we developed a computational model to aid interpretation.
Methods
Experiments: in-depth glass microelectrode recordings in CA3 of healthy HC slices. 40 pulses (0.4 ms duration, 7 s apart) at 0.2, 0.4 mA. Signals were processed and filtered above 300 Hz to isolate spiking activity.
Simulations: one multi-compartment CA3 pyramidal neuron model [3], with ES modelled using LFPy as a dipole [4], orthogonal to Schaffer collaterals. Background noise below spiking level was added to reflect environmental HC conditions. We varied the position of the stimulation along the axon and at different distances to explore spike variability. Synaptic inputs included excitatory (dendritic) and inhibitory (somatic) drive [5], generated by a variable-rate Poisson process, simulating activation of cells located closer to the ES.
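A small sketch of how such a variable-rate (inhomogeneous) Poisson input can be generated by thinning; the post-pulse rate profile is an assumed alpha-shaped transient whose timing is loosely based on the latencies reported in the Results, not the fitted drive.

import numpy as np

def rate(t_ms, t_stim=5.0, peak_hz=400.0, tau=2.0):
    """Assumed alpha-shaped post-pulse rate (Hz), peaking tau ms after the pulse."""
    dt = np.asarray(t_ms) - t_stim
    return np.where(dt > 0, peak_hz * (dt / tau) * np.exp(1.0 - dt / tau), 0.0)

def inhom_poisson(rate_fn, t_max_ms, rng, dt_grid=0.01):
    """Thinning: draw candidates at the peak rate, keep with prob rate(t)/r_max."""
    r_max = rate_fn(np.arange(0.0, t_max_ms, dt_grid)).max()
    spikes, t = [], 0.0
    while True:
        t += rng.exponential(1000.0 / r_max)    # ms to next candidate (rate in Hz)
        if t >= t_max_ms:
            return np.array(spikes)
        if rng.random() < float(rate_fn(t)) / r_max:
            spikes.append(t)

rng = np.random.default_rng(3)
trains = [inhom_poisson(rate, 50.0, rng) for _ in range(20)]
print(f"mean spikes per 50 ms trial: {np.mean([len(s) for s in trains]):.2f}")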
Results
In the experimental data, each ES pulse triggered a single spike, issued from the same cell according to tentative spike sorting [6]. Lower pulse intensity led to variable latencies. As intensity increased, spikes occurred earlier and became more synchronized (Fig. 1a).

According to our simulations (Fig. 1b), a cell activated directly by ES showed spike latencies of 0.25–4 ms, which were used to parametrize the Poisson process. The target cell, excited both by ES and by synaptic inputs, exhibited later and more dispersed latencies of 3–8 ms, closer to the experimental data, suggesting that they capture activity from further layers. Higher intensity had the same effects as in the experimental data (Fig. 1c).
Discussion
Our findings suggest that the HF activity observed in the MEA recordings results from spiking activity propagating antidromically within CA3, activating recurrent excitatory networks. We plan more recordings to confirm our findings and will use the model to reproduce LFP dynamics, focusing on synaptic activity. We also plan to simulate populations by reducing the cell models to point neurons and implementing a network parametrized with the observed spike latencies to approximate its dynamics. Ultimately, we aim to develop a comprehensive computational model of HC electrical activity and synaptic plasticity in healthy and schizophrenia mouse models [7], to better understand the mechanisms involved.



Figure 1. (a) Experimental trials at 200 µA (left) and 400 µA (right); (b) simulated membrane potentials at 100 µA (left) and 250 µA (right). Both trials and simulations last 50 ms, with stimulation starting at 5 ms. (c) Boxplots comparing spike-latency variability across low/high intensities in experiment (left) and simulation (right).
Acknowledgements
None.
References
[1] https://doi.org/10.3390/ijms22052644
[2] https://doi.org/10.12751/nncn.bc2024.244
[3] https://doi.org/10.1523/JNEUROSCI.1889-24.2025
[4] https://doi.org/10.1007/978-1-61779-170-3_8
[5] https://doi.org/10.3389/fncel.2013.00262
[6] https://doi.org/10.1088/1741-2552/acc210
[7] https://doi.org/10.1016/j.schres.2020.11.043
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P107: Towards A New Metric for Understanding Neural Population Encoding in Early Vision
Sunday July 6, 2025 17:20 - 19:20 CEST
P107 Towards A New Metric for Understanding Neural Population Encoding in Early Vision

Silviya Hasana*1, Simo Vanni1

1Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland

*Email: silviya.hasana@helsinki.fi

Introduction
Representational fidelity of vision can be evaluated using a decoding approach [1-3]; however, the method is hard to interpret and difficult to quantify. This study aims to develop a quantitative metric for vision models based on neural population spike activity. By analyzing the relationship between stimulus features, spike timing, and receptive field locations, we investigate how population spike data encode information. We apply both deterministic and probabilistic approaches to evaluate neural population encoding, and subsequently the quantitative decoding capacity for spatial stimulus information. A quantitative performance metric is fundamental for advancing functional computational vision models, such as the SMART system [4].



Methods
We used a macaque retina model with simulated ON and OFF parasol units in a 2D retinal patch. Stimuli were varied for spatial parameters, such as spatial frequency and orientation. First, we summed the spikes generated within 500 ms of grating stimulus onset for each unit separately. Then, we calculated the difference between ON- and OFF-unit spike counts and normalized the responses. To evaluate whether activation patterns aligned with the true stimulus, we binned the responses and applied a deterministic Gabor filter at different orientations. Subsequently, we plan to evaluate model performance using a Bayesian Ideal Observer, which models prior, likelihood, and posterior as a tuning curve for optimal stimulus decoding.
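A minimal sketch of the deterministic decoding step (with synthetic stand-in data, not the retina model output): correlate the binned ON-minus-OFF response map with a bank of Gabor kernels and take the best-matching orientation.

import numpy as np

def gabor(size, theta, sf=0.15, sigma=4.0):
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * sf * xr)

size, true_theta = 32, np.deg2rad(45.0)
rng = np.random.default_rng(7)
response = gabor(size, true_theta) + 0.3 * rng.standard_normal((size, size))  # stand-in map

thetas = np.deg2rad(np.arange(0.0, 180.0, 1.0))
scores = [np.abs((response * gabor(size, th)).sum()) for th in thetas]
print(f"true 45.0 deg, decoded {np.rad2deg(thetas[int(np.argmax(scores))]):.1f} deg")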


Results
Our findings showed the presence of orientation-specific patterns in neural population activity in both the deterministic and probabilistic approaches. Based on preliminary data analysis and processing in our deterministic approach, we expected a strong match between Gabor kernel predictions and true orientations for parasol ON and OFF cells. Our experiments tested 100 sweeps and obtained 100% accuracy for oblique orientation prediction, with a mean average error of 2.97 degrees. The high accuracy of the deterministic approach confirms that simple feature-based encoding mechanisms, such as Gabor filter matching, align well with neural responses in the parasol ON and OFF cells.


Discussion
As expected, the modeled retinal ganglion cell population encodes orientation in a structured manner that can be decoded based on receptive field positions in the visual field. Moving forward, we will explore a probabilistic approach by applying tuning curves through a Bayesian Ideal Observer to assess how reliably neural population spike activity encodes stimulus orientation, spatial frequency, and motion direction. The probabilistic approach will incorporate prior and likelihood to reconstruct stimulus orientation. The results will assess how the deterministic and probabilistic approaches complement each other and contribute to neural decoding, providing a quantitative metric for evaluating functional vision models.




Acknowledgements
This work has been supported by Academy of Finland grant No. 361816
References
[1] https://doi.org/10.1371/journal.pcbi.1006897
[2] https://doi.org/10.1038/s41583-021-00502-3
[3] https://doi.org/10.1038/nrn2578
[4] https://doi.org/10.1016/j.brainres.2008.04.024

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P109: From point neurons to biophysically detailed networks: A data-driven framework for multi-scale modeling of brain circuits
Sunday July 6, 2025 17:20 - 19:20 CEST
P109 From point neurons to biophysically detailed networks: A data-driven framework for multi-scale modeling of brain circuits

Beatriz Herrera*1, Xiao-Ping Liu2, Shinya Ito1, Darrell Haufler1, Kael Dai1, Brian Kalmbach2, Anton Arkhipov1
1Allen Institute, Seattle WA, 98109, USA
2Allen Institute for Brain Science, Seattle WA, 98109, USA


*Email: beatriz.herrera@alleninstitute.org
Introduction
The Patch-seq technique links transcriptomics, neuron morphology, and electrophysiology in individual neurons. Recent work at the Allen Institute resulted in a comprehensive mouse and human neuron database with Patch-seq data for diverse cell types. In this study, we carry out a large-scale optimization of generalized leaky integrate-and-fire (GLIF) models on these Patch-seq data for thousands of cells. Furthermore, anticipating applications of models of diverse cell types for network simulations at multiple levels of resolution, we propose a strategy to convert point-neuron network models into detailed biophysical models.
Methods
GLIF models were obtained from the Patch-seq electrophysiology recordings, and we quantified the optimization performance for different types of current-injection stimulus, as well as compared with an earlier approach [1].
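For context, a minimal sketch of the simplest model in this family (a leaky integrate-and-fire neuron with instantaneous reset, "GLIF1" in the Allen Institute nomenclature) responding to a long-square step current; the parameters are illustrative, not values fitted to the Patch-seq database.

import numpy as np

def glif1(I_nA, dt_ms=0.05, C=0.1, g=0.01, E_L=-70.0, V_th=-50.0, V_reset=-70.0):
    """Return spike times (ms) for an injected current trace I_nA (C in nF, g in uS)."""
    V, spikes = E_L, []
    for i, I in enumerate(I_nA):
        V += dt_ms * (-g * (V - E_L) + I) / C
        if V >= V_th:
            spikes.append(i * dt_ms)
            V = V_reset
    return np.array(spikes)

dt = 0.05
t = np.arange(0.0, 1000.0, dt)                   # one 1 s sweep
I = np.where((t > 100) & (t < 900), 0.25, 0.0)   # long-square step, 0.25 nA
spike_times = glif1(I, dt_ms=dt)
print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")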
The GLIF-to-biophysical conversion strategy involved (1) mapping each GLIF neuron to a biophysical neuron model; (2) replacing GLIF network parameters with corresponding biophysical parameters [2]; (3) estimating conversion factors to translate current-based synaptic weights into conductance-based weights for each source-target pair; and (4) constructing the biophysical network model using the scaling factors from (3).
Results
We find that optimizing GLIF models using long-square step current stimuli generalizes better to noise stimuli (than vice versa). With this approach, we obtained GLIF models for a total of 6,460 cells from diverse types of mouse and human glutamatergic neurons and GABAergic interneurons [3–6].
We tested our GLIF-to-biophysical network conversion on our V1 point-neuron model [2]. We simulated responses to pre-synaptic populations and calculated synaptic weight factors to match GLIF firing rates. We built the V1 biophysical model, fine-tuning weights to align with recordings and validating against in vivo Neuropixels data.
Discussion
Our work establishes the foundation for more comprehensive simulations of brain networks. We shed light on the relationships between genes and morpho-electrophysiological features by developing models for various cell types with available transcriptomic data from Patch-Seq experiments. Furthermore, our method for transforming point-neuron network models into detailed biophysical models will aid in developing and optimizing such complex models, as point-neuron networks are less computationally intensive and simpler to optimize for reproducing experimental data.



Acknowledgements
We thank the founder of the Allen Institute, Paul G. Allen, for his vision, encouragement, and support. This work was supported by the National Institutes of Health (NIH) under the following award nos.: NIBIB R01EB029813, NINDS R01NS122742 and U24NS124001, and NIMH U01MH130907. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
References
1. Teeter C, Iyer R, Menon V, et al. Nature Communications 2018; 9:
2. Billeh YN, Cai B, Gratiy SL, et al. Neuron 2020; 106:388-403.e18
3. Berg J, Sorensen SA, Ting JT, et al. Nature 2021; 598:151–158
4. Gouwens NW, Sorensen SA, Baftizadeh F, et al. Cell 2020; 183:935-953.e19
5. Chartrand T, Dalley R, Close J, et al. Science 2023; 382:eadf0805
6. Lee BR, Dalley R, Miller JA, et al. Science 2023; 382:eadf6484
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P110: What is membrane electrical excitability?
Sunday July 6, 2025 17:20 - 19:20 CEST
P110 What is membrane electrical excitability?

Lionardo Truqui2, Hector Chaparro-Reza1, Vania Victoria Villegas1, Marco Arieli Herrera-Valdez1*

1Dynamics, Biophysics, and Systems Physiology Laboratory, Universidad Nacional Autónoma de México.
2Graduate Program in Mathematics, Universidad Nacional Autónoma de México


*Email: marcoh@ciencias.unam.mx



Neuronal excitability is a phenomenon that is understood in different ways. A neuron may be regarded as more excitable than another if it responds to a stimulus with more action potentials within a fixed period of time. Another way to think about how excitable a neuron is could be to consider the delay with which it starts to respond to a given stimulus. We use the simplest, 2-dimensional, biophysical model of neuronal membrane potential, based on two transmembrane currents carried by sodium and potassium ions, similar to the Morris-Lecar model [4] but without a leak current [1], to study the conditions that should be satisfied by an excitable system and to provide a formal definition of electrical excitability. The model consists of only two currents, a Na and a K current, as small currents are not necessary to generate action potentials [3].

We first establish the notion that a model based on autonomous evolution rules is associated with a family of dynamical systems. For instance, if the parameter representing the input current in the equation for the membrane potential is varied to describe experimental data in current-clamp experiments, the family is defined at least by the input current, and its members can be associated with different sets of trajectories in phase space. We then proceed to analyse the properties of single dynamical systems by examination of their underlying vector fields. In a similar way as originally proposed by FitzHugh [2], we define a region from which all trajectories are action potentials, and call it the Excitability Region. We also propose a measure to quantify the extent to which a single dynamical system is excitable, and then proceed to compare different degrees of excitability.

Since the membrane potential of a neuron is represented by a family of dynamical systems, we then examine which of those systems are excitable under the above definition, and assess which ones are more excitable, as a function of the input current. While doing so, we explore the bifurcation structure of the model taking the input current as the bifurcation parameter, and characterize the changes in excitability induced by varying the sizes of the populations of ion channels. Having done so, we define neuronal excitability by extending our definition for a single dynamical system to the whole family in the model. We discuss how our measure of excitability behaves around attractor nodes and attractor foci, and also use our definitions to describe the I-F relations of types I, II, and III, which have been used previously to characterize excitability.
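As a hedged numerical companion to this analysis (textbook-style parameters, not values from the study), the following Morris-Lecar-like two-current system can be probed for which initial conditions lead to an action potential, a crude stand-in for membership in the Excitability Region.

import numpy as np

C, g_Na, g_K, E_Na, E_K, I_ext = 1.0, 4.4, 8.0, 120.0, -84.0, 0.0
phi, v1, v2, v3, v4 = 0.04, -1.2, 18.0, 2.0, 30.0

m_inf = lambda V: 0.5 * (1.0 + np.tanh((V - v1) / v2))   # instantaneous Na activation
w_inf = lambda V: 0.5 * (1.0 + np.tanh((V - v3) / v4))   # steady-state K gating
tau_w = lambda V: 1.0 / np.cosh((V - v3) / (2.0 * v4))

def max_voltage(V0, w0=0.01, dt=0.05, T=200.0):
    """Integrate from (V0, w0); the peak V reveals whether a spike fired."""
    V, w, V_max = V0, w0, V0
    for _ in range(int(T / dt)):
        dV = (I_ext - g_Na * m_inf(V) * (V - E_Na) - g_K * w * (V - E_K)) / C
        dw = phi * (w_inf(V) - w) / tau_w(V)
        V, w = V + dt * dV, w + dt * dw
        V_max = max(V_max, V)
    return V_max

for V0 in (-70.0, -55.0, -40.0):   # sample initial voltages at a fixed low w
    v_max = max_voltage(V0)
    print(f"V0 = {V0:6.1f} mV -> max V = {v_max:7.1f} mV "
          f"({'action potential' if v_max > 40.0 else 'subthreshold'})")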



Acknowledgements
Universidad Nacional Autónoma de México
References
[1] Av-Ron, E., Parnas, H., and Segel, L. A. (1991). A minimal biophysical model for an excitable and oscillatory neuron. Biological Cybernetics, 65(6):487–500.
[2] FitzHugh, R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophysical journal, 1(6):445–466.
[3] Herrera-Valdez, M. A. and Lega, J. (2011). Reduced models for the pacemaker dynamics of cardiac cells. Journal of Theoretical Biology, 270(1):164–176.
[4] Morris, C. and Lecar, H. (1981). Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal, 35:193–213.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P111: A model of 3D eye movements reflecting spatial orientation estimation
Sunday July 6, 2025 17:20 - 19:20 CEST
P111 A model of 3D eye movements reflecting spatial orientation estimation

Yusuke Shinji1, Yutaka Hirata*2,3,4

1. Dept. Computer Science, Chubu Univ. Graduate School of Engineering, Kasugai, Japan
2. Dept. AI and Robotics, Chubu Univ. College of Science and Engineering, Kasugai, Japan
3. Center for mathematical science and artificial intelligence, Chubu Univ., Kasugai, Japan
4. Chubu University Academy of Emerging Sciences, Chubu Univ., Kasugai, Japan

*Email: yutaka@isc.chubu.ac.jp



Introduction
Spatial orientation (SO) refers to the estimated self-motion state, formed by integrating multiple sensory inputs. Accurate SO formation is crucial for animals to navigate safely through their environment. However, errors in SO estimation can occur, leading to spatial disorientation (SDO). A typical example of SDO is the somatogravic illusion, in which a forward linear acceleration is erroneously perceived as an upward tilt. The vestibulo-ocular reflex (VOR) is driven by the estimated head motion, generating counter-rotations of the eyes to stabilize vision. Thus, the VOR is a reflection of SO, particularly of head motion states. Here, we developed a 3D VOR model to elucidate the neural algorithms underlying SO formation.

Methods
The model was configured within the Kalman filter (KF) framework, which estimates hidden physical states from noisy sensor data. In this framework, active 3D head motion is the input to the sensors, while passive head motion and 3D visual motion are treated as process noise. These motions are detected by the otolith, semicircular canals, and retina, whose outputs are transmitted to the brain. The KF incorporates corresponding sensor models that generate sensory predictions, which are compared with actual sensory outputs. The resulting sensory prediction errors are used to update the estimated head motion state through the KF algorithm. The VOR eye velocity is then produced in the opposite direction to the 3D head motion estimate.
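A minimal one-dimensional sketch of this predict-compare-update loop (a toy scalar example; the authors' model applies the same algorithm to full 3D head motion):

import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.05, 0.5          # state transition, process and sensor noise variances
x_true, x_hat, P = 0.0, 0.0, 1.0   # true state, estimate, estimate variance

for _ in range(200):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))   # head velocity evolves
    z = x_true + rng.normal(0.0, np.sqrt(r))            # noisy sensory afference
    x_pred, P_pred = a * x_hat, a * P * a + q           # prediction from internal model
    K = P_pred / (P_pred + r)                           # Kalman gain
    x_hat = x_pred + K * (z - x_pred)                   # sensory prediction error drives update
    P = (1.0 - K) * P_pred
print(f"final estimate {x_hat:+.3f} vs true {x_true:+.3f} (gain K = {K:.2f})")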

Results
To evaluate the model, we first simulated the somatogravic illusion in goldfish that we recently discovered [1]. The model successfully reproduced the goldfish 3D VOR reflecting the somatogravic illusion. Specifically, in the KF, lateral linear head acceleration was misestimated as head roll tilt, resulting in a vertical VOR in goldfish, while head roll tilt motion was correctly estimated as roll tilt. Next, we simulated two representative human vestibular illusions: off-vertical axis rotation at a constant angular velocity, and post-rotatory tilt after earth-vertical axis rotation. In both cases, the model misestimated linear head acceleration, reproducing known perceptual errors in humans.

Discussion
These results suggest that our 3D VOR KF model effectively captures the neural computational mechanisms underlying SO formation from noisy sensory signals. Previous studies have demonstrated that the cerebellar nodulus and uvula play a critical role in SO formation, specifically in distinguishing head tilt against gravity from linear translational head acceleration [2]. As a next step, we will investigate how the well-characterized cerebellar neuronal circuitry and its synaptic learning rules implement the KF algorithm, utilizing our artificial cerebellum [3]. Understanding this relationship will provide insights into how the brain optimally estimates self-motion and resolves sensory ambiguities.





Acknowledgements
Supported by JST CREST (Grant Number: JPMJCR22P5) and JSPS KAKENHI (Grant Number: 24H02338)


References
1. Tadokoro, S., Shinji, Y., Yamanaka, T., Hirata, Y. (2024). Learning capabilities to resolve tilt-translation ambiguity in goldfish. Front Neurol, 15:1304496. https://doi.org/10.3389/fneur.2024.1304496
2. Laurens, J. (2022). The otolith vermis: A systems neuroscience theory of the Nodulus and Uvula. Front Neurosci, 16:886284. https://doi.org/10.3389/fnsys.2022.886284
3. Shinji, Y., Okuno, T., Hirata, Y. (2024). Artificial cerebellum on FPGA: realistic real-time cerebellar spiking neural network model capable of real-world adaptive motor control. Front Neurosci, 18:1220908. https://doi.org/10.3389/fnins.2024.1220908

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P112: Dopamine effects on firing patterns of in vitro hippocampal neurons
Sunday July 6, 2025 17:20 - 19:20 CEST
P112 Dopamine effects on firing patterns of in vitro hippocampal neurons

Huu Hoang*1, Aurelio Cortese2

1Neural Information Analysis Laboratories, ATR Institute International, Kyoto, Japan
2Computational Neuroscience Laboratories, ATR Institute International, Kyoto, Japan

*Email: hoang@atr.jp

Introduction: Dopamine plays a pivotal role in shaping hippocampal neural activity, yet its impact on network dynamics is not fully understood. We explored this by recording rat embryonic hippocampal neurons in vitro, using electrophysiological techniques, pharmacological manipulations, and spike train analysis. Our findings reveal that dopamine reduces network synchrony, broadening the range of burst dynamics—an effect absent with dopamine antagonists. This study deepens our insight into how dopamine signaling shapes functional hippocampal networks.

Methods: We cultured eight rat embryonic hippocampus samples and used MaxOne microelectrode arrays to record spiking activity from hundreds of electrodes. Baseline spikes were captured without dopamine; then dopamine was added gradually, and spikes were recorded. We assessed synchrony strength via spike coherence in 1 ms bins and examined dopamine’s effect on spike dynamics. Spike bursts (typically lasting 200-300 ms) were detected, and their similarity index was measured. Using affinity propagation on the similarity index, we identified repeating burst motifs, revealing insights into burst dynamics. We used linear mixed-effect models to statistically evaluate the influence of dopamine on the metrics of interest.
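For illustration, repeating burst motifs can be recovered by applying affinity propagation to a precomputed burst-similarity matrix, as in the sketch below (synthetic bursts stand in for the detected 200-300 ms patterns):

import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# 60 synthetic bursts drawn from 3 underlying motifs (binned spike counts)
motifs = rng.poisson(3.0, size=(3, 30))
bursts = np.vstack([m + rng.poisson(1.0, 30) for m in motifs for _ in range(20)])

S = np.corrcoef(bursts)                      # similarity index between burst vectors
ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(S)
print(f"{len(set(labels))} motif clusters found for {len(bursts)} bursts")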

Results: Our study revealed that dopamine lowers synchrony strength, enhances network modularity, and restricts connectivity within modules, while broadening burst pattern variety. At higher dopamine concentrations (300-1000 μM), burst frequency rose, yet burst similarity dropped, with repeating motifs surging 40-50% above baseline. The reduction in synchrony caused by dopamine directly lessened burst pattern similarity, shown by a robust positive correlation between synchrony and similarity changes in eight samples. This relationship disappeared in samples treated with dopamine antagonists, underscoring dopamine’s critical influence on reorganizing network dynamics and its possible role in cognitive processes.

Discussion: We investigated dopamine’s impact on cultured hippocampal neurons using high-density electrode arrays, observing a rise in burst events with pronounced synchrony across hundreds of electrodes after incrementally adding dopamine, consistent with previous studies. This setup provided detailed network-level insights with cellular and millisecond precision, showing dopamine reduced spike synchrony while increasing the number of network modules with more restricted connectivity. Such reorganization may optimize information flow for cognitive functions like memory and decision-making. Dopamine also diversified burst patterns, boosting repeating motifs and lowering burst similarity—an effect blocked by antagonists. These findings suggest dopamine enhances distinct encoding in hippocampal circuits, offering potential implications for understanding cognition and schizophrenia therapies.



Acknowledgements
This study was supported by JST ERATO (JPMJER1801, "Brain-AI hybrid").

References
Hoang H, Matsumoto N, Miyano M, Ikegaya Y, Cortese A. (2025). Dopamine-induced relaxation of spike synchrony diversifies burst patterns in cultured hippocampal networks. Neural Networks, 181:106888. doi: 10.1016/j.neunet.2024.106888.

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P113: Baseline firing rate of dopaminergic neurons modulates the dopamine response to stimuli
Sunday July 6, 2025 17:20 - 19:20 CEST
P113 Baseline firing rate of dopaminergic neurons modulates the dopamine response to stimuli

M. Duc Hoang*1, Andrea R. Hamilton2, Timothy J. Lewis1, Stephen L. Cowen3,4 and M. Leandro Heien2


1Department of Mathematics, University of California, Davis, Davis, CA, USA
2Department of Chemistry & Biochemistry, University of Arizona, Tucson, AZ, USA
3Department of Psychology, University of Arizona, Tucson, AZ, USA
4Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, USA

*Email: mdhoang@ucdavis.edu

Introduction

Electrical brain stimulation (EBS) targeting dopaminergic (DA) neurons is a valuable tool for investigating dopamine circuits and treating diseases such as Parkinson's disease [1]. However, our understanding of how the temporal structure of stimuli interacts with the firing dynamics of DA neurons to regulate dopamine release remains limited. In this study, we experimentally measure changes in dopamine concentration in response to stimulation of the medial forebrain bundle and develop a data-driven mathematical model to describe this stimulus-evoked dopamine response. Our results demonstrate that the baseline firing rate (BFR) of DA neurons prior to electrical stimulation can strongly modulate the DA response.

Methods
In this study, we use fast-scan cyclic voltammetry (FSCV) to measure changes in dopamine (DA) concentration in the nucleus accumbens (NAc) in response to stimulation of the medial forebrain bundle (MFB) in anesthetized rats [2]. We then implement a modification of the Montague et al. model [3] of DA response to electrical stimulation of DA axons in the MFB. The model includes synaptic facilitation and depression, as well as feedback inhibition through D2 autoreceptors (D2AR). We fit model parameters to the FSCV data from multiple stimulation patterns simultaneously. Importantly, we account for the unknown baseline DA levels in our parameter fits. We also validate the model with additional experimental data sets.
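Because the modified model is not written out in this abstract, the sketch below implements a generic member of the Montague-style family under stated assumptions: each pulse releases DA scaled by a fast facilitation and a slower depression variable, and clearance follows Michaelis-Menten reuptake. All constants are illustrative, not the fitted parameters.

import numpy as np

def simulate_da(stim_times_s, t_max=10.0, dt=1e-3,
                r0=0.1, vmax=4.0, km=0.2,        # release (uM/pulse) and reuptake
                tau_f=0.3, tau_d=5.0, a_f=0.3, a_d=0.1):
    t = np.arange(0.0, t_max, dt)
    da = np.zeros_like(t)
    f, d = 1.0, 1.0                               # facilitation, depression
    stim = set(np.round(np.asarray(stim_times_s) / dt).astype(int))
    for i in range(1, len(t)):
        f += dt * (1.0 - f) / tau_f               # both relax toward baseline
        d += dt * (1.0 - d) / tau_d
        release = 0.0
        if i in stim:
            release = r0 * f * d                  # per-pulse release
            f += a_f                              # pulse facilitates...
            d -= a_d * d                          # ...and depresses later release
        reuptake = vmax * da[i - 1] / (km + da[i - 1])
        da[i] = max(da[i - 1] + release - dt * reuptake, 0.0)
    return t, da

t, da = simulate_da(np.arange(1.0, 3.0, 1.0 / 20))  # 2 s of 20 Hz stimulation
print(f"peak [DA] = {da.max():.2f} uM at t = {t[np.argmax(da)]:.2f} s")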
Results
We observe a high degree of variability in the dopamine response in the NAc when the MFB is subjected to identical 20 Hz stimuli across several trials. Specifically, the peak change in DA concentration differs by ~40% between trials (Fig. 1A). Simulations of our model show a similarly large variation in peak DA concentration in response to 20 Hz stimulation when the baseline firing rate (BFR) of simulated DA neurons is varied from 0 to 6 Hz, even though the corresponding variation in baseline DA concentration is below 0.02 µM (Fig. 1B). We use phase-plane analysis to elucidate the mechanism underlying this phenomenon and describe how the phenomenon is influenced by BFR, dopamine reuptake, and D2AR inhibition.
Discussion
Our experimental and modeling results suggest that small fluctuations in baseline DA concentrations in the NAc due to changes in BFR of DA neurons, D2AR levels, or DA reuptake rates can significantly alter the DA response to MFB stimulation. Analysis of the model reveals that the underlying mechanism of this phenomenon involves the interplay of the firing rate of DA neurons, DA reuptake dynamics, and synaptic depression. These findings underscore the importance of BFR in modulating dopamine release during EBS, suggesting that BFR may influence the efficacy of EBS in treating disorders such as Parkinson’s disease, depression, and schizophrenia. This insight could inform the optimization of EBS protocols for therapeutic applications.



Figure 1. The baseline firing rate (BFR) leads to substantial variation in DA response to identical stimulation. A: The change in DA in the NAc in response to a 20 Hz periodic stimulation applied to the MFB. 3 trials are shown of the same stimulus. B: DA concentration profiles predicted by the modified Montague et al. model in response to 20 Hz stimulation for 5 different BFRs (0-6 Hz).
Acknowledgements
Funding for this project is provided by the National Institutes of Health (R01 NS123424-01). A.R.H. was funded by T32 GM008804.
References
[1] https://doi.org/10.3171/2019.4.JNS181761
[2] https://doi.org/10.1021/acschemneuro.4c00115
[3] https://doi.org/10.1523/JNEUROSCI.4279-03.2004
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P141: Intrinsic neuronal properties shape local circuit inhibition in primate prefrontal cortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P141 Intrinsic neuronal properties shape local circuit inhibition in primate prefrontal cortex

Nils A. Koch*1, Benjamin W. Corrigan2,3,4, Julio C. Martinez-Trujillo3,4,5, Anmar Khadra1,6

1Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
2Department of Biology, York University, Toronto, ON, Canada
3Department of Clinical Neurological Sciences, London Health Sciences Centre, Western University, London, ON, Canada
4Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
5Western Institute for Neuroscience, Western University, London, ON, Canada
6Department of Physiology, McGill University, Montreal, QC, Canada

*Email: nils.koch@mail.mcgill.ca
Introduction

Intrinsic neuronal properties play a key role in neuronal circuit dynamics. One such property evident during step-current stimulation is intrinsic spike frequency adaptation (I-SFA), a feature noted to be important for in vivo activity [1] and computational capabilities of neurons [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. In behaving animals, extracellular recordings exhibit extrinsic spike frequency adaptation (E-SFA) in response to sustained visual stimulation. However, the relationship between the I-SFA measured in vitro, typically in response to constant step-current pulses, and the E-SFA described in vivo during behavioral tasks, in which the inputs into a neuron are likely variable and difficult to measure, is not well characterized.
Methods
To investigate how I-SFA in neurons isolated from brain networks contributes to E-SFA during behavior, we recorded responses of macaque lateral prefrontal cortex neurons in vivo during a visually guided saccade task and in acute brain slices in vitro. Units recorded in vivo and neurons recorded in vitro were classified as broad spiking (BS) putative pyramidal cells and narrow spiking (NS) putative inhibitory interneurons based on spike width. To elucidate how in vitro I-SFA contributes to in vivo E-SFA, we bridge the gap between the in vivo and in vitro recordings with a data-driven hybrid circuit model in which NS neurons fitted to the in vitro firing behavior are driven by local BS input.
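The BS/NS split is conventionally made by thresholding the trough-to-peak width of the mean spike waveform; the sketch below assumes an illustrative ~0.4 ms cutoff, which is not stated in the abstract.

import numpy as np

def trough_to_peak_ms(waveform, fs_hz=30000.0):
    """Time from the waveform trough to the subsequent peak, in ms."""
    trough = int(np.argmin(waveform))
    peak = trough + int(np.argmax(waveform[trough:]))
    return (peak - trough) / fs_hz * 1000.0

def classify(waveforms, cutoff_ms=0.4, fs_hz=30000.0):
    widths = np.array([trough_to_peak_ms(w, fs_hz) for w in waveforms])
    return np.where(widths < cutoff_ms, "NS", "BS"), widths

# Two toy waveforms (60 samples at 30 kHz = 2 ms): a narrow and a broad spike
t = np.arange(60)
narrow = -np.exp(-((t - 20) ** 2) / 8.0) + 0.4 * np.exp(-((t - 27) ** 2) / 30.0)
broad = -np.exp(-((t - 20) ** 2) / 20.0) + 0.4 * np.exp(-((t - 38) ** 2) / 80.0)
labels, widths = classify([narrow, broad])
print(list(zip(labels, np.round(widths, 2))))   # expect [('NS', ...), ('BS', ...)]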
Results
Both BS and NS units exhibited E-SFA in vivo. In acute brain slices, both cell types displayed differing magnitudes of I-SFA but with timescales similar to E-SFA. The model NS cell responses show longer SFA than observed in vivo. However, introduction of inhibition of NS cells to the model circuit removed this discrepancy and reproduced the in vivo E-SFA, suggesting a vital role of local circuitry in dictating task-related in vivo activity. By exploring the relationship between individual neuron I-SFA and hybrid circuit model E-SFA, the contribution of I-SFA to E-SFA is uncovered. Specifically, this contribution is dependent on the timescale of I-SFA and modulates in vivo response magnitudes as well as E-SFA timescales.
Discussion
Our results indicate that both I-SFA and inhibitory circuit dynamics contribute to E-SFA in LPFC neurons during a visual task and highlight the contribution of both single neurons and network dependent computations to neural activity underlying behavior. Furthermore, the interaction between excitatory input and I-SFA demonstrates that inhibitory cortical neurons do not solely contribute to the local circuit inhibition by altering the sign of signals (i.e. from excitation to inhibition) and that the intrinsic properties of NS neurons contribute to their activity in vivo. Consequently, large models of cortical networks as well as artificial neuronal nets that emphasize network connectivity may benefit from including intrinsic neuronal properties.



Acknowledgements
This work was supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant to A.K.; Canadian Institutes of Health Research (CIHR), NSERC, Neuronex (ref. FL6GV84CKN57) and BrainsCAN grants to J.C.M.-T.; and an NSERC Postgraduate Scholarship-Doctoral Fellowship to N.A.K..
References
1. https://doi.org/10.3934/mbe.2016002
2. https://doi.org/10.1016/j.cub.2020.11.054
3. https://doi.org/10.1007/s10827-007-0044-8
4. https://doi.org/10.1523/JNEUROSCI.4795-04.2005
5. https://doi.org/10.1038/s41467-017-02453-9
6. https://doi.org/10.1523/ENEURO.0305-18.2020
7. https://doi.org/10.1016/j.conb.2013.11.012
8. https://doi.org/10.1016/j.biosystems.2022.104802
9. https://doi.org/10.1007/s00422-009-0304-y
10. https://doi.org/10.1523/JNEUROSCI.1792-08.2008
11. https://doi.org/10.1016/j.neuron.2016.09.046
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti
 