Monday, July 7
 


P001: A Recurrent Neural Network Model of Cognitive Map Development
Monday July 7, 2025 16:20 - 18:20 CEST

Marco P. Abrate*1, Tom J. Wills1, Caswell Barry1

1Department of Cell and Developmental Biology, University College London, London, UK

*Email: marco.abrate@ucl.ac.uk
Introduction
Animals navigate flexibly using an allocentric cognitive map of self-location, constructed from sequential egocentric observations [1-4]. In the hippocampal formation, spatially modulated neurons, such as place cells, head direction (HD) cells, grid cells, and boundary cells, support navigation [5-8]. The early development of these neurons is well characterised [9], but the mechanisms driving maturation and the relative timing of their emergence are unclear. We hypothesize that changes in locomotion shape the development of spatial representations. Combining behavioural analysis with a recurrent neural network (RNN), we show that movement statistics determine the development of spatial tuning, mirroring biological timelines.
Methods
Rats from post-natal day 12 (P12) to P25 [10-12] were grouped according to their movement statistics. Rodent trajectories matching these locomotion stages were simulated in a square arena using the RatInABox toolbox [13]. An RNN was trained to predict upcoming visual stimuli from previous visual and vestibular inputs, mimicking the predictive coding function of biological systems [14]. The activity of the hidden units was analysed against the position and facing direction of the agent. Finally, these units were classified as place units based on their spatial information content, or as HD units based on their Rayleigh vector length and KL divergence from a uniform distribution, standard metrics for hippocampal neural recordings.
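As a concrete illustration of the classification step, the numpy sketch below computes the three metrics named above for a single unit; the binning, the toy tuning curve, and the classification thresholds are illustrative placeholders rather than the study's values.

import numpy as np

def spatial_information(rate_map, occupancy):
    """Skaggs spatial information (bits/spike) from rate and occupancy maps."""
    p = occupancy / occupancy.sum()              # occupancy probability per bin
    r_mean = np.sum(p * rate_map)
    ok = (rate_map > 0) & (p > 0)
    return np.sum(p[ok] * rate_map[ok] / r_mean * np.log2(rate_map[ok] / r_mean))

def rayleigh_vector_length(tuning, angles):
    """Mean resultant length of a head-direction tuning curve."""
    w = tuning / tuning.sum()
    return np.abs(np.sum(w * np.exp(1j * angles)))

def kl_from_uniform(tuning):
    """KL divergence (bits) between the HD tuning curve and a uniform one."""
    p = tuning / tuning.sum()
    ok = p > 0
    return np.sum(p[ok] * np.log2(p[ok] * len(p)))

# Toy example: classify one unit as an HD unit (thresholds are hypothetical).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
tuning = 1.0 + 4.0 * np.exp(np.cos(angles - np.pi / 3))
is_hd_unit = (rayleigh_vector_length(tuning, angles) > 0.2
              and kl_from_uniform(tuning) > 0.1)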
Results
Behavioural analysis revealed three distinct stages of locomotion during development with median ages P14, P15, and P21, respectively (Fig. 1a). The RNN trained on adult-like locomotion (Fig. 1b), solving the predictive task with biologically plausible inputs, showed spatially tuned units resembling hippocampal place and head direction cells (Fig. 1c). Crucially, when trained separately on simulated locomotion styles corresponding to the identified developmental stages, the model recapitulated the progressive emergence of spatial tuning observed experimentally. Specifically, spatial measures and consequently the number of units classified as place and head direction neurons steadily increased with improved locomotion (Fig. 1d).
Discussion
Our model establishes locomotion-dependent sensory sampling as a sufficient mechanism for cognitive map formation, extending predictive coding theories [3,4,15]. The RNN's ability to replicate spatial cell maturation patterns suggests that sensory-motor experience significantly shapes hippocampal spatial tuning. Furthermore, our results inform how manipulations of locomotion or sensory inputs could influence the development of spatial representations, which can then be tested in real-world experiments. Future work will directly compare the RNN's units with hippocampal neurons through representational similarity analysis, investigate what drives grid-pattern formation in our model, and examine changes in the geometry of the latent space.
Figure 1. (a) 3-d UMAP representation of rats’ movement statistics, coloured by locomotion stage. (b) Example of an agent’s trajectory (left) and a snapshot of the current visual input (right). (c) Architecture of the RNN; the latent-space units are analysed for spatial responses. (d) Trend in the number of the RNN’s units classified as place units (left), HD units (right), or place and HD units (both).
Acknowledgements
NA
References

1. https://doi.org/10.1017/S0140525X00063949
2. https://doi.org/10.1037/h0061626
3. https://doi.org/10.1016/j.tics.2018.07.006
4. https://doi.org/10.1038/nn.4650
5. https://doi.org/10.1016/0006-8993(71)90358-1
6. https://doi.org/10.1523/JNEUROSCI.10-02-00420.1990
7. https://doi.org/10.1038/nature03721
8. https://doi.org/10.1523/JNEUROSCI.1319-09.2009
9. https://doi.org/10.1002/wcs.1424
10. https://doi.org/10.1126/science.1188224
11. https://doi.org/10.1016/j.neuron.2015.05.011
12. https://doi.org/10.1016/j.cub.2019.01.005
13. https://doi.org/10.7554/eLife.85274
14. https://doi.org/10.1017/S0140525X12000477
15. https://doi.org/10.1016/j.cell.2020.10.024

Passi Perduti


P108: Bayesian Inference Across Brain Scales
Monday July 7, 2025 16:20 - 18:20 CEST

M. Hashemi*1, N. Baldy1, A. Ziaeemehr1, A. Esmaeili1, S. Petkoski1, M. Woodman1, V. Jirsa*1

1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
*Email: Meysam.hashemi@univ-amu.fr / viktor.jirsa@univ-amu.fr


Introduction

The process of inference across spatiotemporal scales is essential to identify the underlying causal mechanisms of brain computation and (dys)function. However, there remains a critical need for automated model inversion tools to estimate control (bifurcation) parameters from recordings across brain scales, ideally including uncertainty.

Methods
In this work, we attempt to bridge this gap by providing efficient and automatic Bayesian inference operating across scales. We use state-of-the-art probabilistic machine learning tools employing likelihood-based (MCMC sampling [1, 2]) and likelihood-free (a.k.a. simulation-based inference [3, 4]) approaches.
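A minimal sketch of the likelihood-free branch using the Python sbi package (a standard simulation-based-inference toolkit; whether it matches the authors' exact toolchain is an assumption, and the toy simulator and prior bounds below are placeholders).

import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta):
    """Toy stand-in: a noisy, saturating data feature of a control parameter."""
    return torch.tanh(theta) + 0.05 * torch.randn_like(theta)

prior = BoxUniform(low=-2 * torch.ones(1), high=2 * torch.ones(1))
theta = prior.sample((2000,))                     # draw parameters from prior
x = simulator(theta)                              # simulate data features

inference = SNPE(prior=prior)                     # neural posterior estimation
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_o = torch.tensor([[0.5]])                       # an "observed" data feature
samples = posterior.sample((1000,), x=x_o)        # posterior with uncertainty
print(samples.mean().item(), samples.std().item())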

Results
We demonstrate inference on the parameters and dynamics of spiking neurons, their mean-field approximation at the regional level, and brain network models. We show the benefits of incorporating prior and inference diagnostics, leveraging self-tuning Monte Carlo strategies for unbiased sampling, and deep density estimators for efficient transformations [5]. The performance of these methods is then demonstrated for causal inference in epilepsy [6], multiple sclerosis [7], focal intervention [8], healthy aging [9], and social facilitation [10].

Discussion
This work shows potential to improve hypothesis evaluation across brain scales through uncertainty quantification, and to contribute to advances in precision medicine by enhancing the predictive power of brain models.
Figure 1. Bayesian inference across brain scales. (A) Based on Bayes’ theorem, background knowledge about control parameters (expressed as a prior distribution) is combined with information from observed data (in the form of a likelihood function) to determine the posterior distribution. (B) Examples of the observed and predicted data features.
Acknowledgements
This research has received funding from the EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project) and No. 101137289 (Virtual Brain Twin Project), and from a government grant managed by the Agence Nationale de la Recherche, reference ANR-22-PESN-0012 (France 2030 program).
References

[1] https://doi.org/10.1016/j.neuroimage.2020.116839
[2] https://doi.org/10.1162/neco_a_01701
[3] https://doi.org/10.1088/2632-2153/ad6230
[4] https://doi.org/10.1101/2025.01.21.633922
[5] https://doi.org/10.1101/2024.10.25.620245
[6] https://doi.org/10.1016/j.neunet.2023.03.040
[7] https://doi.org/10.1016/j.isci.2024.110101
[8] https://doi.org/10.1101/2023.09.08.556815
[9] https://doi.org/10.1016/j.neuroimage.2023.120403
[10] https://doi.org/10.1101/2024.09.09.612006

Passi Perduti


P114: Modelling the bistable cortical dynamics of the sleep-onset period
Monday July 7, 2025 16:20 - 18:20 CEST

Zhenxing Hu*1, Manaoj Aravind1, Nathan Kutz2, Jean-Julien Aucouturier1

1Université Marie et Louis Pasteur, SUPMICROTECH, CNRS, Institut FEMTO-ST, F-25000 Besançon, France
2Department of Applied Mathematics and Electrical and Computer Engineering, University of Washington, Seattle USA


*Email: zhenxing.hu@femto-st.fr

Introduction

The sleep-onset period (SOP) exhibits dynamic, non-monotonic changes in the electroencephalogram (EEG) with high, and so far poorly understood, inter-individual variability. Computational models of the sleep regulation network have suggested that the transition to sleep can be viewed as a noisy bifurcation [1] at a saddle point determined by an underlying control signal or ‘sleep drive’. However, such models do not describe how internal control signals in the SOP can produce repeated switches between stable wake and sleep states. Hence, we propose a minimal parameterized stochastic dynamic model (Fig. 1) inspired by models of C. elegans' backward and forward motion.
Methods
We apply a data-driven embedding strategy for high-dimensional EEG time-frequency signals, interpolating the first SVD mode between wake and sleep states, paired with a parsimonious stochastic dynamical model with a quartic potential function in which one slowly varying control parameter drives the wake-to-sleep transition while exhibiting noise-driven bistability. We also provide a Markov chain Monte Carlo (MCMC) procedure for estimating the model parameters from single observations of experimental sleep EEG data.
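For intuition, an Euler-Maruyama sketch of this model class: a one-dimensional state in a quartic double-well potential whose tilt is driven by a slowly varying control parameter. The specific potential V(x) = x^4/4 - x^2/2 - c(t)x, the drift rate, and the noise level are illustrative assumptions, not the fitted quantities.

import numpy as np

def dVdx(x, c):
    return x**3 - x - c                  # gradient of the quartic potential

T, dt, sigma = 600.0, 0.01, 0.35         # duration (s), time step, noise level
n = int(T / dt)
rng = np.random.default_rng(0)
c = np.linspace(0.3, -0.3, n)            # slowly varying "sleep drive"
x = np.empty(n)
x[0] = 1.0                               # start in the wake well

for k in range(n - 1):
    x[k + 1] = (x[k] - dVdx(x[k], c[k]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())

# With moderate noise, x(t) flickers between the wells around the tipping
# point before settling in the sleep well, the SOP phenomenology targeted here.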
Results
In simulation, we found that interactions between the rate of landscape change and the noise level could reproduce a wide variety of SOP phenomenology. Moreover, using the model to analyze a pre-existing sleep EEG dataset, we found that the estimated model parameters correlate with both subjective sleep reports and objective hypnogram metrics, suggesting that the bistable characteristics of the SOP influence the characteristics of subsequent sleep.
Discussion
Our findings extend and integrate several threads of prior research on SOP dynamics and modeling. Early mechanistic frameworks of sleep-wake regulation (e.g., the two-process model [2] and “flip-flop” switching circuits [3]) established the concept of bistable control of sleep and wake states, but these models usually involve many variables and parameters, making them difficult to fit directly to EEG data. Further, our model explicitly captures SOP dynamics through stochastic dynamical systems, which effectively characterizes the continuous and stochastic nature of sleep-onset phenomena observed empirically, including intermittent reversals or “flickering” between wake-like and sleep-like states.



Figure 1. Study overview. The sleep-onset period (SOP) has a strongly bistable phenomenology, marked by a non-monotonic decrease of the EEG frequency and high inter-individual variability, seen here in three illustrative spectrograms (top). We model the bistable cortical dynamics of the SOP with a minimally parameterized stochastic dynamical system.
Acknowledgements
This work is supported by the Marie Skłodowska-Curie Actions (MSCA) Doctoral Networks (Lullabyte).
References
[1] Yang, D. P., McKenzie-Sell, L., Karanjai, A., & Robinson, P. A. (2016). Wake-sleep transition as a noisy bifurcation. Physical Review E, 94(2), 022412. https://doi.org/10.1103/PhysRevE.94.022412
[2] Borbély, A. A., Daan, S., Wirz-Justice, A., & Deboer, T. (2016). The two-process model of sleep regulation: a reappraisal. Journal of Sleep Research, 25(2), 131-143. https://doi.org/10.1111/jsr.12371
[3] Lu, J., Sherman, D., Devor, M., & Saper, C. B. (2006). A putative flip–flop switch for control of REM sleep. Nature, 441(7093), 589-594. https://doi.org/10.1038/nature04767
Passi Perduti


P115: Structural and Functional Brain Differences in Autistic Aging Using Graph Theoretic Analysis
Monday July 7, 2025 16:20 - 18:20 CEST

Dominique Hughes*1, B. Blair Braden2, Sharon Crook1

1School of Mathematical and Statistical Sciences, Arizona State University, Tempe, Arizona, United States of America
2College of Health Solutions, Arizona State University, Tempe, Arizona, United States of America

*Email: dhughe13@asu.edu

Introduction

Recent research indicates that people with autism (ASD) have an increased risk for early-onset dementia and other neurodegenerative diseases [1,2,3]. Prior research has found age-related brain differences between ASD and neurotypical (NT) populations, but the ways these differences contribute to increased risk during aging remain unclear [4,5,6]. Our work employs graph theory to analyze structural and functional brain scans from ASD and NT individuals. We use linear regression to identify brain graph measures for which the age-by-diagnosis interaction (ADI) is a significant predictor of graph measure values.

Methods
We obtained T1, diffusion, and functional MRI scans from 96 individuals aged 40-75 (n = 48 ASD, mean age = 56.4; n = 48 NT, mean age = 57.3). The TVB-UKBB and CONN data processing pipelines extract white matter tract weights and functional connectivity values, respectively, between the regions of the Regional Map 96 brain parcellation [7,8,9]. We conduct 50% consensus thresholding to remove spurious weights. Strength values are computed using the Brain Connectivity Toolbox on the structural and functional connectivity matrices [10]. We then conduct linear regression to determine whether the age-by-diagnosis interaction is a significant predictor of the strength values.
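The two analysis steps translate into a few lines of Python; the sketch below uses the Python port of the Brain Connectivity Toolbox (bctpy) and statsmodels on synthetic placeholder data. The library choice, the region selection, and the variable coding are assumptions of this sketch, not the authors' exact pipeline.

import numpy as np
import pandas as pd
import bct
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_sub, n_roi = 96, 96
strength = np.empty((n_sub, n_roi))
for s in range(n_sub):
    W = rng.random((n_roi, n_roi))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0)                         # placeholder connectivity
    strength[s] = bct.strengths_und(W)             # per-region weighted degree

df = pd.DataFrame({
    "strength": strength[:, 0],                    # one example region
    "age": rng.uniform(40, 75, n_sub),
    "diagnosis": np.repeat(["ASD", "NT"], n_sub // 2),
})
fit = smf.ols("strength ~ age * diagnosis", data=df).fit()
print(fit.pvalues["age:diagnosis[T.NT]"])          # age-by-diagnosis interaction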
Results
For the structural graphs, ADI was a significant predictor (p<0.01) for strength values for areas of the right and left prefrontal cortex. For the functional graphs, ADI was a significant predictor for strength values for areas of the right prefrontal, parahippocampal, auditory, sensory, and premotor cortices, and the left prefrontal, gustatory and visual cortices.
Discussion
ADI significantly predicted functional strengths over a range of cortices, while structural measures were more selective and varied. Strength values for the prefrontal cortex in particular were significantly predicted by ADI in both the structural and functional graphs. The difference between functional and structural results demonstrates the complexity of identifying ASD-specific aging trajectories. To better understand how these measures may relate to increased cognitive decline in ASD, future work will analyze the relationship between these graph measures and cognition measures recorded from the same 96 individuals.




Acknowledgements
We would like to acknowledge funding sources for our project: the National Institute on Aging [P30 AG072980], the National Institute of Mental Health [R01MH132746; K01MH116098], the Department of Defense [AR140105], and the Arizona Biomedical Research Commission [ADHS16-162413].
References


1. https://doi.org/10.1186/s11689-015-9125-6
2. https://doi.org/10.1002/aur.2590
3. https://doi.org/10.1177/1362361319890793
4. https://doi.org/10.1016/j.rasd.2019.03.005
5. https://doi.org/10.1002/hbm.23345
6. https://doi.org/10.1016/j.rasd.2019.02.008
7. https://doi.org/10.3389/fninf.2022.883223
8. https://doi.org/10.1089/brain.2012.0073
9. https://doi.org/10.1002/hbm.23506
10. https://doi.org/10.1016/j.neuroimage.2009.10.003


Passi Perduti


P116: Phase-locking patterns in oscillatory neural networks with diverse inhibitory populations
Monday July 7, 2025 16:20 - 18:20 CEST

Aïda Cunill1, Marina Vegué1, Gemma Huguet*1,2,3

1Department of Mathematics, Universitat Politècnica de Catalunya, Barcelona, Spain
2Institute of Mathematics Barcelona-Tech (IMTech), Universitat Politècnica de Catalunya, Barcelona, Spain
3Centre de Recerca Matemàtica, Barcelona, Spain

*Email: gemma.huguet@upc.edu

Introduction. Brain oscillations play a crucial role in cognitive processes, yet their precise function is not completely understood. The communication-through-coherence theory [1] suggests that rhythms regulate information flow between neural populations: to communicate effectively, neural populations must synchronize their rhythmic activity. Studies on gamma-frequency oscillations have shown that when the input frequency exceeds the target oscillator's natural frequency, oscillators phase-lock in an optimal phase relationship for effective communication [2,3]. Inhibitory neurons play a crucial role in modulating cortical oscillations and exhibit diverse biophysical properties. We explore theoretically how diverse inhibitory populations influence oscillatory dynamics.


Methods. We use exact mean-field models [4,5] to explore how different inhibitory populations shape cortical oscillations and influence neural communication. We consider a neural network that includes one excitatory population and two distinct inhibitory populations, with network connectivity inspired by cortical circuits. The network receives an external periodic excitatory input in the gamma frequency range, simulating the input from other oscillating neural populations. We use phase-reduction techniques to identify the phase-locked states between the input and the target population as a function of the amplitude, frequency and coherence of the inputs. We propose several factors to measure communication between neural oscillators.
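The phase-reduction logic can be made concrete with an Adler-type averaged phase equation: 1:1 phase-locked states are the stable zeros of dψ/dt = Δω + εQ(ψ), where Δω is the detuning between input and natural frequency and ε the input strength. The sinusoidal coupling function Q below is purely illustrative; in the study Q follows from the mean-field model's phase response.

import numpy as np

def locked_states(delta_omega, eps, n_grid=4000):
    """Return (phase, is_stable) pairs where delta_omega + eps*Q(psi) = 0."""
    psi = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    f = delta_omega + eps * (-np.sin(psi))   # illustrative Q(psi) = -sin(psi)
    states = []
    for k in range(n_grid):
        k2 = (k + 1) % n_grid
        if f[k] == 0 or f[k] * f[k2] < 0:    # sign change: fixed point
            states.append((float(psi[k]), bool(f[k2] < f[k])))  # negative slope = stable
    return states

# Fixed points exist only while |delta_omega| <= eps: the 1:1 locking range.
for dw in (0.2, 0.8, 1.2):
    print(dw, locked_states(dw, eps=1.0))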
Results. We have developed a theoretical framework to study the conditions for effective communication, exploring the role of different types of inhibitory neurons. We compare phase-locking and synchronization properties in networks with either a single or two distinct inhibitory populations. In a network with a single inhibitory population, communication is only effective for inputs that are faster than the natural frequency of the target oscillator. The inclusion of a second inhibitory population with slower synapses expands the 1:1 phase-locking range to both higher and lower input frequencies and improves the encoding of inputs with frequencies near the natural gamma rhythm of the target oscillator.
Discussion. Our results contribute to understanding how different types of inhibitory populations regulate the timing and coordination of neural activity, using mean-field models and mathematical analysis. We identify the role of different types of inhibition in generating and maintaining distinct phase-locking patterns, which are essential for communication between brain regions.



Acknowledgements
Work produced with the support of the grant PID-2021-122954NB-I00 funded by MCIN/AEI/10.13039/501100011033 and “ERDF: A way of making Europe”, the María de Maeztu Award for Centers and Units of Excellence in R&D (CEX2020-001084-M), and the AGAUR project 2021SGR1039.
References
1. https://doi.org/10.1016/j.neuron.2015.09.034
2. https://doi.org/10.1111/ejn.12453
3. https://doi.org/10.1371/journal.pcbi.1009342
4. https://doi.org/10.1103/PhysRevX.5.021028
5. https://doi.org/10.1371/journal.pcbi.1007019


Passi Perduti


P117: Layer- and Area-Specific Dynamics and Function in Spiking Cortical Neuronal Networks
Monday July 7, 2025 16:20 - 18:20 CEST





M. Sharif Hussainyar1*, Dong Li1, Claus C. Hilgetag1


1Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany


*Email: m.hussainyar@uke.de



Introduction
The cerebral cortex exhibits significant regional diversity, with areas varying in neuron density and the morphology of specific layers [1]. A spectrum of cortical types ranges from agranular, lacking layer 4, to granular, with a well-differentiated layer 4 [2,3]. These structural differences are relevant to cortical connectivity [4,5] and information flow. Correspondingly, different cortical areas and layers exhibit distinct dynamics and functions [6] underlying their computational roles, with faster neuronal timescales supporting sensory processing and slower dynamics in association areas [7,8]. However, how structural variations across cortical types shape these properties remains unclear.




Methods
We developed a series of spiking network models to simulate different cortical types. Each model consists of leaky integrate-and-fire neurons organized into layers preserving critical structural features, such as the excitatory-inhibitory ratio, layer-specific neuron distributions, and interlaminar connections. To compare evolutionary cortical variations, we parameterized models for three distinct exemplars: rodents, non-human primates and humans, accounting for species-specific differences in cortical organization, neuronal density, and laminar structure patterns [9]. This approach allows us to examine how structural variations shape timescales and baseline activity across cortical types and species.
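A toy version of this approach, far smaller than the study's models: a leaky integrate-and-fire network with layer-resolved connection blocks (dropping "L4" mimics an agranular type), from which an intrinsic timescale is read off the population-rate autocorrelation. All sizes, probabilities, and constants below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
layers = {"L2/3": 200, "L4": 200, "L5": 150}      # drop "L4" for agranular type
off = np.cumsum([0] + list(layers.values()))
idx = {nm: slice(off[i], off[i + 1]) for i, nm in enumerate(layers)}
N = off[-1]

# Layer-specific excitatory connection blocks (probabilities are illustrative)
W = np.zeros((N, N))
for (src, dst), p in {("L4", "L2/3"): 0.15, ("L2/3", "L5"): 0.15,
                      ("L5", "L2/3"): 0.05}.items():
    blk = rng.random((layers[dst], layers[src])) < p
    W[idx[dst], idx[src]] = 0.4 * blk

tau_m, v_th, dt, steps = 20.0, 1.0, 0.1, 5000     # ms, a.u., ms, time steps
v, rate = rng.random(N), np.zeros(steps)
for t in range(steps):
    spikes = v >= v_th
    v[spikes] = 0.0                                # reset after spiking
    drive = 0.1 * (rng.random(N) < 0.1)            # sparse external input
    v += dt / tau_m * (-v) + W @ spikes.astype(float) + drive
    rate[t] = spikes.mean()

# Intrinsic timescale: lag where the rate autocorrelation first drops below 1/e
r = rate - rate.mean()
ac = np.correlate(r, r, mode="full")[steps - 1:]
tau_est = dt * np.argmax(ac / ac[0] < 1 / np.e)
print(f"estimated intrinsic timescale: {tau_est:.1f} ms")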


Results
Fundamental dynamical properties such as timescale and baseline activity differ systematically between cortical types and layers. Granular types, exemplified by microcolumns in the visual system, exhibit shorter timescales than agranular types characteristic of association areas. These differential timescales imply functional specialization, where shorter timescales support rapid sensory processing, while longer timescales in agranular regions facilitate integrative functions requiring extended temporal windows. These findings align with experimental evidence and previous theoretical findings [6,8], and reinforce the hypothesis that structural variations shape cortical dynamics.
Discussion
Our findings confirm that structural variations shape cortical dynamics and function. The observed timescale differences between cortical types align with experimental data and support computational theories of functional specialization [8]. The cortical-type-based connectivities, along with the integrate-and-fire nature of cortical neurons, establish the foundation for area- and layer-specific cortical timescales and baseline activity. These, in turn, define the fundamental functional units by shaping how different cortical areas and layers process and integrate information from external inputs.






Acknowledgements
This work was funded in part by: SH: Landesforschungsförderung Hamburg (LFF)-FV76; DL: TRR 169-A2; CCH: Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 936, Project-ID 178316478-A1/Z3; DFG Project-ID 434434223 SFB 1461; and DFG TRR 169 (A2).

References
[1] https://doi.org/10.1007/s00429-019-01841-9
[2] https://doi.org/10.3389/fnana.2014.00165
[3] https://doi.org/10.1016/j.neuroimage.2016.04.017
[4] https://doi.org/10.1371/journal.pbio.2005346
[5] https://doi.org/10.1093/cercor/7.7.635
[6] https://doi.org/10.1073/pnas.2415695121
[7] https://doi.org/10.1073/pnas.2110274119
[8] https://www.nature.com/articles/nn.3862
[9] https://doi.org/10.1007/s00429-022-02548-0
Passi Perduti


P118: Simulating interictal epileptiform discharges in a whole-brain neural mass model with calcium-mediated bidirectional plasticity
Monday July 7, 2025 16:20 - 18:20 CEST

Mehmet Alihan Kayabas*1, Elif Köksal-Ersöz2,3, Linda-Iris Joseph Tomy1, Pascal Benquet1, Isabelle Merlet1, Fabrice Wendling1
¹Univ Rennes, INSERM, LTSI – UMR 1099, Rennes F-35000, France
²Inria Lyon Research Centre, Villeurbanne 69603, France
³CoPhy Team, Lyon Neuroscience Research Center, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Bron 69500, France
*Email: malihankayabas@gmail.com
Introduction

Whole-brain modeling of interictal epileptiform discharges offers a promising approach to optimize transcranial direct current stimulation (tDCS) protocols by identifying specific regions of the brain involved in the epileptic network [1,2]. In this study, we investigated the synaptic plasticity induced by tDCS in a whole-brain network of connected neural mass models (NMMs) [3,4], which we extended by implementing the calcium-mediated synaptic plasticity mechanisms based on our recent study [5]. We studied the impact of two local parameters, the synaptic depression (θd) and potentiation (θp) thresholds, on long-term depression (LTD) and long-term potentiation (LTP) under tDCS.

Methods
The activity of each node of the network was simulated by NMMs including excitatory and inhibitory neuronal subpopulations. The nodes are interconnected by a structural connectivity matrix from the Human Connectome Project [6]. We tuned the parameters of the NMMs to simulate interictal epileptiform discharges (IEDs), alpha-band activity (8-12 Hz), and background activity. We assumed that the electrical stimulation affects the mean membrane potential of the excitatory neuronal subpopulations. We varied the depression and potentiation threshold parameters in different subnetworks and simulated the system for 15 min for each condition. Two metrics were evaluated: functional connectivity, calculated using a non-linear correlation coefficient, and mean amplitude per channel.
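The calcium-gated rule at the heart of the plasticity extension can be sketched as follows (Graupner-Brunel-style thresholds; the rates, thresholds, and calcium trace are illustrative stand-ins for the mechanism of ref. [5]).

import numpy as np

def update_weight(w, ca, theta_d=0.4, theta_p=0.8,
                  gamma_d=0.02, gamma_p=0.05, dt=1.0):
    """One Euler step: depress for theta_d <= Ca < theta_p, potentiate above."""
    dw = (gamma_p * (1 - w) * (ca >= theta_p)
          - gamma_d * w * ((ca >= theta_d) & (ca < theta_p)))
    return float(np.clip(w + dw * dt, 0.0, 1.0))

# A calcium transient that decays through both thresholds first potentiates,
# then depresses, the synapse: bidirectional plasticity from one variable.
w, ca = 0.5, 1.2
for _ in range(200):
    w = update_weight(w, ca)
    ca *= 0.98                        # decaying calcium transient
print(round(w, 3))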
Results
Under most conditions, both the signal amplitude and node strength decreased (Fig. 1). The exception was the all_nodes_θd condition, in which the depression threshold was increased across all nodes; this reduced LTD activity, resulting in an increase in the strength of epileptic nodes. Regionally, the parietal nodes showed the most pronounced reductions, while the frontal nodes showed the smallest variations. An increase in the potentiation threshold across all nodes (all_nodes_θp condition) resulted in the largest reduction in both amplitude and strength. When both θd and θp were increased simultaneously, the decrease in the strength of epileptic nodes was even more pronounced, while increasing θp alone in the occipital nodes did not reduce epileptic node strength.
Discussion
Variation in synaptic plasticity thresholds alters whole-brain network dynamics. In nodes exhibiting alpha-band activity, decreased node strength lowers signal amplitude without changing frequency. In epileptogenic nodes, reduced node strength leads to lower IED frequency and desynchronization between two regions of the epileptogenic zone, while increased strength has the opposite effect. To date, there is no consensus in the literature on the effect of tDCS on alpha-band activity [7,8] or on IED frequency [9,10]. In future studies, we will explore our model further to elucidate the mechanisms and role of tDCS treatment in focal epilepsy.




Figure 1. (A) Percentage difference in node strength relative to basal level. (B) Percentage difference in amplitude relative to basal level. (C) Examples of LFP signals for the left lateral occipital (alpha) and precentral (epileptic) nodes before (blue) and after (red) the increase in potentiation threshold. θp: potentiation threshold; θd: depression threshold. * denotes p-value < 0.05, Kruskal-Wallis.
Acknowledgements
This project has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (No 855109).
References
1. https://doi.org/10.1093/med/9780199746545.003.0017
2. https://doi.org/10.1093/brain/awz269
3. https://doi.org/10.1016/j.softx.2024.101924
4. https://doi.org/10.1088/1741-2552/ac8fb4
5. https://doi.org/10.1371/journal.pcbi.1012666
6. https://doi.org/10.1016/j.neuroimage.2021.118543
7. https://doi.org/10.3389/fnhum.2013.00529
8. https://doi.org/10.1038/s41598-020-75861-5
9. https://doi.org/10.1016/j.brs.2016.12.005
10. https://doi.org/10.1016/j.eplepsyres.2024.107320
Passi Perduti


P119: Computational modeling of the cumulative neuroplastic effects of repeated direct current stimulation
Monday July 7, 2025 16:20 - 18:20 CEST

Linda-Iris Joseph Tomy1, Elif Köksal-Ersöz2,3, Mehmet Alihan Kayabas*1, Pascal Benquet1, Fabrice Wendling1
¹Univ Rennes, INSERM, LTSI – UMR 1099, Rennes F-35000, France
²Inria Lyon Research Centre, Villeurbanne 69603, France
³CoPhy Team, Lyon Neuroscience Research Center, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Bron 69500, France
*Email: malihankayabas@gmail.com
Introduction

Metaplasticity modulates the neuroplastic ability of neurons/synapses in order to maintain it within a functional physiological range. In conditions such as epilepsy, where neuroplasticity may evolve pathologically, this metaplastic property of synapses would also be disrupted [1]. Transcranial direct current stimulation (tDCS) is a non-invasive technique that can modulate neuroplasticity. Repeated sessions of tDCS can improve the likelihood of inducing seizure reduction in patients with refractory epilepsy [2]. The effect of tDCS on neuroplasticity has also been shown to depend on the ongoing neuronal activity and the neuroplastic properties of the stimulated brain regions.

Methods
Computational modeling [3] was used to identify ‘functional’ and ‘dysfunctional’ metaplastic conditions. The model consisted of an epileptogenic zone (EZ) connected to an irritated zone (IZ). We assumed that the potentiation threshold (θp) discriminated between the ‘functional’ and ‘dysfunctional’ metaplastic conditions. We evaluated the variation in connectivity strength by initiating the model from depressed and potentiated states for the two metaplastic conditions, for different frequencies of interictal activity from the EZ. The effect of repeated tDCS was investigated. Variations in connectivity strength for different frequencies of ongoing neuronal activity were assessed by plotting the frequency response function (FRF).
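One way to picture the FRF analysis: drive a calcium-gated plasticity rule with periodic interictal-like events at different rates and record the net change in connectivity strength; raising the potentiation threshold θp shifts the curve downward. The rule and every constant below are illustrative stand-ins, not the model used in the study.

import numpy as np

def net_weight_change(event_hz, theta_p=0.8, t_total=600.0, dt=0.01):
    """Net change in strength after t_total s of periodic calcium events."""
    w, ca = 0.5, 0.0
    for k in range(int(t_total / dt)):
        if event_hz > 0 and (k * dt) % (1.0 / event_hz) < dt:
            ca += 0.6                                  # event-driven Ca influx
        ca -= dt * ca / 0.2                            # ~200 ms calcium decay
        pot = 0.05 * (1 - w) if ca >= theta_p else 0.0
        dep = 0.02 * w if 0.4 <= ca < theta_p else 0.0
        w += dt * (pot - dep)
    return w - 0.5

freqs = [0.2, 0.5, 1.0, 2.0, 5.0]                      # event rates (Hz)
frf = [net_weight_change(f) for f in freqs]
frf_raised = [net_weight_change(f, theta_p=1.0) for f in freqs]
# frf_raised sits below frf: a downward FRF shift of the kind the study
# associates with repeated tDCS, i.e. a bias toward long-term depression.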
Results
In the ‘functional’ metaplastic condition, the connectivity strength from the EZ to the IZ was prevented from being potentiated or evolved towards depression. In contrast, in the ‘dysfunctional’ metaplastic condition, the connectivity strength tended to evolve towards potentiation. Further, a decrease in θp led to the expansion of epileptic activity in this network. Under repetitive tDCS application, we observed a downward shift in the FRF, suggesting that repetitive tDCS could promote long-term depression.
Discussion
In this study, we explored how functional and dysfunctional metaplastic conditions affect neuroplasticity in an epileptic network. The impact of varying θp to switch between these metaplastic conditions reflected the relationship between metaplasticity and epileptogenicity, as also seen in animal studies [1]. Based on the variations in the FRF observed here, it may be possible to design tDCS protocols that depress the connectivity from the EZ to other IZs. This may then improve stimulation outcomes.



Acknowledgements
This project has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (No 855109).
References
● https://doi.org/10.1371/journal.pcbi.1012666
● https://doi.org/10.1155/2017/8087401
● https://doi.org/10.1016/j.brs.2019.09.006


Passi Perduti


P120: Critical neuronal avalanches emerge from excitation-inhibition balanced spontaneous activity
Monday July 7, 2025 16:20 - 18:20 CEST

Maxime Janbon1, Mateo Amortegui1, Enrique Hansen1, Sarah Nourin1, Germán Sumbre1, Adrián Ponce-Alvarez*2,3,4


1Institut de Biologie de l’ENS (IBENS), Département de biologie, École normale supérieure, CNRS, INSERM, Université PSL, 75005 Paris, France
2Departament de Matemàtiques, Universitat Politècnica de Catalunya, 08028 Barcelona, Spain.
3Institut de Matemàtiques de la UPC - Barcelona Tech (IMTech), Barcelona, Spain.
4Centre de Recerca Matemàtica, Barcelona, Spain.


*Email: adrian.ponce@upc.edu

Introduction

Neural systems exhibit cascading activity patterns called neuronal avalanches that follow power-law statistics, a hallmark of critical systems. Theoretical models [1,2] and in vitro studies [3] suggest that the excitation-inhibition (E/I) balance is a key factor in the self-organization of criticality. However, how E and I dynamics evolve and interact during in vivo neuronal avalanches remains unclear.
Here, we investigated E and I neuron contributions to spontaneous neuronal avalanches using double-transgenic zebrafish expressing cell-type-specific fluorescence proteins and calcium indicators. Furthermore, we built a stochastic E-I network model to explore how critical avalanches depend on the E/I ratio.
Methods
We monitored spontaneous neuronal activity in the optic tectum of 10 zebrafish larvae using selective-plane illumination microscopy (SPIM). Double-transgenic larvae expressing GCaMP6f and mCherry under the Vglut2a promoter (glutamatergic) were combined with immunostaining for GABAergic and cholinergic neurons, allowing identification of excitatory (E), inhibitory (I), and cholinergic (Ch) neurons.
We modelled the collective activity of E and I neurons using a model of critical dynamics that combines stochastic Wilson-Cowan equations [1,4], spatially embedded neuronal connectivity, and a spike-to-fluorescence convolutional model. Critical avalanches arise through balanced amplification [1] at a phase transition.
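The avalanche side of the analysis reduces to a few standard steps, sketched here on a toy branching process rather than the model or the zebrafish data: binarize activity into time bins, call a run of active bins one avalanche, and estimate the size exponent (the discrete maximum-likelihood approximation of Clauset et al. is used below; the critical value m = 1 plays the role of the balanced E/I point).

import numpy as np

rng = np.random.default_rng(0)

def branching_activity(n_steps=100_000, m=1.0, drive=0.01):
    """Toy branching process: each active unit spawns Poisson(m) descendants."""
    a = np.zeros(n_steps, dtype=int)
    for t in range(n_steps - 1):
        a[t + 1] = rng.poisson(m * a[t]) + rng.poisson(drive)
    return a

def avalanche_sizes(activity):
    """Total activity of each contiguous run of nonzero time bins."""
    sizes, s = [], 0
    for a in activity:
        s += a
        if a == 0 and s > 0:
            sizes.append(s)
            s = 0
    return np.array(sizes)

sizes = avalanche_sizes(branching_activity(m=1.0))     # m = 1: critical
s = sizes[sizes >= 2]                                  # fit above s_min = 2
alpha = 1 + len(s) / np.sum(np.log(s / 1.5))           # discrete MLE approx.
print(len(sizes), "avalanches, size exponent ~", round(alpha, 2))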
Results
Our results show that spontaneous fluctuations in E and I activity influenced neuronal avalanche statistics in the zebrafish optic tectum. Neuronal avalanches approached criticality when excitatory and inhibitory activity were balanced. Notably, the model accurately captured the observed avalanche statistics and their sensitivity to E/I fluctuations around a critical point defined by balanced excitatory and inhibitory synaptic strengths. Furthermore, the model allowed us to evaluate the statistics of neuronal avalanches derived from different simulated signals, representing calcium events or spiking activity. For both signals, the model's critical exponents align with experimental findings from calcium imaging and electrophysiology [5].
Discussion
Extensive research underscores the functional benefits of E/I balance and critical dynamics. Balanced networks enhance signal amplification, response selectivity, noise reduction, stability, memory, and plasticity [6-8], while critical dynamics optimize information processing [9-11]. Here, we showed that neuronal avalanche statistics and their dependence on spontaneous E/I fluctuations in the zebrafish optic tectum align with a model reaching criticality for balanced E and I couplings. Our study provides a framework to dissect the relationship between criticality and E/I balance, by manipulating the E/I ratio in vivo. Future integration of optogenetics into the present experiments and model will further clarify this interplay.



Acknowledgements
This study was supported by the Project PID2022-137708NB-I00 funded by MICIU/AEI/10.13039/501100011033 and FEDER, UE. A. Ponce-Alvarez was supported by a Ramón y Cajal fellowship (RYC2020-029117-I) funded by MICIU/AEI/10.13039/501100011033 and “ESF Investing in your future”. G. Sumbre was supported by ERC CoG 726280.
References
1. https://doi.org/10.1371/journal.pcbi.1000846
2. https://doi.org/10.1523/JNEUROSCI.5990-11.2012
3. https://doi.org/10.1523/JNEUROSCI.4637-10.2011
4. https://doi.org/10.1371/journal.pcbi.1008884
5. https://doi.org/10.1126/sciadv.adj9303
6. https://doi.org/10.1088/0954-898X_6_2_001
7. https://doi.org/10.1126/science.274.5293.1724
8. https://doi.org/10.1016/j.neuron.2011.09.027
9. https://doi.org/10.1177/1073858412445487
10. https://doi.org/10.1523/JNEUROSCI.3864-09.2009
11. https://doi.org/10.1016/j.neuron.2018.10.045
Passi Perduti


P121: Effects of the nonlinearity, kinetics and size of the gap junction connections on the transient dynamics of coupled glial cells
Monday July 7, 2025 16:20 - 18:20 CEST

Predrag Janjic*1, Dimitar Solev1, Ljupco Kocarev1

1Research Center for Computer Science and Information Technologies, Macedonian Academy of Sciences and Arts, Skopje, North Macedonia

*Email: predrag.a.janjic@gmail.com
Introduction - The complex structure of the massive coupling among glial cells has not been fully resolved neurobiologically, preventing realistic quantitative models. Despite ongoing ultrastructural studies and illuminating published research on glia [1], statistical data on the number and size of gap junction (GJ) connections cannot yet be extracted. The nonlinear dependence of GJ conductance on the transjunctional voltage Vj, slow kinetics, and the GJ size effect [2] on junction polarization suggest a rich repertoire of transient dynamic instabilities of resting glia when invaded by spreading depolarizations. Known limitations of glial electrophysiology in situ for measuring GJ-coupled cells warrant qualifying suitable models.

Methods - We introduce a detailed point model of a coupled astrocytic cell, including several currents in the membrane kinetics and nonlinear coupling with inactivation kinetics. Using the paradigm of a single active site in a 1-d array of coupled astrocytes, the main focus was on describing the bifurcations of the resting voltage Vr in the inner cell. Timescale separation allowed simplifying assumptions that enable formulating an ODE model of a "self-coupled cell" (SCC). For stability analysis of this model, the second cell is connected to a depolarized immediate neighbour on one side and a still-quiet cell on the other side, both represented as fixed voltages Vdr and Vr. Numerical simulations were done on a connected 1-d array.
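A sketch of the self-coupled-cell idea in ODE form: one astrocyte-like compartment flanked by a depolarized neighbour (fixed Vdr) and a resting one (fixed Vr), with gap-junction conductances gated by transjunctional voltage through a Boltzmann steady state with slow first-order kinetics. All parameter values below are illustrative, not the model's.

import numpy as np
from scipy.integrate import solve_ivp

C, g_leak, E_leak = 1.0, 0.05, -80.0          # membrane capacitance, leak, mV
g_gj, V0, kV, tau_g = 0.2, 30.0, 8.0, 5000.0  # GJ strength, Boltzmann, kinetics
Vdr, Vr = -20.0, -80.0                        # fixed neighbour voltages (mV)

def g_inf(Vj):
    """Boltzmann steady-state GJ gating vs. transjunctional voltage."""
    return 1.0 / (1.0 + np.exp((abs(Vj) - V0) / kV))

def rhs(t, y):
    V, gL, gR = y                             # voltage and two GJ gates
    dV = (g_leak * (E_leak - V)
          + g_gj * gL * (Vdr - V)             # current from depolarized side
          + g_gj * gR * (Vr - V)) / C         # current from resting side
    dgL = (g_inf(Vdr - V) - gL) / tau_g       # slow inactivation kinetics
    dgR = (g_inf(Vr - V) - gR) / tau_g
    return [dV, dgL, dgR]

sol = solve_ivp(rhs, (0.0, 60_000.0), [-80.0, 1.0, 1.0], max_step=10.0)
print(f"voltage settles near {sol.y[0, -1]:.1f} mV")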
Results - We explored the stability of the SCC in the case of an altered steady-state I-V curve displaying N-shaped nonlinearity and a generically present saddle-node (S-N) structure [3]. The newly introduced RMP is markedly more depolarized. The separate N-shaped nonlinearity introduced by the coupling enriched the S-N structure under all parameter perturbations. Typical cases were the appearance of (a) a fold limit-cycle window within the range of the fold curve, accompanied by noise-induced bistability switching, or (b) a stable limit cycle in the moderate coupling-strength range. Not all of the dynamical regimes observed in the SCC survive in numerical simulations of a 1-d array, but in all cases we observed a traveling front for the corresponding parameters.
Discussion - Emerging evidence from voltage imaging suggests that astrocytes do not respond dynamically as a homogeneous compartment, displaying strong variations in depolarization between the collateral processes, or when compared to the cell body. In the case of altered I-V curves they are generically prone to multistability. We observed enriched multistability scenarios in the passive response of GJ-coupled astrocytes under very basic conditions. We believe this motivates adding a further level of biophysical detail to the GJ connections and the topology of glial networks. Such groundwork is needed to extend glial models with the advanced dynamical features of neuromodulation of their glutamate and GABA transporters and receptors.



Acknowledgements
The authors are grateful for the experimental recordings from isolated astrocytes shared by Prof. Christian Steinhäuser from the Institute of Cellular Neurosciences (IZN), School of Medicine, University of Bonn, Germany. PJ and DS were partially funded by grant R01MH125030 from the National Institute of Mental Health in the US.
References
1. Aten, S., et al. (2022). Ultrastructural view of astrocyte arborization, astrocyte-astrocyte and astrocyte-synapse contacts, intracellular vesicle-like structures, and mitochondrial network. Progress in Neurobiology, 213, 102264.
2. Wilders, R., & Jongsma, H. J. (1992). Limitations of the dual voltage clamp method in assaying conductance and kinetics of gap junction channels. Biophysical Journal, 63(4), 942-953.
3. Janjic, P., Solev, D., & Kocarev, L. (2023). Non-trivial dynamics in a model of glial membrane voltage driven by open potassium pores. Biophysical Journal, 122(8), 1470-1490.
Passi Perduti


P122: Encoding visual familiarity for navigation in a mushroom body SNN trained on ant-perspective views
Monday July 7, 2025 16:20 - 18:20 CEST

Oluwaseyi Oladipupo Jesusanmi1,2, Amany Azevedo Amin2, Paul Graham1,2, Thomas Nowotny2
1Sussex Neuroscience, University of Sussex, Brighton, United Kingdom
2Sussex AI, University of Sussex, Brighton, United Kingdom
*Email: o.jesusanmi@sussex.ac.uk

Introduction

Ants can learn long visually guided routes with limited neural resources, mediated by the mushroom body brain region acting as a familiarity detector [1,2]. In the mushroom body, low-dimensional input from sensory regions is projected into a large population of neurons, producing sparse representations of input information. These representations are learned via an anti-Hebbian process, modulated through dopaminergic learning signals. In navigation, the mushroom bodies guide ants to seek views similar to those previously learned on a foraging route. Here, we further investigate the role of mushroom bodies in ants’ visual navigation with a spiking neural network (SNN) model and 1:1 virtual recreations of ant visual experiences.
Methods
We implemented the SNN model in GeNN [3]. It has 320 visual projection neurons (VPNs), 20,000 Kenyon cells (KCs), one inhibitory feedback neuron (IFN) and one mushroom body output neuron (MBON). We used DeepLabCut to track ant trajectories in behavioural experiments, and phone-camera input to Neural Radiance Field (NeRF) and photogrammetry software for environment reconstruction. We used Isaac Sim and NVIDIA Omniverse to recreate views along ants’ movement trajectories from the perspective of the ants. We trained the SNN and comparator models (perfect memory and Infomax [4]) on these recreations. We modelled inference across all traversable areas of the environment to test each model’s ability to encode navigational information.
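Stripped of spikes, the familiarity computation the model implements looks roughly like the numpy sketch below: a sparse random VPN-to-KC expansion, winner-take-most sparsification, and an anti-Hebbian weakening of KC-to-MBON weights for views seen during training, so that the MBON responds weakly to familiar views. The VPN/KC counts match the model, but the wiring density, KC sparsity level, learning rate, and the rate-based simplification are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_vpn, n_kc, sparsity, lr = 320, 20_000, 0.05, 0.5
W_in = (rng.random((n_kc, n_vpn)) < 0.02).astype(float)   # VPN -> KC wiring
w_out = np.ones(n_kc)                                     # KC -> MBON weights

def kc_code(view):
    """Top-k sparse Kenyon-cell representation of a VPN activity vector."""
    drive = W_in @ view
    k = int(sparsity * n_kc)
    code = np.zeros(n_kc)
    code[np.argpartition(drive, -k)[-k:]] = 1.0
    return code

def train(view):
    """Anti-Hebbian update: depress output weights of co-active KCs."""
    global w_out
    w_out -= lr * kc_code(view) * w_out

def familiarity(view):
    """MBON response; lower values signal a more familiar view."""
    return float(w_out @ kc_code(view))

route = rng.random((50, n_vpn))          # stand-in for ant-perspective views
for v in route:
    train(v)
print(familiarity(route[0]), familiarity(rng.random(n_vpn)))  # low vs. high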
Results
We produced familiarity landscapes for our SNN mushroom body and comparator models, showing differences between how they encode off-route (unlearned) locations. The mushroom body model produced navigation accuracy comparable to the other models. We found that the mushroom body model activity was able to explain trajectory data in trials where ants reached the target location. We found some views resulting in high familiarity did not appear in the training set. These views have similar image statistics to images in the training set, even if the view is from a different place in the environment. We found that ant trajectory routes with higher rates of oscillation improved learning, “filling-in” more of the familiarity landscapes.
Discussion
How the mushroom body would respond across all locations in a traversable environment is not known and is normally not feasible to study. Neural recording in ants remains difficult, and there are limited methods to have an ant systematically experience an entire experimental arena. We addressed this issue via simulation of biologically plausible neural activity while having exact control of what the model sees. Visual navigation models have been compared with mushroom body models in terms of navigation accuracy, but the familiarity landscape produced by the varied models has not been compared. Our investigation provides insights into how encoding of familiarity differs and leads to accurate navigation between models.



Acknowledgements
References
1. https://doi.org/10.1016/J.CUB.2020.07.013
2. https://doi.org/10.1016/J.CUB.2020.06.030
3. https://doi.org/10.1038/srep18854
4. https://doi.org/10.1162/isal_a_00645


Passi Perduti


P123: Innovative Strategies to Balance Speed and Accuracy in P300-ERP Detection for Enhanced Online Brain-Computer Interfaces
Monday July 7, 2025 16:20 - 18:20 CEST

Javier Jiménez*1, Francisco B. Rodríguez1

1Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
*Email: javier.jimenez01@uam.es

Introduction

Brain-Computer Interfaces (BCIs) interpret signals the brain generates to control devices. These signals can be related to known Event-Related Potentials (ERPs) registered with neuroimaging methods such as electroencephalography [1]. However, ERP detection requires many trials due to its low signal-to-noise ratio [2]. This detection method leads to the well-known speed-accuracy trade-off [3], as each trial adds to the time required for evoking ERPs. We propose a methodology for analyzing this trade-off using two new measures to find the best number of trials for the required accuracy. Finally, these measures were assessed using a P300-ERP dataset [4] to explore their potential as additional early-stopping methods in future online BCI setups.
Methods
In the literature, the speed-accuracy trade-off is usually studied with BCI measures such as the Information-Transfer Rate (ITR) [5]. However, these measures combine speed and accuracy into a single number, hindering separate evaluation of a BCI's speed and accuracy. Considering the two concepts separately may interest BCI users, who could then decide whether they prefer a fast or an accurate BCI in different scenarios. This work introduces two measures, Gain and Conservation, which quantify the amount of saved time and preserved accuracy, respectively, against a baseline BCI employing a Bayesian Linear Discriminant Analysis (BLDA) classifier to detect P300s.
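For reference, the sketch below gives the standard Wolpaw ITR mentioned above, next to one plausible formalization of Gain and Conservation consistent with their description (saved time and preserved accuracy relative to the baseline); the exact definitions are the authors', so the two ratios below are an assumption.

import numpy as np

def itr_bits_per_min(P, N, T_sel):
    """Wolpaw ITR: accuracy P, N classes, T_sel seconds per selection."""
    if P >= 1.0:
        B = np.log2(N)
    elif P <= 0.0:
        B = 0.0
    else:
        B = np.log2(N) + P * np.log2(P) + (1 - P) * np.log2((1 - P) / (N - 1))
    return B * 60.0 / T_sel

def gain(t, t_baseline):
    """Fraction of time saved relative to the baseline BCI (assumed form)."""
    return 1.0 - t / t_baseline

def conservation(acc, acc_baseline):
    """Fraction of the baseline accuracy preserved (assumed form)."""
    return acc / acc_baseline

# Stopping after 10 of 20 trials: half the time (Gain = 0.5) while keeping
# ~93% of the baseline accuracy (Conservation = 0.79 / 0.85).
print(itr_bits_per_min(0.85, 6, 12.0), gain(10, 20), conservation(0.79, 0.85))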
Results
The new measures were tested on the dataset of Hoffmann et al. [4], employing a BLDA classifier to detect P300s in combination with a traditional fixed-stop strategy (stopping after a fixed number of trials) to evaluate the speed and accuracy of BCIs. Under this paradigm, the measures would be expected to follow the speed-accuracy trade-off, i.e., faster BCIs would be less accurate and vice versa, because faster BCIs employ fewer trials and therefore have access to less information, leading to worse P300 detection performance. This behaviour can be seen in Fig. 1, where the speed and accuracy of a BCI are represented by Gain and Conservation, respectively.
Discussion
The described framework proposes two measures capable of evaluating a BCI's speed and accuracy separately, in contrast with measures such as the ITR [5]. With these measures, designers and users are given a controllable way to optimize BCIs towards different goals, prioritizing one measure over the other on demand. Furthermore, these measures offer detailed insights into the behaviour of different BCIs and early-stopping strategies [3], among other applications. To conclude, these measures can be tracked during BCI operation, which represents a key future direction of this work: leveraging the speed-accuracy trade-off of BCIs online.



Figure 1. Pseudo-online evolution along different trials from Hoffmann et al. [4] of the normalized Gain and Conservation measures for a fixed-stop strategy, compared against its ITR.
Acknowledgements
This work was supported by the Predoctoral Research Grants of the Universidad Autónoma de Madrid (FPI-UAM) and by PID2023-149669NB-I00 (MCIN/AEI and ERDF – “A way of making Europe”).
References
1. https://doi.org/10.1016/0013-4694(88)90149-6
2. https://doi.org/10.1016/j.neuroimage.2010.06.048
3. https://doi.org/10.1088/1741-2560/10/3/036025
4. https://doi.org/10.1016/j.jneumeth.2007.03.005
5. https://doi.org/10.1016/S1388-2457(02)00057-3
Passi Perduti


P124: Computational Prediction and Empirical Validation of Enhanced LTP Effects with Gentle iTBS Protocols
Monday July 7, 2025 16:20 - 18:20 CEST

Kevin Kadak*1,2, Davide Momi1, Zheng Wang1, Sorenza P. Bastiaens1,2, Mohammad P. Oveisi1,3, Taha Morshedzadeh1,2, Minarose Ismail1,4, Jan Fousek5, and John D. Griffiths1,2,6
1Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto.
2Institute of Medical Sciences, University of Toronto
3Institute of Biomaterials and Biomedical Engineering, University of Toronto
4Department of Physiology, University of Toronto
5Central European Institute of Technology, Czech Republic
6Department of Psychiatry, University of Toronto
*Email: kevin.kadak@mail.utoronto.ca
Introduction

TMS is an established neuromodulatory technique for inducing and assessing cortical excitability changes. Intermittent theta-burst stimulation (iTBS), mimicking endogenous neural activity, yields clinical efficacy comparable to traditional protocols but with significantly shorter treatment durations [1,2]. Despite widespread use for depression treatments, iTBS suffers from high inter-subject response variability. We developed a computational model integrating calcium-dependent plasticity within corticothalamic circuitry to predict optimal iTBS parameters, subsequently validating these predictions through empirical testing of motor-evoked potentials (MEPs) across novel and canonical protocols.

Methods
Our computational model simulated iTBS-induced plasticity effects following 600 pulses in corticothalamic circuitry by varying pulse-burst ratios and inter-burst frequency parameters. We then conducted a mixed-measure experimental paradigm testing standard (Protocol A) and four novel iTBS protocols (B-E; 2-5 pulse-burst, 3-7 Hz). MEPs were recorded pre-stimulation (PRE) and post-stimulation (POST1, POST2) to assess induced plasticity effects. Mixed-effects modelling was performed to analyze group-level effects and response rates.
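The explored stimulation-parameter space can be generated directly: pulse onset times for a given pulses-per-burst count and inter-burst frequency, keeping the canonical 600-pulse total with 50 Hz intra-burst pulses and 2 s-on/8 s-off trains. Holding those latter conventions fixed across protocol variants is an assumption of this sketch.

import numpy as np

def itbs_pulse_times(pulses_per_burst=3, burst_hz=5.0, intra_hz=50.0,
                     on_s=2.0, off_s=8.0, total_pulses=600):
    """Pulse onset times (s) for an iTBS protocol variant."""
    times, t_train = [], 0.0
    while True:
        for b in range(int(on_s * burst_hz)):      # bursts within one train
            t0 = t_train + b / burst_hz
            for p in range(pulses_per_burst):      # pulses within one burst
                times.append(t0 + p / intra_hz)
                if len(times) == total_pulses:
                    return np.array(times)
        t_train += on_s + off_s                    # pause between trains

standard = itbs_pulse_times(3, 5.0)                # canonical iTBS (Protocol A)
gentler = itbs_pulse_times(3, 3.0)                 # e.g. Protocol C above
print(standard[-1], gentler[-1])                   # session durations (s)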
Results
Our model predicted that gentler stimulation protocols characterized by lower pulse-burst ratios and targeted inter-burst frequencies would maximize long-term potentiation (LTP) effects while reducing response variance. Empirical results confirmed these predictions, with Protocol C (3 pulses/burst, 3 Hz) capturing the highest response rate (60% vs 47% for standard iTBS) and Protocol B (2 pulses/burst, 5 Hz) driving the strongest LTP effects among responders. Notably, protocols with frequencies aligned to participants' alpha subharmonics further modulated plasticity effects in Protocol B, while higher-frequency protocols (Protocol D, 7 Hz) initially induced LTD, which later inverted to LTP.
Discussion
Our findings demonstrate that gentler protocols outperform standard iTBS in driving consistent LTP effects, with efficacy further modulated by resonance between stimulation frequency and endogenous alpha subharmonics. This research highlights an important mechanistic basis for induced plasticity effects pertaining to protocol intensity whereby lower intensity protocols appear to better engage neuroplasticity mechanisms and mitigate metaplastic saturation. We provide a mechanistic framework and empirical validation for enhancing LTP protocols and improving clinical outcomes in TMS treatments for neuropsychiatric disorders.



Acknowledgements
N/A
References

1. https://doi.org/10.1016/j.biopsych.2007.01.018
2. https://doi.org/10.1016/S0140-6736(18)30295-2
Passi Perduti


P125: Modeling the biophysics of computation in the outer plexiform layer of the mouse retina
Monday July 7, 2025 16:20 - 18:20 CEST

Kyra L. Kadhim*1, 2, Ziwei Huang1, Michael Deistler2, 3, Jonas Beck1, 2, Jakob H. Macke1, 2, 3, Thomas Euler4, Philipp Berens1, 2


1Hertie Institute for AI in Brain Health, University of Tübingen, Tübingen, Germany
2Tübingen AI Center, University of Tübingen, Tübingen, Germany
3Machine Learning in Science, University of Tübingen, Tübingen, Germany
4Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany

*Email: kyra.kadhim@uni-tuebingen.de
Introduction

The outer retina is a complex system of neurons that processes visual information, and it is experimentally accessible for the collection of multimodal data. What makes this system complex and nonlinear are mechanisms such as the phototransduction cascade, specialized ion channels, ephaptic feedback mechanisms, and the ribbon synapse [1]. These mechanisms are not typically included in network models that either fit neural data or perform tasks. In particular, optimizing the parameters of such models is computationally challenging with current modelling approaches, which do not include gradient-based optimization methods. However, ignoring such mechanisms limits the ability to capture the computations performed by the retina.
Methods
We developed a fully differentiable, biophysically detailed model of the outer plexiform layer of the mouse retina and optimized its parameters with gradient descent. We implemented our model using the new software library Jaxley [2], which builds on the state-of-the-art machine learning library JAX. In our model, we have so far implemented the phototransduction cascade [3], ion channels [4], and ribbon synapse [5], and we fit their parameters to electrophysiology and neurotransmitter release data. We then optimized the synaptic conductances of the model to classify images with different contrast and global luminance levels and analyzed the trained parameter distributions.
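The core idea, differentiable biophysics fitted by gradient descent, can be shown in a few lines of plain JAX (the study itself uses Jaxley); the two-parameter, first-order "phototransduction" filter and its synthetic target below are toy stand-ins, not the fitted cascade.

import jax
import jax.numpy as jnp

def response(log_params, light=1.0, dt=1e-3, n=500):
    """Toy first-order photoresponse with gain and time constant (log-params)."""
    gain, tau = jnp.exp(log_params)            # positivity via log-space
    def step(r, _):
        r = r + dt * (gain * light - r) / tau
        return r, r
    _, trace = jax.lax.scan(step, 0.0, None, length=n)
    return trace

target = response(jnp.log(jnp.array([2.0, 0.05])))   # synthetic "data"

def loss(log_params):
    return jnp.mean((response(log_params) - target) ** 2)

log_params = jnp.log(jnp.array([1.0, 0.1]))          # initial guess
grad_fn = jax.jit(jax.grad(loss))
for _ in range(2000):
    log_params -= 0.2 * grad_fn(log_params)          # plain gradient descent
print(jnp.exp(log_params))                           # drifts toward [2.0, 0.05]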
Results
We successfully trained our model of a single photoreceptor with gradient descent and found phototransduction cascade parameters that fit the electrophysiology data from Chen and colleagues [3], as well as parameters of the ribbon synapse model that fit glutamate release data from Szatko, Korympidou, and colleagues [6]. We then built a network of photoreceptors with these trained parameters and a horizontal cell, and we trained the network’s 200 synaptic conductances to classify images in which contrast and global luminance levels were distorted. The model was able to classify these images despite these distortions, providing further evidence that the structure of the outer retina facilitates contrast normalization.
Discussion
Biophysical models are capable of implementing a variety of computations that are often attributed to larger neural networks higher in the sensory processing hierarchy. For instance, the fitted model of the phototransduction cascade enables a layer of photoreceptors to adapt to drastically different global luminance levels [3] while at the same time regulating glutamate release consistent with data. Our model, fit to multimodal data, can also classify images with different contrasts using very few trainable parameters. This small but biophysically-inspired network may support many other computations as well, broadening our appreciation of the outer retina.



Acknowledgements
Hertie Stiftung, DFG, ERC Starting Grants NextMechMod and DeepCoMechTome
References
[1] https://doi.org/10.1016/C2019-0-00467-0
[2] https://doi.org/10.1101/2024.08.21.608979
[3] https://doi.org/10.7554/eLife.93795.1
[4] https://doi.org/10.1016/j.visres.2009.03.003
[5] https://doi.org/10.7554/eLife.67851
[6] https://doi.org/10.1038/s41467-020-17113-8


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P126: Predictive Coding in the Drosophila Optic Lobe
Monday July 7, 2025 16:20 - 18:20 CEST
P126 Predictive Coding in the Drosophila Optic Lobe

Rintaro Kai*1, Naoya Nishiura*1, Keisuke Toyoda*1, Masataka Watanabe*1
1The University of Tokyo
Introduction

In recent years, the complete connectome of the fruit fly has been revealed [1], and estimation of its synaptic efficacies via backpropagation training has led to the reconstruction of T4/T5 motion-selective cells [2]. However, in that study [2], biologically unavailable optical flow was provided as a vector teaching signal. In this study, we used the complete connectome of the fruit fly and implemented Predictive Coding [3] by calculating the error between two tightly coupled cells, namely L1 and C3. The results demonstrate the potential of training the full connectome neural circuitry using only biologically available teaching signals, namely the sensory input itself.
Methods
From the FlyWire dataset, we extracted connectivity information for the neurons and 2,700,000 synapses in the right optic lobe, together with the neurotransmitters present at each synapse, and created a single-layer RNN. The output function of each neuron was clipped, and the weights were normalized per postsynaptic neuron. Photoreceptor neurons received simulated natural video stimuli based on the shape of the fruit fly's eyes, and stimuli then propagated to downstream neurons at each timestep. The network was trained using the mean squared error between the outputs of the anatomically close L1 and C3 neurons, creating a simple autoencoder based on Predictive Coding [3]. Additionally, the activity of neurons at each timestep was visualized to verify appropriate behavior.
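A minimal sketch of this setup follows (our assumptions throughout: the toy network size, the random sign assignment, and the `l1_idx`/`c3_idx` index sets are hypothetical placeholders, not FlyWire identifiers):

```python
# Sketch of a connectome-constrained RNN in PyTorch: connection signs fixed
# by transmitter identity, efficacies trained, outputs clipped, and incoming
# weights normalized per postsynaptic neuron, with an L1-vs-C3 MSE objective.
import torch

torch.manual_seed(0)
N = 1000                                     # toy size; the real circuit is larger
mask = (torch.rand(N, N) < 0.01).float()     # binary connectome (stand-in)
sign = torch.sign(torch.rand(N, N) - 0.5)    # +1/-1 from neurotransmitter type
log_w = torch.nn.Parameter(torch.zeros(N, N))  # trainable synaptic efficacies

l1_idx = torch.arange(0, 50)                 # hypothetical L1 population
c3_idx = torch.arange(50, 100)               # hypothetical C3 population

def step(rate, inp):
    w = mask * sign * torch.exp(log_w)                   # sign-constrained weights
    w = w / (w.abs().sum(dim=1, keepdim=True) + 1e-8)    # per-post normalization
    return torch.clamp(rate @ w.T + inp, 0.0, 1.0)       # clipped output function

opt = torch.optim.Adam([log_w], lr=1e-3)
for trial in range(100):
    inp = torch.zeros(N)
    inp[:200] = torch.rand(200)              # stand-in photoreceptor drive
    rate = torch.zeros(N)
    for _ in range(20):                      # propagate stimuli over timesteps
        rate = step(rate, inp)
    loss = torch.mean((rate[l1_idx] - rate[c3_idx]) ** 2)  # prediction error
    opt.zero_grad(); loss.backward(); opt.step()
```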
Results
The learning of the task was successful, and the error converged to a very low value. Neurons other than those used for the error calculation also showed appropriate activity, indicating that the network functioned effectively as a whole. Careful parameter tuning proved necessary for these modeling settings: for some parameter choices, neuron outputs became constant regardless of input.

Discussion
The results of this study show that it is possible to perform unsupervised learning on the full connectome by taking errors between pairs of neurons, without incorporating artificial neurons or circuits. Future prospects include verifying whether the neuronal activity of the obtained model is biologically valid, examining the biological significance of hyperparameters, and testing whether network behavior and the distribution of neuron roles can be robustly replicated compared to random initialization.



Acknowledgements
This work has been supported by the Mohammed bin Salman Center for Future Science and Technology for Saudi-Japan Vision 2030 at The University of Tokyo (MbSC2030) and JSPS KAKENHI Grant Number 23K25257.
References
[1] Dorkenwald, Sven et al. (2024). Neuronal wiring diagram of an adult brain. Nature, 634(8032), 124-138.
[2] Lappalainen, Janne K. et al. (2024). Connectome-constrained networks predict neural activity across the fly visual system. Nature, 634(8036), 1132-1140.
[3] Rao, Rajesh P. N. & Ballard, Dana H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79-87.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P127: Disentangling Temporal and Amplitude-Driven Contributions to Signal Complexity
Monday July 7, 2025 16:20 - 18:20 CEST
P127 Disentangling Temporal and Amplitude-Driven Contributions to Signal Complexity

Sara Kamali*¹, Fabiano Baroni¹, Pablo Varona¹


¹ Department of Computer Engineering, Autonomous University of Madrid, Madrid, Spain


*Email: sara.kamali@uam.es
Introduction

Quantifying complexity in biomedical signals is crucial for physiological and pathological analysis. Entropy-based methods, like Shannon entropy [1], approximate entropy [2], and sample entropy (SampEn) [3], quantify unpredictability. Some approaches, including increment-based methods [4, 5], capture the entropy of amplitude variations. Existing methods, however, do not distinguish complexity derived from temporal dynamics from that derived from amplitude fluctuations. This limitation restricts insights into the dynamical evolution of signals. We introduce Extrema-Segmented Entropy (ExSEnt), an entropy-based framework that independently analyzes temporal and amplitude components, enhancing understanding of the underlying dynamics.
Methods
We segmented the time series based on extrema: each segment starts at the data point after the current extremum and ends at the next extremum. Two key features were extracted per segment: duration, representing the temporal length, and net amplitude, reflecting the overall signal variation. We then computed SampEn for each feature separately, as well as their joint bivariate entropy, to assess whether they provide independent or correlated information. This approach helps determine whether complexity arises primarily from temporal dynamics or amplitude variations, enhancing the understanding of how different factors drive signal complexity.
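The segmentation and feature-extraction step can be sketched as follows (a minimal illustration, assuming extrema are located via sign changes of the first difference and using a plain O(n²) SampEn estimator; the authors' implementation may differ):

```python
# ExSEnt-style features: per-segment durations and net amplitudes between
# successive extrema, each fed to a simple sample-entropy estimator.
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Plain O(n^2) SampEn; adequate for short feature sequences."""
    x = np.asarray(x, float)
    r = 0.2 * np.std(x) if r is None else r
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2       # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def exsent_features(sig):
    """Segment at extrema; return per-segment durations and net amplitudes."""
    ext = np.where(np.diff(np.sign(np.diff(sig))) != 0)[0] + 1  # extrema indices
    durs = np.diff(ext)                  # temporal length of each segment
    amps = np.diff(sig[ext])             # net amplitude change per segment
    return durs, amps

rng = np.random.default_rng(0)
durs, amps = exsent_features(np.cumsum(rng.standard_normal(2000)))  # Brownian
print(sample_entropy(durs), sample_entropy(amps))
```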
Results
Application of ExSEnt on synthetic data revealed the ability of the metrics to distinguish between different random signals, i.e., Gaussian noise, pink noise, and Brownian motion. We also evaluated the complexity of well-known dynamical systems, such as the Rulkov neuron model, where ExSEnt successfully differentiated between different dynamical regimes. Evaluation of electromyography (EMG) signals during a motor task revealed that movement intervals exhibit lower amplitude complexity but relatively stable temporal entropy compared to the baseline. A strong linear correlation was observed between amplitude ExSEnt and joint ExSEnt, suggesting that amplitude variations are the primary contributors to the joint amplitude-temporal EMG complexity.
Discussion
The ExSEnt framework offers a precise and systematic approach to quantifying temporal and amplitude-driven contributions to complexity, providing a novel perspective for biomedical and neuronal signal analysis. Applying ExSEnt to neural data demonstrates its potential to reveal hidden dependencies between duration and amplitude fluctuations, providing a detailed complexity profile. This approach aids in quantifying dynamic changes and identifying complexity sources in neural disorders and physiological states.




Acknowledgements
Work funded by PID2024-155923NB-I00, CPP2023-010818, and PID2021-122347NB-I00.
References
[1] https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
[2] https://doi.org/10.1073/pnas.88.6.2297
[3] https://doi.org/10.1016/S0076-6879(04)84011-4
[4] https://doi.org/10.3390/e20030210
[5] https://doi.org/10.3390/e18010022
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P128: Electrodiffusion and voltage dynamics in the periaxonal space with spatially detailed finite-element simulations
Monday July 7, 2025 16:20 - 18:20 CEST
P128 Electrodiffusion and voltage dynamics in the periaxonal space with spatially detailed finite-element simulations

Tim M. Kamsma*1,2, R. van Roij1, Maarten H.P. Kole3,4

1Institute for Theoretical Physics, Utrecht University, Utrecht, The Netherlands
2Mathematical Institute, Utrecht University, Utrecht, The Netherlands
3Department of Axonal Signalling, Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences, Amsterdam, The Netherlands
4Cell biology, Neurobiology and Biophysics, Department of Biology, Faculty of Science, Utrecht University, Utrecht, The Netherlands


*Email: t.m.kamsma@uu.nl
Introduction

The submyelin, or periaxonal, space was long considered to be an inert region of the internode. This view has been revised over recent years, as evidence accumulated that the periaxonal region plays important roles in both the electrical saltatory conduction of the action potential [1] and in chemical axo-myelinic cell signalling [2]. The nanoscale dimensions of the periaxonal space make experimental investigations into its electrochemical dynamics extremely challenging. Traditional cable-theory models, though informative for electrical properties [1], provide neither the spatial resolution nor the appropriate ionic transport equations to resolve the complex electrodiffusion profiles inherent to such highly confined geometries.


Methods
To investigate the electrochemical dynamics of axon-myelin spaces, we developed a computational model that employs detailed finite-element simulations to numerically resolve first-principles ion transport equations within a biologically accurate geometry of a myelinated axon. Membrane currents were implemented through standard Hodgkin-Huxley-like voltage-dependent ion channel equations, while outside of the membrane all concentrations and voltages were fully governed by the Poisson-Nernst-Planck equations. These coupled physical equations were numerically solved with the software package COMSOL. The results were compared to traditional simulations using a double-cable model of the NEURON software package.
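For reference, the bulk equations solved in such a model are the standard Poisson-Nernst-Planck system (written here in our notation, not copied from the poster), with Hodgkin-Huxley-type channel currents entering as boundary conditions on the membrane flux:

```latex
% Electrodiffusion of each ion species i (concentration c_i, valence z_i,
% diffusivity D_i) coupled to the electrostatic potential \phi:
\begin{align}
  \frac{\partial c_i}{\partial t}
    &= \nabla \cdot \left[ D_i \left( \nabla c_i
       + \frac{z_i F}{R T}\, c_i \nabla \phi \right) \right], \\
  -\nabla \cdot \left( \varepsilon \nabla \phi \right)
    &= F \sum_i z_i c_i .
\end{align}
% At the axolemma, Hodgkin-Huxley-type currents set the normal ionic flux,
% e.g. for potassium: J_K \cdot n = I_K / (z_K F), with I_K = g_K n^4 (V_m - E_K).
```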

Results
Our computational model autonomously generated biophysically accurate action potentials and spatially resolved all ionic and voltage dynamics. Without clearance mechanisms, periaxonal potassium accumulation of up to ~10 mM was predicted for a single action potential. Consequently, we investigated and revealed possible potassium clearance pathways via the oligodendrocyte-myelin complex. More generally, as all physical quantities are fully resolved with high spatial resolution, this model can flexibly provide other desired insights within the entire modelled geometry. Furthermore, molecular transport, chemical reactions, and fluid flow can be coupled to the same model, which therefore can serve as a versatile platform for future expansions.

Discussion
Although our simulations can probe regions that are experimentally difficult to access, the model still required parameter inputs and is therefore also constrained by the limited experimental data. Future simulations and biological 3D EM data will need to advance in tandem to fully investigate the dynamics in this region. The geometry of the model assumed a rotational symmetry, which considerably simplified the model, but is not entirely biologically accurate. Lastly, we did not resolve the physics within membranes, as this requires molecular scale simulations. However, since phenomenological Hodgkin-Huxley-like membrane current equations are well-tested, we expect that the modelled ionic fluxes are quantitatively accurate.





Acknowledgements
This work was supported by the Science for Sustainability Graduate Programme of Utrecht University.
References
1. Cohen, C. C., Popovic, M. A., Klooster, J., Weil, M. T., Möbius, W., Nave, K. A., & Kole, M. H. (2020). Saltatory conduction along myelinated axons involves a periaxonal nanocircuit. Cell, 180(2), 311-322. https://doi.org/10.1016/j.cell.2019.11.039

2. Micu, I., Plemel, J. R., Caprariello, A. V., Nave, K. A., & Stys, P. K. (2018). Axo-myelinic neurotransmission: a novel mode of cell signalling in the central nervous system. Nature Reviews Neuroscience, 19(1), 49-58. https://doi.org/10.1038/nrn.2017.128
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P129: Of mice and men: Dendritic architecture differentiates human from mice neuronal networks
Monday July 7, 2025 16:20 - 18:20 CEST
P129 Of mice and men: Dendritic architecture differentiates human from mice neuronal networks

Lida Kanari*1, Ying Shi1,5, Alexis Arnaudon1, Natali Barros-Zulaica1, Ruth Benavides-Piccione2, Jay S. Coggan1, Javier DeFelipe2, Kathryn Hess3, Huib D. Mansvelder4, Eline J. Mertens4, Julie Meystre5, Rodrigo de Campos Perin5, Maurizio Pezzoli5, Roy T. Daniel6, Ron Stoop7, Idan Segev8, Henry Markram1 and Christiaan P.J. de Kock4

1Blue Brain Project, Ecole Polytechnique Federale de Lausanne (EPFL), Geneva, Switzerland.
2Laboratorio Cajal de Circuitos Corticales, Universidad Politecnica de Madrid and Instituto Cajal (CSIC), Madrid, Spain
3Laboratory for Topology and Neuroscience, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
4Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
5Laboratory of Neural Microcircuitry, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
6Department of Clinical Neurosciences, Neurosurgery Unit, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
7Center for Psychiatric Neurosciences, Department of Psychiatry, Lausanne University Hospital Center, Lausanne, Switzerland
8Department of Neurobiology and Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel


* Email: lida.kanari@gmail.com

Introduction
The organizational principles that distinguish the human brain from those of other species have been a long-standing enigma in neuroscience. Numerous studies have investigated the correlations between intelligence and neuronal density [1], cortical thickness [2], gyrification [3], and dendritic architecture [4]. However, despite extensive endeavors to unravel its mysteries, numerous aspects of our unique characteristics remain elusive. Alongside several other factors that contribute to human intelligence, in this study [5] we demonstrate that the shapes of dendrites are an important indicator of network complexity that cannot be disregarded in our quest to identify what makes us human.


Results
Using experimental pyramidal cell reconstructions [6], we built representative mouse and human cortical networks (Fig. 1). We integrated experimental data, taking into account the lower cell density in human cortical layers 2 and 3 [7, 8] and the greater interneuron percentage in the human cortex [9]. Human pyramidal cells form highly complex networks (Fig. 1C), as demonstrated by the increased number and dimension of simplices compared to mice. Simple dendritic scaling cannot explain the species-specific connectivity differences. Topological comparison of dendritic structure reveals a much higher perisomatic (basal and oblique) branching density in human pyramidal cells (Fig. 1D), impacting network complexity.

Methods
The Topological Morphology Descriptor [10] represents the neuronal morphology as a persistence barcode, using topological data analysis to characterize the shapes of neurons. Scaling transformations were analyzed to compare mouse and human neurons, with optimization via gradient descent. The connectivity was computed using computational modeling of the cortical layers 2 and 3 [11], and approximates the set of potential connections in mouse and human cortex. Memory capacity was analyzed based on dendritic processing models [12].
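A minimal sketch of the barcode construction on a toy tree may help fix ideas (following the published TMD algorithm [10]; the parent pointers and radial-distance values below are invented for illustration). Each bar pairs a leaf's function value (birth) with the branch point at which its branch is absorbed by a longer-lived sibling (death):

```python
# Toy TMD barcode: leaves are born at their radial distance; at each branch
# point the branch with the largest surviving value continues, the rest die.
parent = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2}          # toy tree, root = 0
f = {0: 0.0, 1: 1.0, 2: 2.0, 3: 4.0, 4: 5.0, 5: 3.0}  # radial distances

children = {}
for c, p in parent.items():
    children.setdefault(p, []).append(c)

def tmd_barcode(node):
    """Return (bars, value of the surviving branch) for the subtree at node."""
    if node not in children:                     # leaf: a branch is born
        return [], f[node]
    bars, alive = [], []
    for c in children[node]:
        b, v = tmd_barcode(c)
        bars += b
        alive.append(v)
    alive.sort()
    winner = alive.pop()                         # largest value survives
    bars += [(v, f[node]) for v in alive]        # siblings die at this branch
    return bars, winner

bars, survivor = tmd_barcode(0)
bars.append((survivor, f[0]))                    # root closes the final bar
print(bars)   # [(3.0, 2.0), (4.0, 1.0), (5.0, 0.0)]
```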

Discussion
Despite lower neuronal density, human pyramidal cells establish higher-order interactions via their distinct dendritic topology, forming complex networks. This enhanced connectivity is supported by interneurons, which maintain excitation-inhibition balance. The increased dendritic complexity of human pyramidal cells correlates with increased memory capacity, suggesting its role in computational efficiency. Rather than increasing neuron count, human brains prioritize single-neuron complexity to optimize network function. Our findings highlight dendritic morphology as a key determinant of network performance, shaping cognition and future research directions.



Figure 1. Multiscale comparison of mouse and human brains, from brain regions to single neurons (A). Greater network complexity (C) emerges in human networks despite the lower neuron density (B), correlating with the higher dendritic complexity of human pyramidal cells. Our findings suggest that dendritic complexity (D) is more substantial for network complexity than neuron density.
Acknowledgements
BBP, EPFL, by ETH Board by SFIT. H.D.M. and C.d.K. by grant awards U01MH114812, UM1MH130981-01 from NIMH, grant no. 945539 (HBP SGA3) Horizon 2020 Framework, NWO 024.004.012, ENW-M2, OCENW.M20.285. R.S. by SNSF (IZLSZ3\_148803, IZLIZ3\_200297, IZLCZ0_206045, 31003A_138526) and Synapsis Foundation (2020-PI02). J.D.F. and R.B.P. by PID2021-127924NB-I00(MCIN/AEI/10.13039/501100011033).


References
[1]https://doi.org/10.3389/neuro.09.031.2009
[2]https://doi.org/10.1016/j.intell.2013.07.010
[3]http://dx.doi.org/10.1016/j.cub.2016.03.021
[4]https://doi.org/10.1016/j.tics.2022.08.012
[5]https://doi.org/10.1101/2023.09.11.557170
[6]https://doi.org/10.1093/cercor/bhv188
[7]https://doi.org/10.1023/A:1024130211265
[8]https://doi.org/10.1023/a:1024134312173
[9]https://doi.org/10.1126/science.abo0924
[10]https://doi.org/10.1007/s12021-017-9341-1
[11]https://doi.org/10.1016/j.cell.2015.09.029
[12]https://doi.org/10.1016/S0896-6273(01)00252-5


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P130: Manifold Inference by Maximising Information: Hypothesis-driven extraction of CA1 neural manifolds via information theory
Monday July 7, 2025 16:20 - 18:20 CEST
P130 Manifold Inference by Maximising Information: Hypothesis-driven extraction of CA1 neural manifolds via information theory

Michael G. Kareithi*1, Mary Ann Go1, Pier Luigi Dragotti2, Simon R. Schultz1

1 Department of Bioengineering, Imperial College London, London, United Kingdom
2Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom

*Email: m.kareithi21@imperial.ac.uk

Introduction
Neural manifolds have been a useful concept for understanding cognition, with recent work showing the importance of "hypothesis-driven" analyses: linking behaviour with manifolds via supervised manifold learning [1]. Linear dimensionality-reduction methods are easier to interpret than their nonlinear counterparts, but often can only detect linear correlations in neural activity. From an information-theoretic perspective, a natural approach to supervised manifold learning is to maximise the Mutual Information between the embedding and the target variable. We use simple linear embeddings with an information-theoretic objective function, Quadratic Mutual Information [2], and apply it as a tool for hypothesis-driven manifold learning in the mouse hippocampus.
Methods
Quadratic Mutual Information (QMI) is derived from Rényi entropy and divergence, a broader family of measures of which Shannon entropy and mutual information (MI) are special cases. Like MI, QMI has the desirable property of being zero if and only if the variables are independent. Its advantage is that it can be estimated from high-dimensional data and is differentiable. We fit a linear projection from activity to a lower-dimensional subspace by maximising the QMI between the projection and a target variable. We call our framework Manifold Inference by Maximising Information (MIMI). We apply MIMI to two-photon calcium recordings in mouse CA1 during a 1D running task.
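A minimal sketch of the idea follows, assuming the Euclidean-distance form of QMI with Gaussian kernels and ad hoc bandwidths (not necessarily the estimator used in the study):

```python
# Fit a linear projection Y = XW by gradient ascent on ED-QMI(Y, C), where C
# is a circular embedding of angular position; sizes and sigmas are toy values.
import torch

def gram(z, sigma):
    return torch.exp(-torch.cdist(z, z) ** 2 / (2 * sigma ** 2))

def qmi_ed(y, c, sy=1.0, sc=1.0):
    Ky, Kc = gram(y, sy), gram(c, sc)
    v_j = (Ky * Kc).mean()                     # joint information potential
    v_m = Ky.mean() * Kc.mean()                # marginal information potential
    v_c = (Ky.mean(1) * Kc.mean(1)).mean()     # cross information potential
    return v_j + v_m - 2 * v_c                 # >= 0; zero iff independent

torch.manual_seed(0)
N, D, K = 200, 50, 2                           # trials, neurons, subspace dim
X = torch.randn(N, D)
pos = torch.rand(N, 1) * 2 * torch.pi          # target: angular position
X[:, 0] += torch.cos(pos[:, 0]); X[:, 1] += torch.sin(pos[:, 0])  # tuned cells
c = torch.cat([torch.cos(pos), torch.sin(pos)], dim=1)

W = torch.nn.Parameter(torch.randn(D, K) * 0.1)
opt = torch.optim.Adam([W], lr=1e-2)
for _ in range(300):
    loss = -qmi_ed(X @ W, c)                   # maximise QMI of the projection
    opt.zero_grad(); loss.backward(); opt.step()
```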
Results
In our dataset, mice run continuously along a circular track [3]. We fit MIMI on calcium fluorescence activity, with the animal's angular position as the target variable, cross-validating with a 75%-25% train-test split. In four out of eight mice we find the majority of position information in a 2-3 dimensional subspace (Fig. 1a). In the sessions without informative subspaces, the linear decodability of position is low even from the full population activity, indicating the absence of a population code (Fig. 1e). The informative subspaces contain ring-shaped manifolds mapping continuously onto the animal's physical coordinates (Fig. 1f).
Discussion
Combining information-theoretic measures with linear embeddings is a useful idea for analysing populations, where our aim is not only to find manifold structure, but to understand how cell assemblies coordinate to sculpt it. MIMI shows that we can find behaviourally-informative manifolds without nonlinear embeddings: only a nonlinear measure of dependence. Downstream analysis can then pose questions about representation by examining the linear transformation weights: for example, asking if two variables are represented orthogonally. We believe MIMI will be a useful framework for interpretable, hypothesis-driven manifold analysis.




Figure 1. A) Explained position variance (R-squared of ridge-regressor, left) and Mutual Information (right) between position and MIMI projection at different dimensionalities. Each line is an individual mouse. B) Position-variance explained by full population vs by MIMI subspace. C) Activity in MIMI subspace for four mice with informative subspaces, coloured by associated position of mouse.
Acknowledgements
References
1.https://doi.org/10.1038/s41586-023-06031-6
2.https://doi.org/10.1007/978-1-4419-1570-2_2
3.https://doi.org/10.3389/fncel.2021.618658
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P131: Personalized Computational Models for Selective and Impact-Driven Brain Stimulation
Monday July 7, 2025 16:20 - 18:20 CEST
P131 Personalized Computational Models for Selective and Impact-Driven Brain Stimulation

Fariba Karimi*1,2, Taylor Newton1, Melanie Steiner1, Antonino Cassara1, Niels Kuster1,2, Esra Neufeld1



1IT’IS Foundation, Zürich, Switzerland
2Swiss Federal Institute of Technology (ETH Zurich), Zürich, Switzerland
3Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva and Sion, Switzerland
4Clinical Neuroscience, University Medical School of Geneva, Geneva, Switzerland


*Email: karimi@itis.swiss

Introduction

Non-invasive brain stimulation (NIBS) offers promising therapeutic avenues for a range of neurological conditions. However, inter-subject response variability remains an important challenge, often limiting its widespread clinical adoption. Here, we present a computational pipeline designed to optimize NIBS by harnessing personalized brain-network-dynamics modeling, toward enhancing both the efficacy and predictability of therapeutic outcomes.

Methods
We developed a comprehensive pipeline on the o2S2PARC platform (see Fig. 1). The pipeline utilizes MRI and diffusion-weighted imaging (DWI) data to construct detailed head models (>40 distinct tissue types) through AI-based segmentation, performs electromagnetic (EM) simulations to determine exposure-induced electric fields and personalized lead-field matrices, and predicts the impact of diverse stimulation conditions on brain network dynamics using personalized neural mass models (NMMs; derived from DWI structural connectivity data and simulated using The Virtual Brain (TVB) [1] framework). Brain network modeling combined with the personalized lead fields permits the synthesis of virtual EEG signals that can be compared with measured data.
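The final step of that chain reduces, schematically, to a linear projection of simulated source activity through the personalized lead field (the shapes below are illustrative stand-ins, not values from the study):

```python
# Virtual EEG synthesis: project NMM source time series through a lead field.
import numpy as np

n_sources, n_channels, n_t = 76, 64, 1000    # e.g. TVB regions, EEG electrodes
L = np.random.randn(n_channels, n_sources)   # stand-in personalized lead field
src = np.random.randn(n_sources, n_t)        # stand-in NMM source activity
eeg = L @ src                                # virtual EEG, channels x time
```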
Results
Using the developed pipeline, we implemented a temporal interference stimulation planning (TIP) tool for optimizing electrode locations for temporal interference stimulation (TIS, a recently introduced transcranial electric stimulation method capable of targeted stimulation at depth). Demonstration applications of our pipeline predicted shifts in EEG spectral responses following transcranial alternating current stimulation (tACS) in accordance with theoretical and empirical data. Additionally, our simulations revealed dynamic fluctuations of inter-hemispheric synchronization in accordance with experimental observations. These results underscore our pipeline's potential in modeling real-world brain responses to NIBS [3].

Discussion
We established a fully automated computational pipeline for personalized NIBS modeling and the optimization of dynamic brain-network response predictions. This pipeline underscores the shift from generic exposure-targeting approaches to a personalized, impact-driven (network dynamics) approach, toward improving the efficacy and precision of NIBS therapies. Current research focuses on the continuous inference of improved model parameters based on measurement feedback and model-predictive control. This work lays the groundwork for adaptive and effective brain-dynamics modulation for the treatment of complex neurological disorders, marking a significant advance in the personalized-medicine landscape [3].






Figure 1. Figure 1: Schematic representation of the developed pipeline on the o2S2PARC platform
Acknowledgements
--
References
1.https://doi.org/10.1016/j.neuroimage.2015.01.002
2.https://doi.org/10.1109/TNSRE.2012.2200046
3.https://doi.org/10.1088/1741-2552/adb88f
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P132: Brain Symphony: A Transformer-Driven Fusion of fMRI Time Series and Structural Connectivity
Monday July 7, 2025 16:20 - 18:20 CEST
P132 Brain Symphony: A Transformer-Driven Fusion of fMRI Time Series and Structural Connectivity

Moein Khajehnejad*1,2, Adeel Razi1,2,3

1Turner Institute for Brain & Mental Health, Monash University, Melbourne, Australia
2Monash Data Futures Institute, Monash University, Melbourne, Australia
3Wellcome Centre for Human Neuroimaging, University College London, United Kingdom

*Email: moein.khajehnejad@monash.edu


Introduction
Understanding brain function requires integrating multimodal neuroimaging data to capture temporal dynamics and pairwise interactions. We propose a novel foundation model fusing fMRI time series, structural connectivity, and effective connectivity graphs derived using Dynamic Causal Modeling (DCM) [1], to obtain robust, interpretable region-of-interest (ROI) embeddings. Our approach enables robust representation learning that generalizes across datasets and supports downstream tasks such as disease classification or detecting neural alterations induced by psychedelics. Additionally, our model identifies the most influential brain regions and time intervals, facilitating interpretability in neuroscience applications.
Methods


Our framework employs two self-supervised encoders. The fMRI encoder utilizes a Spatio-Temporal Transformer to model dynamic ROI embeddings. The connectivity encoder incorporates a Graph Transformer [2] and systematically evaluates multiple advanced graph-based approaches—signed Graph Neural Networks [3], Graph Attention Networks with edge sign awareness [4] and Message Passing Neural Networks with edge-type features [5]—to determine the most effective strategy for capturing excitatory and inhibitory connections for the DCM-derived graphs. To preserve causal semantics, we compare and adapt sign-aware attention and positional encodings using signed Laplacian, random walk differences, and global relational encodings, selecting the most suitable method based on empirical performance. Cross-modal attention integrates the learned embeddings from both encoders, ensuring seamless fusion across modalities. The model is pretrained on the HCP dataset, utilizing both fMRI time series and structural connectivity, and remains adaptable for other datasets incorporating different connectivity measures.
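The fusion step can be illustrated with standard scaled dot-product cross-attention (a sketch under our own assumptions about dimensions and naming, not the authors' architecture):

```python
# Cross-modal attention: fMRI ROI embeddings (queries) attend to
# connectivity-graph ROI embeddings (keys/values) to produce fused embeddings.
import torch
import torch.nn.functional as F

n_roi, d = 400, 128
fmri_emb = torch.randn(n_roi, d)     # from the spatio-temporal transformer
conn_emb = torch.randn(n_roi, d)     # from the (signed) graph transformer

Wq, Wk, Wv = (torch.nn.Linear(d, d) for _ in range(3))
q, k, v = Wq(fmri_emb), Wk(conn_emb), Wv(conn_emb)
attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)   # ROI-to-ROI attention weights
fused = attn @ v                               # connectivity-informed embeddings
```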
Results
We pretrained the model on 900 HCP participants, testing it on 67 held-out subjects and an independent psilocybin dataset (54 participants) [6]. Fig. 1.a shows accurately reconstructed fMRI time series for a test subject. Fig. 1.b presents reconstructed functional and structural connectivity maps, capturing both dynamic and anatomical relationships. Fig. 1.c visualizes low-dimensional ROI embeddings before and after psilocybin administration, revealing clear shifts only in subjects with high subjective effects (i.e. MEQ scores), indicating the model's ability to capture neural alterations. This dataset was not part of pretraining, emphasizing strong transferability and generalizability.



Discussion
This scalable, interpretable framework advances multimodal integration of fMRI and distinct connectivity representations, enhancing classification and causal insight. Future work will compare diffusion-based structural connectivity with DCM-derived effective connectivity to assess the impact of causal representations on robustness in noisy datasets with latent confounders.






Figure 1. Reconstruction and representation capabilities of the multimodal foundation model. (a) Reconstructed fMRI time series for a test subject, demonstrating model accuracy. (b) Reconstructed functional and structural connectivity maps, capturing dynamic and anatomical relationships. (c) Low-dimensional ROI representations before and after psilocybin with greater shifts in high MEQ subjects.
Acknowledgements
A.R. is affiliated with The Wellcome Centre for Human Neuroimaging, supported by core funding from Wellcome [203147/Z/16/Z]. A.R. is a CIFAR Azrieli Global Scholar in the Brain, Mind & Consciousness Program.
References
[1]https://doi.org/10.1016/S1053-8119(03)00202-7
[2]https://doi.org/10.48550/arXiv.2106.05234
[3]https://doi.org/10.1109/ICDM.2018.00113
[4]https://doi.org/10.48550/arXiv.1710.10903
[5]https://doi.org/10.48550/arXiv.1704.01212
[6]https://doi.org/10.1101/2025.03.09.642197
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P133: A Recursive Stability Model of Qualia: Philosophical Self-reference, Neural Attractor Structures, and Experimental Exploration in LLMs
Monday July 7, 2025 16:20 - 18:20 CEST
P133 A Recursive Stability Model of Qualia: Philosophical Self-reference, Neural Attractor Structures, and Experimental Exploration in LLMs

Chang-Eop Kim
Department of Physiology, College of Korean Medicine, Gachon University, 1342, Seongnam-daero, Seongnam 13120, Republic of Korea

Email: eopchang@gachon.ac.kr

Introduction

Qualia represent a fundamental challenge in consciousness research, defined as inherently subjective experiences that resist objective characterization. Philosophically, qualia have been proposed to possess self-referential characteristics, aligning conceptually with Douglas Hofstadter’s "strange loop" theory, which suggests subjective experience might arise from recursive structures [1]. However, explicit mathematical and empirical formulations of this concept remain scarce.

Methods
We developed a mathematical formalization of qualia using recursive stability, identifying fixed-point states reflecting neural circuits recursively referencing their outputs. Neuroscientific literature was reviewed to identify biological phenomena potentially implementing recursive stability. Additionally, analogous candidate structures were explored within artificial neural networks, particularly focusing on attention mechanisms in large language models (LLMs).
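One natural way to write the recursive-stability condition (our notation; the poster's exact formalization may differ) is as a stable fixed point of the circuit's self-referential map:

```latex
% A state x* that reproduces itself under the recursive map f, and is stable:
\begin{align}
  x^{*} &= f(x^{*}), \\
  \rho\!\left( \left. \frac{\partial f}{\partial x} \right|_{x^{*}} \right) &< 1 ,
\end{align}
% i.e., a fixed point of the recurrent update whose Jacobian spectral radius
% is below one, so perturbations decay back to the self-referential state.
```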
Results
The mathematical formulation effectively captured essential characteristics of subjective conscious experiences, including their inherent immediacy and the necessary equivalence between existence and self-awareness. Neuroscientific literature suggested candidate biological structures, such as hippocampal CA3 attractor networks indirectly supporting self-referential episodic memory, and sustained-activity circuits in prefrontal cortex known for roles in conscious cognition [2,3]. At the cellular level, basic biological feedback loops provided foundational examples of recursive mechanisms. Computationally, Hopfield network-like structures, explicitly self-referential and analogous to Hofstadter's "strange loop," were identified in the attention mechanisms of LLMs, indicating potential attractor-like behaviors and recursive self-reference within these models.

Discussion
This research supports recursive stability as a robust mathematical framework bridging philosophical, neuroscientific, and computational perspectives on qualia. Computational findings suggest LLMs as practical platforms for experimentally exploring self-referential consciousness models. Future research should empirically validate these recursive structures within biological systems and further refine computational implementations to deepen our understanding of consciousness.





Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2024-00339889).

References
[1] Hofstadter, D. R. (2007). I Am a Strange Loop. Basic Books. ISBN: 978-0465030798.
[2] Dehaene, S., Lau, H., & Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486-492.https://doi.org/10.1126/science.aan8871
[3] Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776-798. https://doi.org/10.1016/j.neuron.2020.01.026
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P134: Sensory Data Observation is not an Instant Mapping
Monday July 7, 2025 16:20 - 18:20 CEST
P134 Sensory Data Observation is not an Instant Mapping

Chang Sub Kim*

Department of Physics, Chonnam National University, Gwangju 61186, Republic of Korea

*Email: cskim@jnu.ac.kr

Introduction

The brain self-supervises its embodied agent's behavior via perception, learning, and action planning. Researchers have lately adopted computational algorithms such as error backpropagation [1] and graphical models [2] to enhance our understanding of how the brain works. These approaches suit reverse-engineering problems but may not account for real brains. This study aims to provide a biologically plausible theory describing sensory generation, synaptic efficacy, and neural activity as dynamical processes within a physics-attended framework. We argue that sensory observations are generally continuous; therefore, one must handle them appropriately, not as the instant mapping prevalent in Kalman filters [3].


Methods
We formulate a neurophysical theory for the brain's working under the free energy principle (FEP), advocating that the brain minimizes informational free energy (IFE) for autopoietic reasons [4]. We derive the Bayesian mechanics (BM) that actuates IFE minimization and numerically show how the BM performs the minimization. To this end, we must determine the likelihood and prior probabilities in the IFE, which are nonequilibrium physical densities in the biological brain. Using stochastic-thermodynamic methods, we specify them as path probabilities and identify variational IFE as a classical action in analytical mechanics [5]. Subsequently, we apply the principle of least action and obtain the brain's neural equations of functional motion.
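Schematically (our paraphrase of the construction, not the authors' exact equations), the IFE accumulated over a sensory stream is treated as an action functional whose extremization yields Hamilton-type equations for the neural states and their momenta:

```latex
% IFE as an action over the sensory stream, with \mu the neural states and
% p their conjugate momenta (the prediction errors of predictive coding):
\begin{align}
  S[\mu] &= \int \mathcal{L}\!\left(\mu, \dot{\mu}, t\right)\, dt , \\
  \dot{\mu} &= \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial \mu} ,
\end{align}
% so the minimization follows from the principle of least action rather than
% from quasistatic gradient-descent updating.
```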

Results
Our resulting BM governs the co-evolution of the neural state and momentum variables; the momentum variable represents prediction error in the predictive coding framework [6]. Figure 1 depicts a sensory stream observed in continuous time, contrasting with discrete Kalman emission. We have numerically explored static and time-dependent sensory inputs for various cognitive operations such as passive perception, active inference, and learning synaptic weights. Our results reveal optimal trajectories, manifesting the brain's minimization of the IFE in neural phase space. In addition, we will present the neural circuitries implied by the BM, reflecting a network of neural nodes in the generic cortical column.

Discussion
We argued that sensory data generation is a dynamical process, which we incorporated into our formulation for IFE minimization. Our minimization procedure does not invoke the gradient descent (GD) methods in conventional neural networks but arises naturally from the Hamilton principle. In contrast to quasistatic GD updating, our approach can handle fast, time-varying sensory inputs and provides continuous trajectories of least action, optimizing noisy neuronal dynamics. Furthermore, our theory resolved the issue of the lack of local error representation by revealing the momentum variable as representing local prediction error; we also uncovered its neural equations of motion.





Figure 1. Schematic of sensory data observation. The sensory stream is generally continuous, as depicted in the blue noise curve; the neural response is drawn as the red trajectory, retrodicting the sensory causes in continuous time. In contrast, the prevailing Bayesian filtering in the literature handles sensory observation as a discrete mapping delineated by vertical dashed arrows.
Acknowledgements
Not applicable.
References
[1] https://doi.org/10.1016/j.tics.2018.12.005
[2] https://doi.org/10.1016/j.jmp.2021.102632
[3] http://dx.doi.org/10.1115/1.3662552
[4] https://doi.org/10.1038/nrn2787
[5] Landau, L. D., & Lifshitz, E. M. (1976). Mechanics: Course of theoretical physics. Volume 1. 3rd edition. Amsterdam: Elsevier.
[6] https://doi.org/10.1038/4580


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P135: Neural dynamics of reversal learning in the prefrontal cortex and recurrent neural networks
Monday July 7, 2025 16:20 - 18:20 CEST
P135 Neural dynamics of reversal learning in the prefrontal cortex and recurrent neural networks

Christopher M. Kim*1, Carson C. Chow1, Bruno B. Averbeck2

1Laboratory of Biological Modeling, NIDDK/NIH, Bethesda, MD
2Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD
3Current address: Department of Mathematics, Howard University, Washington, DC

*Email: christopher.kim@howard.edu

Introduction

In a probabilistic reversal learning task, a subject learns from initial trials that one of the two options yields reward with higher probability than the other (for instance, the high-value and the low-value options are rewarded 70% and 30% of the time, respectively). When the reward probabilities of two options are reversed at a random trial, the agent must switch its choice preference to maximize reward. Such reversal learning has been used for assessing one’s ability to adapt in a dynamically changing environment with uncertain rewards [1]. In this task, reward outcomes must be integrated over multiple trials before reversing the preferred choice, as the less favorable option yields rewards stochastically.


Methods
We investigated how cortical neurons represent integration of decision-related evidence across trials in the reversal learning task. Previous works considered attractor dynamics along a line in the state space as a neural mechanism for evidence integration [2]. However, when integrating evidence across trials, the subject must perform task-related behaviors within each trial, which could induce non-stationary neural activity. To understand the neural representation of multi-trial evidence accumulation, we analyzed the activity of neurons in the prefrontal cortex of monkeys and recurrent neural networks trained to perform a reversal learning task.
Results
We found that, in a neural subspace encoding reversal probability, its activity represented integration of reward outcomes as in a line attractor. The reversal probability activity at the start of a trial was stationary, stable and consistent with the attractor dynamics. However, during the trial, the activity was associated with task-related behavior and became non-stationary, thus deviating from the line attractor. Fitting a predictive model to neural data showed that the stationary state at the trial start served as an initial condition for launching the non-stationary activity. This suggested an extension of the line attractor model with behavior-induced non-stationary dynamics.
Discussion
Our findings show that, when performing a reversal learning task, a cortical circuit represents reversal probability not only in stable stationary states, as in a line attractor model, but also in dynamic neural trajectories that can accommodate the non-stationary task-related behaviors necessary for the task. Such a neural mechanism demonstrates the temporal flexibility of cortical computation and opens the opportunity to extend existing neural models of evidence accumulation by augmenting them with temporal dynamics.




Acknowledgements
This research was supported by the Intramural Research Program of the National Institutes of Health: the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) and the National Institute of Mental Health (NIMH). This work utilized the computational resources of the NIH HPC Biowulf cluster (https://hpc.nih.gov).
References
[1] Bartolo, R., & Averbeck, B. B. (2020). Prefrontal cortex predicts state switches during reversal learning. Neuron, 106(6), 1044-1054.
[2] Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474), 78-84.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P136: Computational Modeling of Ca2+ Blocker Effect in the Thalamocortical Network of Epilepsy: A Dynamic Causal Modeling Study
Monday July 7, 2025 16:20 - 18:20 CEST
P136 Computational Modeling of Ca2+ Blocker Effect in the Thalamocortical Network of Epilepsy: A Dynamic Causal Modeling Study

Euisun Kim1, Jiyoung Kang2, Jinseok Eo3, Hae-Jeong Park*1,3,4,5

¹Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul, Republic of Korea
²Department of Scientific Computing, Pukyong National University, Busan, Republic of Korea
³Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea
4Department of Cognitive Science, Yonsei University, Seoul, Republic of Korea
5Department of Nuclear Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
Authors 1 and 2 contributed equally.

*Email: parkhj@yonsei.ac.kr
Introduction

Childhood Absence Epilepsy (CAE) is characterized by excessive thalamocortical synchronization, leading to recurrent loss of consciousness [1]. This phenomenon is linked to T-type calcium channel hyperactivity, a key driver of seizure generation [2]. Ethosuximide (ETX), a T-type Ca²⁺ blocker and first-line CAE treatment, is expected to influence both intrinsic neural properties and interregional connectivity, but its mechanism of action on the thalamocortical network hierarchy remains unclear. This study employs Dynamic Causal Modeling (DCM) to analyze ETX-induced network changes from a neuropharmacological perspective [3].


Methods
To examine ETX-induced changes in thalamocortical dynamics, we incorporated voltage-dependent calcium channels into a thalamocortical model (TCM) [4]. The model included six cortical populations (pyramidal, interneuron, and stellate cells) and two thalamic populations (reticular and relay neurons), forming a thalamocortical system. Their temporal evolution is governed by coupled differential equations describing membrane potential and conductance changes mediated by AMPA, NMDA, and GABA-A receptors and by T-type calcium channels, the latter capturing ETX effects. Resting-state EEG data were collected before and after ETX administration in CAE patients. Using DCM of the longitudinal EEG, we analyzed hierarchical thalamocortical connectivity changes and modeled nonlinear interactions influencing the EEG cross-spectral density (CSD) within the Default Mode Network (DMN), including the mPFC, Precuneus, and lateral parietal cortices, a network that is often aberrantly deactivated during CAE seizures, potentially due to subcortical inhibition [5].
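For concreteness, the T-type current that carries the ETX effect in such conductance-based models is commonly written in the following standard form (parameter names are our assumption; ETX is then modeled as a reduction of the maximal conductance):

```latex
% Standard conductance form for the low-threshold T-type calcium current,
% with activation m_T, inactivation h_T, and maximal conductance g_T:
\begin{equation}
  I_T = \bar{g}_T \, m_T^{2} \, h_T \, \left( V - E_{\mathrm{Ca}} \right),
\end{equation}
% so a T-type blocker such as ETX enters the model as a decrease of \bar{g}_T.
```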
Results
ETX significantly altered both thalamocortical and cortical network dynamics. We observed changes in intrinsic neural properties as well as interregional connectivity when comparing pre- and post-ETX conditions. These findings indicate that ETX modulates local neural excitability and large-scale network interactions, thereby contributing to seizure suppression in CAE.

Discussion
By incorporating voltage-dependent Ca²⁺ channels into a thalamocortical model, this study provides preliminary computational evidence that calcium channel blockers help restore large-scale network stability in CAE. The results underscore the therapeutic mechanism by which these agents modify pathological thalamocortical interactions. Further validation and refinement of the computational model may enhance clinical approaches to treating CAE and related epileptic disorders.





Acknowledgements
This research was supported by the Bio&Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. RS-2024-00401794).
References
[1] https://doi.org/10.1016/j.nbd.2023.106094
[2] https://doi.org/10.1111/epi.13962
[3] https://doi.org/10.1016/j.neuroimage.2023.120161
[4] https://doi.org/10.1016/j.neuroimage.2020.117189
[5] https://doi.org/10.3233/BEN-2011-0310


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P137: Coordinated Multi-frequency Oscillatory Bursts Enable Time-structured Dynamic Information Transfer
Monday July 7, 2025 16:20 - 18:20 CEST
P137 Coordinated Multi-frequency Oscillatory Bursts Enable Time-structured Dynamic Information Transfer

Jung Young Kim*1, Jee Hyun Choi1, Demian Battaglia*2

1Korea Institute of Science and Technology (KIST), Seoul, South Korea
2Functional System Dynamics / LNCA UMR 7364, University of Strasbourg, France

*Email: jungyoungk51@kist.re.kr; dbattaglia@unistra.fr


Introduction
Slower (e.g., beta) and faster (e.g., gamma) oscillatory bursts have been linked to multiplexed neural communication, respectively relaying top-down expectations and bottom-up prediction errors [1,2]. These signals target distinct cortical layers with different dominant frequencies [3]. However, this theory faces challenges: multiplexed routing might not require distinct frequencies [4], and phasic enhancement from slow oscillations may be too sluggish to modulate faster oscillatory processes. What fundamental functional advantage, then, could multi-frequency oscillatory bursting offer?



Methods
We investigate information transfer between two neural circuits (e.g., different cortical layers or regions) generating sparsely synchronized, transient oscillatory bursts with distinct intrinsic frequencies in spiking neural networks [5]. Through a systematic parameter space exploration, guided by unsupervised classification, we uncover a diverse range of Multi-Frequency Oscillatory Patterns (MFOPs). These include configurations in which the populations emit bursts at their natural frequencies, deviating from them, or even at more than one frequency simultaneously or sequentially. We then use transfer entropy [6] between simulated multi-unit activity and analyses of single unit spike transmission to assess functional interactions.
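For the directed-interaction analysis, a plain plug-in transfer entropy on discretized multi-unit activity (history length 1; a minimal stand-in for the estimator of [6], which may differ in binning and history depth) looks like this:

```python
# TE(X->Y) = I(Y_t ; X_{t-1} | Y_{t-1}) from a plug-in joint histogram, in bits.
import numpy as np

def transfer_entropy(x, y, bins=4):
    q = np.linspace(0, 1, bins + 1)[1:-1]
    xd = np.digitize(x, np.quantile(x, q))            # discretize into bins
    yd = np.digitize(y, np.quantile(y, q))
    trip = np.stack([yd[1:], yd[:-1], xd[:-1]], axis=1)  # (Y_t, Y_t-1, X_t-1)
    p, _ = np.histogramdd(trip, bins=(bins,) * 3, range=[(0, bins)] * 3)
    p /= p.sum()
    p_yp = p.sum((0, 2)); p_ypxp = p.sum(0); p_ytyp = p.sum(2)  # marginals
    m = p > 0
    num = (p * p_yp[None, :, None])[m]
    den = (p_ypxp[None, :, :] * p_ytyp[:, :, None])[m]
    return np.sum(p[m] * np.log2(num / den))

# toy check: y lags x, so TE(x->y) should clearly exceed TE(y->x)
rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(20000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```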

Results
We demonstrate that distinct MFOPs correspond to different Information Routing Patterns (IRPs), dynamically boosting or suppressing transfer in different directions at precise times, forming thus specific temporal graph motifs. Notably, the “slow” population can send information with latencies shorter than a fast oscillation period and also affect multiple faster cycles within a single slow cycle. Supported by precise analyses of the spiking dynamics of synaptically-coupled single neurons, we propose that MFOPs act as complex "attention mechanisms" (in the sense of ANNs) as they provide a controllable way to selectively weight the relevance of different incoming inputs, as a function of their latencies relative to currently emitted spikes.

Discussion
Our findings show that the coexistence and coordination of oscillatory bursts at different frequencies enables rich, temporally-structured choreographies of information exchange, moving well beyond simple multiplexing (one direction = one frequency). The presence of multiple frequencies considerably expands the repertoire of possible space-time information transfer patterns, providing a resource that could be harnessed to support distinct functional computations. Notably, multi-frequency oscillatory bursting could provide a self-organized manner to tag spiking activity with sequential context information, reminiscent of attention masks in transformers or other ANNs.




Figure 1. A) Networks of spiking neurons with "hardwired" slow and fast oscillatory frequencies. B) Because of network interactions, these networks develop MFOPs with different frequency properties bypassing frequency hardwiring. We extract these bursting events (C) and show that they systematically correspond to spatiotemporal motifs of information transfer (D), aka Information Routing Patterns (IRPs)
Acknowledgements
STEAM Global (Korea Global Cooperative Convergence Research Program)
References
[1] Bastos, A.M., et al. (2015). Neuron 85, 390.
[2] Bastos, A.M., et al. (2020). Proc Natl Acad Sci 117, 31459.
[3] Mendoza-Halliday, D., et al. (2024). Nature Neurosci 27, 547.
[4] Battaglia, D., et al. (2012). PLoS Comp Biol 8, e1002438.
[5] Wang, X.J., and Buzsáki, G.B. (1996). J Neurosci 16, 6402-6413.
[6] Palmigiano, A., et al. (2017). Nat Neurosci 20, 1014.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P138: Quantifying harmony between direct and indirect pathways in a spiking neural network of the basal ganglia; healthy and Parkinsonian states
Monday July 7, 2025 16:20 - 18:20 CEST
P138 Quantifying harmony between direct and indirect pathways in a spiking neural network of the basal ganglia; healthy and Parkinsonian states

Sang-Yoon Kim andWoochang Lim*
Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Korea
*Email: wclim@icn.re.kr

The basal ganglia (BG) show a variety of functions for motor control and cognition. There are two competing pathways in the BG: the direct pathway (DP), which facilitates movement, and the indirect pathway (IP), which suppresses movement. It is well known that the diverse functions of the BG may arise through a "balance" between DP and IP. But, to the best of our knowledge, no quantitative analysis of such balance has been done so far. In this paper, for the first time, we introduce the competition degree C_d between DP and IP. Then, by employing C_d, we quantify their competitive harmony (i.e., competition and cooperative interplay), which could improve our understanding of the traditional "balance" clearly and quantitatively. We first consider the case of a normal dopamine (DA) level, phi* = 0.3. In the case of phasic cortical input (10 Hz), a healthy state with C_d* = 2.82 (i.e., DP is 2.82 times stronger than IP) appears. In this case, normal movement occurs via harmony between DP and IP. Next, we consider the case of a decreased DA level, phi = phi* (= 0.3) x_DA (1 > x_DA > 0). With decreasing x_DA from 1, the competition degree C_d between DP and IP decreases monotonically from C_d*, resulting in the appearance of a pathological Parkinsonian state with reduced C_d. In this Parkinsonian state, the strength of IP is much increased relative to the normal healthy state, leading to disharmony between DP and IP. Due to such a break-up of harmony between DP and IP, impaired movement occurs. Finally, we also study treatment of the pathological Parkinsonian state via recovery of harmony between DP and IP.
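For reference, the central quantity can be written compactly (notation as in the companion abstract P139):

```latex
% Competition degree between the direct and indirect pathways:
\begin{equation}
  C_d = \frac{S_{DP}}{S_{IP}},
\end{equation}
% the ratio of the strength of DP to the strength of IP; the healthy state
% for phasic cortical input corresponds to C_d^* = 2.82.
```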



Acknowledgements

References
[1] Kim,S.-Y., & Lim, W. (2024). Quantifying harmony between direct and indirect pathways in the basal ganglia; healthy and Parkinsonian states.Cognitive Neurodynamics,18, 2809-2829.https://doi.org/10.1007/s11571-024-10119-8
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P139: Break-up and recovery of harmony between direct and indirect pathways in a spiking neural networ the basal ganglia; Huntington's disease and treatment
Monday July 7, 2025 16:20 - 18:20 CEST
P139 Break-up and recovery of harmony between direct and indirect pathways in a spiking neural networ the basal ganglia; Huntington's disease and treatment

Sang-Yoon Kim andWoochang Lim*
Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Korea
*Email: wclim@icn.re.kr

The basal ganglia (BG) in the brain exhibit diverse functions for motor control, cognition, and emotion. Such BG functions could arise via competitive harmony between the two competing pathways, the direct pathway (DP) (facilitating movement) and the indirect pathway (IP) (suppressing movement). As a result of a break-up of harmony between DP and IP, pathological states appear with disorders of movement, cognition, and psychiatry. In this paper, we are concerned with Huntington's disease (HD), a genetic neurodegenerative disorder causing involuntary movement and severe cognitive and psychiatric symptoms. In HD, the number of D2 SPNs (N_D2) is decreased due to degenerative loss; hence, by decreasing x_D2 (the fraction of N_D2), we investigate the break-up of harmony between DP and IP in terms of their competition degree C_d, given by the ratio of the strength of DP (S_DP) to the strength of IP (S_IP) (i.e., C_d = S_DP / S_IP). In the case of HD, the IP is under-active, in contrast to the case of Parkinson's disease with its over-active IP, which results in an increase in C_d from the normal value. Thus, hyperkinetic dyskinesia such as chorea (involuntary jerky movement) occurs. We also investigate treatment of HD, based on optogenetics and GP ablation, by increasing the strength of IP, resulting in recovery of harmony between DP and IP. Finally, we study the effect of loss of healthy synapses of all the BG cells on HD. Due to the loss of healthy synapses, disharmony between DP and IP increases, worsening the symptoms of HD.



Acknowledgements

References
[1]Kim,S.-Y., & Lim, W. (2024). Break-up and recovery of harmony between direct and indirect pathways in the basal ganglia; Huntington's disease and treatment.Cognitive Neurodynamics,18, 2909-2924.https://doi.org/10.1007/s11571-024-10125-w
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P140: Single-unit responses to dynamic salient visual stimuli in the human medial temporal lobe
Monday July 7, 2025 16:20 - 18:20 CEST
P140 Single-unit responses to dynamic salient visual stimuli in the human medial temporal lobe

Alina Kiseleva1, Eva van Gelder1, Hennric Jokeit1, Johannes Sarnthein2, Lukas Imbach1, Tena Dubcek1 & Debora Ledergerber*1

1Swiss Epilepsy Clinic, Clinical Neurophysiology, Zürich, Switzerland
2Universitätsspital Zürich, Klinik für Neurochirurgie, Zürich, Switzerland

*Email: Debora.Ledergerber@kliniklengg.ch
Introduction

The medial temporal lobe (MTL) is critical for mnemonic functions, navigation and social cognition. For many of these higher-order cognitive processes, correlates of single-neuron responses have been found in different regions of human MTL. Amygdala neurons respond to emotional stimuli [1], while hippocampus (HC) and entorhinal cortex (EC) neurons encode memory [2] and navigation [3]. Efficient encoding of task covariates depends on neurons with mixed selectivity, found in rodent subiculum and EC [4]. While this coding scheme has been described in human MTL [5], it remains elusive whether it is applied differentially in different contexts.

Methods
We investigated the activity of 500 neurons in human MTL while participants watched a movie with alternating neutral and emotionally charged clips [6]. To model neuronal firing rates, we applied a Generalized Linear Model, using three covariates: trial type (Face/Landscape), size of the dominant object, and its movement across frames. We then implemented a model selection procedure to identify neurons specifically tuned to each covariate.
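A minimal sketch of this analysis (the covariate coding and the simple AIC-based subset selection below are our assumptions; the study's actual selection procedure may differ):

```python
# Poisson GLM of spike counts on three covariates, with model selection over
# covariate subsets to identify what a (toy) neuron is tuned to.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.integers(0, 2, n),         # trial type: face (1) vs landscape (0)
    rng.standard_normal(n),        # size of the dominant object (z-scored)
    rng.standard_normal(n),        # movement of the object across frames
])
rate = np.exp(0.2 + 0.8 * X[:, 0])             # toy neuron tuned to trial type
spikes = rng.poisson(rate)

best = None
for k in range(1, 4):
    for subset in itertools.combinations(range(3), k):
        design = sm.add_constant(X[:, list(subset)])
        fit = sm.GLM(spikes, design, family=sm.families.Poisson()).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, subset)
print("selected covariates:", best[1])         # expect (0,): trial type only
```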
Results
We found the highest number of neurons encoding the difference between trials of landscapes versus emotional faces (14%). A smaller but substantial population of neurons showed specificity for the main object size and degree of movement (5% and 6%). Additionally, 3% of neurons demonstrated mixed selectivity, responding to the combination of at least two visual features.
Despite the amygdala's established role in processing emotional stimuli, we found only a slightly increased number of neurons specific to emotional trials in the amygdala compared to HC and EC, and the difference in the proportion of emotionally responsive neurons across the MTL was not statistically significant (P > 0.9, χ² test).
Discussion
Overall, this suggests that emotional stimulus processing is distributed across MTL regions and that neurons encoding emotional stimuli may additionally show selectivity for other task features. The presence of mixed selectivity further highlights the integrative role of MTL neurons in processing complex visual and emotional information, potentially supporting flexible cognitive functions.




Acknowledgements
We sincerely appreciate the time and contribution of all patients who participated in this study. We are also grateful to our colleagues and collaborators for their insightful discussions and support. We extend our deep gratitude to the clinical staff for their invaluable assistance in data collection.
References
1. https://doi.org/10.1073/pnas.1323342111
2. https://doi.org/10.1523/jneurosci.1648-20.2020
3. https://doi.org/10.1038/nn.3466
4. https://doi.org/10.1016/j.celrep.2021.109175
5. https://doi.org/10.1016/j.celrep.2024.114071
6. https://doi.org/10.1038/s41597-020-00790-x
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P142: Finite-sampling bias correction for discrete Partial Information Decomposition
Monday July 7, 2025 16:20 - 18:20 CEST
P142 Finite-sampling bias correction for discrete Partial Information Decomposition

Loren Koçillari*1,2, Gabriel M. Lorenz1,4, Nicola M. Engel1, Marco Celotto1,5, Sebastiano Curreli3, Simone B. Malerba1, Andreas K. Engel2, Tommaso Fellin3, and Stefano Panzeri1
1Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
2Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
3Istituto Italiano di Tecnologia, Genova, Italy
4Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
5Department of Brain and Cognitive Sciences, Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
*Email: l.kocillari@uke.de
Introduction

A major question in neuroscience is how groups of neurons interact to generate behavior. Shannon Information Theory has been widely used to quantify dependencies among neural units and cognitive variables [1]. Partial Information Decomposition (PID) [2,3] has extended Shannon theory to decompose neural information into synergy, redundancy, and unique information. Discrete versions of PID are suitable for spike train analysis. However, estimating information measures from real data is subject to a systematic upward bias due to limited sampling [4], an issue that has been largely overlooked in PID analyses of neural data.

Methods
Here, we first studied the bias of discrete PID through simulations of neuron pairs with varying degrees of synergy and redundancy, using sums of Poisson processes with individual and shared terms modulated by stimuli. We assumed that the bias of union information (the sum of unique information and redundancy) equals that of the information obtained from stimulus-uncorrelated neurons. We found that this assumption accurately matched simulated data, allowing us to derive analytical approximations of PID biases in large sample sizes. We used this knowledge to develop efficient bias-correction methods, validating them on empirical recordings from 53,113 neuron pairs in the auditory cortex, posterior parietal cortex, and hippocampus of mice.
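A schematic of this simulation-and-shuffle logic: neuron pairs built from individual and shared stimulus-modulated Poisson terms, with a permutation estimate of the limited-sampling bias standing in for the authors' analytical corrections; all names and parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)

def neuron_pair(stim, shared_gain=1.0):
    """Spike counts as sums of individual and shared Poisson terms,
    both modulated by the stimulus (more sharing -> more redundancy)."""
    shared = rng.poisson(shared_gain * (1 + stim))
    r1 = rng.poisson(1 + stim) + shared
    r2 = rng.poisson(1 + stim) + shared
    return r1, r2

n_trials = 100                       # limited-sampling regime
stim = rng.integers(0, 2, n_trials)
r1, r2 = neuron_pair(stim)

# Naive (plug-in) joint information I(S; R1, R2)
joint = r1 * 50 + r2                 # encode the pair as one discrete symbol
naive = mutual_info_score(stim, joint)

# Bias estimate: information surviving when the responses are decoupled
# from the stimulus by shuffling, averaged over permutations
bias = np.mean([mutual_info_score(rng.permutation(stim), joint)
                for _ in range(100)])
print(f"naive {naive:.3f} nats, shuffle bias {bias:.3f} nats")
```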
Results
Our results show that limited sampling bias affects all terms in discrete PIDs, with synergy exhibiting the largest upward bias. The bias of synergy grows quadratically with the number of possible discrete responses of individual neurons, whereas the bias of unique information scales linearly and has intermediate values, while redundancy remains almost unbiased. Thus, neglecting or failing to correct for this bias leads to substantially inflated synergy estimates. Simulations and real data analyses showed that our bias-correction procedures can mitigate this problem, leading to much more precise estimates of all PID components.

Discussion
Our study highlights the systematic overestimation of synergy in both simulated and empirical datasets, underscoring the need for bias-correction methods, and offers empirically validated ways to correct for this problem. These findings provide a computational and theoretical basis for enhancing the reliability of PID analyses in neuroscience and related fields. Our work informs experimental design by providing guidelines on the sample sizes required for unbiased PID estimates and supports computational neuroscientists in selecting efficient PID bias-correction methods.





Acknowledgements
This work was supported by the NIH Brain Initiative grant U19 NS107464 (to SP and TF), the NIH Brain Initiative grant R01 NS109961 and R01 NS108410, the Simons Foundation for Autism Research Initiative (SFARI) grant 982347 (to SP), the European Union’s European Research Council grants NEUROPATTERNS 647725 (to TF) and cICMs ERC-2022-AdG-101097402 (to AKE).
References
1. Quian Quiroga, R., Panzeri, S. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nature Reviews Neuroscience, 10, 173-185.
2. Williams, P. L., Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
3. Bertschinger, N., Rauh, J., Olbrich, E., Jost, J., Ay, N. (2014). Quantifying unique information. Entropy, 16, 2161-2183.
4. Panzeri, S., Treves, A. (1996). Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7, 87-107.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P143: Redundant stimulus encoding in ferret cortex during a lateralized detection task
Monday July 7, 2025 16:20 - 18:20 CEST
P143 Redundant stimulus encoding in ferret cortex during a lateralized detection task

Loren Koçillari*1,2, Edgar Galindo-Leon2, Florian Pieper2, Stefano Panzeri1, Andreas K. Engel2
1Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany

2Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), 20246 Hamburg, Germany

*Email: l.kocillari@uke.de
Introduction

The brain’s ability to integrate diverse sources of information is crucial for perception and decision-making. It can combine inputs synergistically to increase information capacity or redundantly to enhance signal reliability and robustness. Previous research has shown that redundant information between mouse auditory neurons increases during correct compared to incorrect trials in a tone discrimination task [1]. However, it remains unclear how redundancy’s behavioral role generalizes at larger scales, across frequency bands, and between unimodal and multimodal sensory stimuli. Using Partial Information Decomposition (PID) [2], we analyze redundant and synergistic information in ferret cortical activity during an audiovisual task.

Methods
We studied information processing in behaving ferrets during a visual or audiovisual stimulus detection task [3]. Brain activity from auditory, visual, and parietal areas of the left hemisphere was recorded using a 64-channel ECoG array [3]. We quantified task-related changes in single-channel local field potential (LFP) power and phase across time and frequency bands. We assessed stimulus encoding in individual channels by computing time-resolved Shannon mutual information between stimulus location and LFP power or phase. Finally, using PID, we quantified behaviorally relevant synergistic and redundant stimulus-related information conveyed by channel pairs at information peaks, in relation to correct choices and faster reaction times.
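A sketch of the time-resolved mutual-information computation, assuming equipopulated discretization of the LFP power at each time point; the function name, bin count, and synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def time_resolved_info(power, stim, n_bins=4):
    """I(stimulus location; LFP power) at each time point, with power
    discretized into equipopulated bins. power: (trials, time)."""
    info = np.empty(power.shape[1])
    for t in range(power.shape[1]):
        edges = np.quantile(power[:, t], np.linspace(0, 1, n_bins + 1))
        binned = np.digitize(power[:, t], edges[1:-1])
        info[t] = mutual_info_score(stim, binned)
    return info / np.log(2)          # convert nats to bits

# Hypothetical example: 200 trials, 300 samples of theta-band power
rng = np.random.default_rng(2)
stim = rng.integers(0, 2, 200)       # left vs right stimulus location
power = rng.standard_normal((200, 300)) + 0.8 * stim[:, None]
print(time_resolved_info(power, stim).max())
```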

Results
We found that stimulus information, for both LFP power and phase, was primarily present in the peri-stimulus interval at lower frequency bands (theta and alpha), while beta and gamma bands contained less information. Stimulus information in the theta band was greater in hit trials than in miss trials and in fast-hit trials than in slow-hit trials, suggesting that the information content of theta activity is behaviorally relevant. Redundancy across channel pairs in the theta-band was higher in hit than in miss trials and in fast-hit trials than in slow-hit trials, whereas synergy was greater in miss and slow-hit trials.

Discussion
Our results suggest that the amount of information encoded in the theta band is behaviorally relevant for perceptual discrimination. They also indicate that redundancy is more beneficial than synergy for correct or rapid perceptual judgements during both visual and audiovisual stimulus detection. This supports the notion that the advantages of redundancy for downstream signal propagation and robustness outweigh the limit it places on the total information that can be encoded across areas.





Acknowledgements
This work was supported by the cICMs ERC-2022-AdG-101097402 (to AKE).
References
1. Koçillari, L., et al. (2023). Behavioural relevance of redundant and synergistic stimulus information between functionally connected neurons in mouse auditory cortex. Brain Informatics, 10(1), 34.
2. Williams, P. L., Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
3. Galindo-Leon, E. E., et al. (2025). Dynamic changes in large-scale functional connectivity prior to stimulation determine performance in a multisensory task. Frontiers in Systems Neuroscience, 19, 1524547.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P144: Event-driven eligibility propagation: combining efficiency with biological realism
Monday July 7, 2025 16:20 - 18:20 CEST
P144 Event-driven eligibility propagation: combining efficiency with biological realism

Agnes Korcsak-Gorzo*1,2, Jesús A. Espinoza Valverde3, Jonas Stapmanns4, Hans Ekkehard Plesser5,1,6, David Dahmen1, Matthias Bolten3, Sacha J. van Albada1,7, Markus Diesmann1,2,8,9
1Institute for Advanced Simulation (IAS-6), Computational and Systems Neuroscience, Forschungszentrum Jülich, Jülich, Germany
2Fakultät 1, RWTH Aachen University, Aachen, Germany
3Department of Mathematics and Science, University of Wuppertal, Wuppertal, Germany
4Department of Physiology, University of Bern, Bern, Switzerland
5Department of Data Science, Faculty of Science and Technology, Norwegian University of Life Sciences, Aas, Norway
6Käte Hamburger Kolleg, RWTH Aachen University, Aachen, Germany
7Institute of Zoology, University of Cologne, Cologne, Germany
8JARA-Institute Brain Structure-Function Relationships (INM-10), Forschungszentrum Jülich, Jülich, Germany
9Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany


*Email: a.korcsak-gorzo@fz-juelich.de
Introduction

Understanding the neurobiological computations underlying learning is enhanced by simulations, which serve as a critical bridge between experimental findings and theoretical models. Recently, several biologically plausible learning algorithms have been proposed for simulating spiking recurrent neural networks, achieving performance comparable to backpropagation through time (BPTT) [1]. In this work, we adapt one such learning rule, eligibility propagation (e-prop) [2], to NEST, a spiking neural network simulator optimized for large-scale simulations.

Methods
To improve computational efficiency and enable large-scale simulations, we replace the original time-driven synaptic updates - executed at every time step - with an event-driven approach, where synapses are updated only when activated by a spike. This requires storing the e-prop history between weight updates, and with optimized history management, we significantly reduce computational overhead. Additionally, we replace components inspired by machine learning with biologically plausible mechanisms and extend the model with features such as continuous dynamics, strict locality, sparse connectivity, and approximations that eliminate vanishing terms, further enhancing computational efficiency.
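The contrast between the two update schemes can be sketched with a single exponentially decaying trace standing in for the full e-prop state: between spikes the trace evolves predictably, so the event-driven scheme only touches the synapse when a spike arrives. This is a conceptual sketch with assumed constants, not NEST's implementation:

```python
import numpy as np

alpha = 0.95  # per-step decay of a low-pass-filtered eligibility trace

def time_driven(e, spikes):
    """Original scheme: update the trace at every time step."""
    history = []
    for s in spikes:
        e = alpha * e + s
        history.append(e)
    return history

def event_driven(e, spike_times):
    """Event-driven scheme: update only at presynaptic spike arrivals,
    decaying the trace analytically over the silent interval."""
    last_t, history = 0, []
    for t in spike_times:
        e *= alpha ** (t - last_t)   # catch up over the spike-free gap
        e += 1.0
        last_t = t
        history.append((t, e))
    return history

spike_times = [3, 400, 410, 990]
spikes = np.zeros(1000)
spikes[spike_times] = 1.0
dense = time_driven(0.0, spikes)
sparse = event_driven(0.0, spike_times)
assert np.isclose(dense[990], sparse[-1][1])  # same trace, far fewer updates
```

With sparse biological activity, the event-driven variant performs a handful of updates where the time-driven one performs thousands, which is the efficiency gain the abstract describes.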
Results
We demonstrate that our event-driven weight update scheme accurately reproduces the behavior of the original time-driven e-prop model (see Fig. 1) while significantly reducing computational costs, particularly in biologically realistic settings with sparse activity. We validate this approach on various biologically motivated regression and classification tasks, including neuromorphic MNIST [3]. Furthermore, we show that learning performance and computational efficiency remain comparable to those of the original model, despite the incorporation of biologically inspired features. Strong and weak scaling experiments confirm the robust scalability of our implementation, supporting networks with up to millions of neurons.
Discussion
By integrating biologically enhanced e-prop plasticity into an established open-source spiking neural network simulator with a broad and active user base, we aim to facilitate large-scale learning experiments. Additionally, this work provides a foundation for implementing other three-factor learning rules from the extensive literature in an event-driven manner. By bridging AI and computational neuroscience, our approach has the potential to enable large-scale AI networks to leverage energy-efficient biological mechanisms.




Figure 1. Implementation of event-driven e-prop demonstrated on a temporal pattern generation task. Learning occurs through updates to input, recurrent, and output synapses. The upper middle plot illustrates the correspondence between the event-driven and time-driven e-prop models.
Acknowledgements
This work was supported by Joint Lab SMBH; HiRSE_PS; NeuroSys (Clusters4Future, BMBF, 03ZU1106CB); EU Horizon 2020 Framework Programme for Research and Innovation (945539, Human Brain Project SGA3) and Europe Programme (101147319, EBRAINS 2.0); computing time on JURECA (JINB33) via JARA Vergabegremium at FZJ; and Käte Hamburger Kolleg: Cultures of Research (c:o/re), RWTH Aachen (BMBF, 01UK2104).
References
1. Werbos, P. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10), 1550–1560.
2. Bellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., & Maass, W. (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1), 3625.
3. Orchard, G., Jayawant, A., Cohen, G., & Thakor, N. (2015). Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades. Frontiers in Neuroscience, 9, 437.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P145: Biophysical thalamic neuron models to probe the impact of ultrasound induced heating in the brain
Monday July 7, 2025 16:20 - 18:20 CEST
P145 Biophysical thalamic neuron models to probe the impact of ultrasound induced heating in the brain

Rikinder Kour1, Ayesha Jameel2,3, Joely Smith3,4, Peter Bain5,6, Dipankar Nandi5,6, Brynmor Jones3, Rebecca Quest3,4, Wladyslaw Gedroyc2,3, Roman Borisyuk7,Nada Yousif1*

1School of Physics Engineering and Computer Science, University of Hertfordshire, UK
2Department of Surgery and Cancer, Imperial College London, UK
3Department of Imaging, Imperial College Healthcare NHS Trust, London, UK
4Department of Bioengineering, Imperial College London, UK
5Division of Brain Sciences, Imperial College London, UK
6Department of Neurosciences, Imperial College Healthcare NHS Trust, London, UK
7Department of Mathematics and Statistics, University of Exeter, Exeter, UK


* Email: n.yousif@herts.ac.uk
Introduction

High intensity focussed ultrasound (HIFU) is used for ablating thalamic neurons to treat tremor [1]. Low intensity focussed ultrasound (LIFU) can be used for neuromodulation [2], and previous modelling suggests that LIFU induces neuronal excitation via mechanical modulation of the cell membrane [3,4]. Although modelling of the neural effects of HIFU is limited, understanding the effects of heating during HIFU at sub-ablative temperatures is important, as this is used for monitoring side effects and clinical improvement during tremor treatment [5]. Here we modified biophysical thalamocortical neuron models [6,7] to look at the change in firing patterns as HIFU-induced heating approaches ablative temperatures.


Methods
First, we used data from magnetic resonance thermography performed during a HIFU treatment to select the temperature value for the ‘celsius’ parameter in NEURON [8]. We then examined the effect of temperature on the neuronal firing, as mediated by the parameters of gating equations [9]. Next, we added temperature dependence for the membrane capacitance, as shown experimentally [10] and in a previous modelling study [11]. We compared the effect of temperature in single neurons with one, three and 200 compartments under current clamp conditions with different input current levels [6]. Finally, we considered the impact of increasing temperature on a small network of two excitatory thalamic neurons [7] and two inhibitory reticular neurons.
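A sketch of the standard Q10-style temperature scaling that underlies NEURON's 'celsius' parameter, with an assumed linear temperature dependence of the membrane capacitance; the reference temperatures, Q10 value, and 1%/°C slope are illustrative assumptions, not the study's values:

```python
import numpy as np

def q10_factor(celsius, t_ref=23.0, q10=3.0):
    """Multiplicative speed-up of gating rates at temperature `celsius`
    relative to the reference temperature of the channel kinetics."""
    return q10 ** ((celsius - t_ref) / 10.0)

def gating_time_constant(tau_ref, celsius, t_ref=23.0, q10=3.0):
    """Gating time constants shrink as temperature rises."""
    return tau_ref / q10_factor(celsius, t_ref, q10)

def membrane_capacitance(cm_ref, celsius, t_ref=37.0, slope=0.01):
    """Capacitance rising weakly with temperature (assumed slope,
    in the spirit of refs [10, 11])."""
    return cm_ref * (1.0 + slope * (celsius - t_ref))

# Body temperature, the ~40 degC firing-termination point, and the
# 62 degC ablative peak observed in the thermography data
for T in (37.0, 40.0, 62.0):
    print(T, q10_factor(T), gating_time_constant(1.0, T),
          membrane_capacitance(1.0, T))
```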
Results
The thermography data (Fig. 1A) shows that at the HIFU target site, the temperature increased up to 62°C for a treatment sonication. With temperature dependent parameters of the gating equations, increasing temperatures lead to inhibition of the neuron (Fig. 1B). Interestingly, when including a temperature dependent membrane capacitance, we observed a similar pattern of results. Furthermore, we also saw the same effect of temperature on firing rate regardless of the number of compartments modelled. Finally, the network model showed that although with changing temperature the firing of the individual neurons both increased and decreased, we still observe an overall termination of firing in all neurons as the temperature exceeds 40°C.

Discussion
HIFU is commonly used to thermally ablate the thalamus and suppress tremor, via application of ultrasound energy called sonications. Test sonications are used to heat the tissue to sub-ablative temperatures to confirm targeting and test for adverse effects. This study looked at the impact of such sub-ablative heating on single neuron models and a small network representative of the target region. Our results indicate that once temperatures exceed 40°C neuronal firing is completely inhibited. Future work will extend the network model to look at downstream effects of heating. Such work will allow us to better understand the link between subablative temperature increases, suppression of tremor and adverse effects for optimising treatment.



Figure 1. (A) The heating induced by a HIFU treatment sonication. The target is at the centre of the image and the temperature reaches 62°C. (B) The results from simulating a single-compartment thalamocortical neuron at different temperatures, when the neuron has only temperature-dependent gating equations (black) and when the membrane capacitance also has temperature dependence (red).
Acknowledgements
NY is funded by the Royal Academy of Engineering and the Leverhulme Trust and AJ is partially funded by Funding Neuro.
References
[1] 10.3389/fneur.2021.654711
[2] 10.1016/j.cub.2013.10.029
[3] 10.1523/ENEURO.0136-15.2016
[4] 10.1088/1741-2552/ab1685
[5] 10.1002/ana.26945
[6] 10.1523/JNEUROSCI.18-10-03574.1998
[7] 10.1152/jn.1996.76.3.2049
[8] 10.1007/978-1-4614-7320-6_795-2
[9] 10.1007/978-1-4614-7320-6_236-1
[10] 10.1016/0301-4622(94)00103-Q
[11] 10.3389/fncom.2022.933818
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P146: Fast Visual Reorientation in Postsubicular Head-Direction Cells Conditional on Cue Visibility
Monday July 7, 2025 16:20 - 18:20 CEST
P146 Fast Visual Reorientation in Postsubicular Head-Direction Cells Conditional on Cue Visibility

Sven Krausse1,2, Emre Neftci1,2,Alpha Renner*1
1Forschungszentrum Jülich, Aachen, Germany
2RWTH Aachen, Aachen, Germany

*Email: a.renner@fz-juelich.de
Introduction

Accurate spatial navigation relies on head-direction (HD) cells, which encode orientation in allocentric coordinates, like a neural compass [1,2]. Found, e.g., in the postsubiculum (PoSub) and thalamus, HD cells integrate angular velocity signals from vestibular, proprioceptive, and optic flow inputs, recalibrating via visual cues [2] to avoid drift. Reorientation speed after cue absence is key to understanding the HD system's dynamics and for bio-inspired models. [3] reported rapid reorientation, while [4] suggested that an internal gain factor modulates it, though its mechanism remains unclear. Using a new dataset [5], we examine reorientation dynamics, finding it is fast but contingent on cue visibility.
Methods
We analyzed a dataset [5] containing head tracking and PoSub spike trains from six mice. Internal HD was decoded from spikes using a Bayesian approach [5]. Mice navigated a circular platform with dim LED cues (Fig. 1a) alternating between adjacent walls in 16 trials. Trials were excluded if >20% of the first minute after a cue switch had unreliable tracking, if movement ceased for >5 s, or if HD failed to reorient. Using head tracking data, we reconstructed each mouse's visual field (FOV = 180°) to estimate cue visibility. Reorientation speed was quantified via exponential fits (scipy.optimize.curve_fit), with time constants (τ) constrained to 0.1–3 s and magnitudes to 0–90°. Fits to the aligned mean error (Figs. 1b,c) used unconstrained τ and magnitude, with no delay term.
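A sketch of the constrained exponential fit described above, using scipy.optimize.curve_fit with the stated bounds; the synthetic error trace, sampling rate, and initial guesses are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def reorientation(t, tau, magnitude):
    """Exponential decay of the HD decoding error after a cue switch."""
    return magnitude * np.exp(-t / tau)

# Hypothetical single-trial error trace (degrees), sampled at 10 Hz
rng = np.random.default_rng(3)
t = np.arange(0, 10, 0.1)
error = 80 * np.exp(-t / 0.8) + 5 * rng.standard_normal(t.size)

# Constrained fit: tau in [0.1, 3] s, magnitude in [0, 90] degrees,
# mirroring the bounds described in the Methods
(tau, mag), _ = curve_fit(reorientation, t, error,
                          p0=[1.0, 45.0],
                          bounds=([0.1, 0.0], [3.0, 90.0]))
print(f"tau = {tau:.2f} s, magnitude = {mag:.1f} deg")
```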
Results
In Fig. 1d, after a cue switch, decoding error decreases from 90° as HD reorients. Reorientation does not always occur immediately but around when the cue becomes visible. Comparing error aligned in time by cue switch (Fig. 1b) vs. fitted delay (1c), the latter improves alignment and yields faster τ. Fig. 1e suggests that fitted switching times can be predicted from the mouse’s FOV, but only for “reorientation” trials (blue) where the cue appeared outside the FOV. Cues appearing within the FOV may cause a conflict between reanchoring and reorientation due to the lack of a dark phase between trials. Prediction cannot be perfect as pupil orientation and blinking are unknown. Based on these preliminary results we develop a model of reorientation dynamics to capture additional effects.
Discussion
Consistent with [3], we confirm that reorientation occurs in abrupt jumps, but alignment must consider the visual FOV rather than assuming omnidirectional vision. While in [3] mice were trained to fixate cues, the FOV's role may seem trivial but is often ignored. Our findings offer a better mechanistic understanding of the gain factor that mediates reorientation speed, found by [4] in the thalamus but not yet mechanistically explained. More broadly, our results contribute to an integrative model of HD reorientation and reanchoring, advancing both neuroscientific understanding and bio-inspired navigation systems (which we plan to build in the future [6]).



Figure 1. Fig. 1 a. Arena, platform, cues and FOV b. Decoding error aligned by cue switch c. Error aligned by fitted internal HD switch d. Single trial where cue switch occurs roughly as the cue enters FOV. Difference between red and black curves is decoding error (blue). e. Estimated time until cue becomes visible vs. fitted delay. Diagonal in black, points where cue appears within FOV in grey.
Acknowledgements
This research was funded by VolkswagenStiftung [CLAM 9C854]. For this work, the data from Duszkiewicz et al. (2024) [5] was used, and we thank the authors for making this data available. We especially thank Adrian Duszkiewicz for answering our questions and providing additional advice on the data. We thank Johannes Leugering, Friedrich Sommer and Paxon Frady for their feedback.
References
[1] Ranck, J. B., Jr. (1984). Head-direction cells in the deep layers of dorsal presubiculum of freely moving rats. In Soc. Neuroscience Abstr. (Vol. 10, p. 599).
[2] Taube et al. (1990). https://doi.org/10.1523/JNEUROSCI.10-02-00420.1990
[3] Zugaro et al. (2003). https://doi.org/10.1523/JNEUROSCI.23-08-03478.2003
[4] Ajabi et al. (2023). https://doi.org/10.1038/s41586-023-05813-2
[5] Duszkiewicz et al. (2024). https://doi.org/10.1038/s41593-024-01588-5
[6] Krausse et al. (2025). https://doi.org/10.48550/arXiv.2503.08608
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P147: Latency correction in sparse neuronal spike trains with overlapping global events
Monday July 7, 2025 16:20 - 18:20 CEST
P147 Latency correction in sparse neuronal spike trains with overlapping global events

Arturo Mariani1, Federico Senocrate1, Jason Mikiel-Hunter2, David McAlpine2, Barbara Beiderbeck3, Michael Pecka4, Kevin Lin5, Thomas Kreuz6,7*

1Department of Physics and Astronomy, University of Florence, Sesto Fiorentino, Italy
2Department of Linguistics, Macquarie University, Sydney, Australia
3Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität, Munich, Germany
4Division of Neurobiology, Faculty of Biology, Ludwig-Maximilians-Universität, Munich, Germany
5École Nationale Supérieure de l'Électronique et de ses Applications, Cergy, France
6Institute for Complex Systems (ISC), National Research Council (CNR), Sesto Fiorentino, Italy
7National Institute of Nuclear Physics (INFN), Florence Section, Sesto Fiorentino, Italy

*Email: thomas.kreuz@cnr.it


Introduction
In Kreuz et al., J Neurosci Methods 381, 109703 (2022) [1], two methods were proposed that perform latency correction, i.e., optimise the spike time alignment of sparse neuronal spike trains with well-defined global spiking events. The first, based on direct shifts, is fast but uses only partial latency information; the other makes use of the full information but relies on computationally costly simulated annealing. Both methods reach their limits and can become unreliable when successive global events are not sufficiently separated or even overlap.





Methods
Here [2] we propose an iterative scheme that combines the advantages of the two original methods by using as much of the latency information as possible in each step and by employing a very fast extrapolation-based direct shift method instead of the much slower simulated annealing.
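A minimal sketch of the direct-shift idea underlying these methods, assuming for simplicity one spike per global event and no overlap; the helper direct_shift and its convergence loop are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def direct_shift(spike_trains, n_iter=5):
    """Simplified latency correction by direct shifts: iteratively shift
    each train by the mean offset of its spikes from the event centers
    (the across-train averages of the sorted spikes)."""
    trains = [np.asarray(st, float) for st in spike_trains]
    for _ in range(n_iter):
        centers = np.mean([np.sort(st) for st in trains], axis=0)
        for i, st in enumerate(trains):
            trains[i] = st - np.mean(np.sort(st) - centers)
    return trains

# Hypothetical example: three global events, three neurons with latencies
events = np.array([10.0, 50.0, 90.0])
latencies = [0.0, 2.0, -1.5]
trains = [events + lat for lat in latencies]
aligned = direct_shift(trains)
print(np.ptp([t.mean() for t in aligned]))  # spread of mean spike times -> ~0
```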




Results
We illustrate the effectiveness and the improved performance, measured in terms of the relative shift error, of the new iterative scheme not only on simulated data with known ground truths but also on single-unit recordings from two medial superior olive neurons of a gerbil. The iterative scheme outperforms the existing approaches on both the simulated and the experimental data. Due to its low computational demands, and in contrast to simulated annealing, it can also be applied to very large datasets.

Discussion
The new method generalises and improves on the original methods in terms of both accuracy and speed. Importantly, it is the only method that allows overlapping global events to be disentangled.





Acknowledgements
J.M.H. and B.B. were supported in this study by an Australian Research Council Laureate Fellowship (FL 160100108) awarded to D.M.
References
[1] Kreuz, T., Senocrate, F., Cecchini, G., Checcucci, C., Mascaro, A. L. A., Conti, E., Scaglione, A., & Pavone, F. S. (2022). Latency correction in sparse neuronal spike trains. Journal of Neuroscience Methods, 381, 109703. http://dx.doi.org/10.1016/j.jneumeth.2022.109703
[2] Mariani, A., Senocrate, F., Mikiel-Hunter, J., McAlpine, D., Beiderbeck, B., Pecka, M., Lin, K., & Kreuz, T. (2025). Latency correction in sparse neuronal spike trains with overlapping global events. Journal of Neuroscience Methods, 110378. https://doi.org/10.1016/j.jneumeth.2025.110378
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P148: ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks
Monday July 7, 2025 16:20 - 18:20 CEST
P148 ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks

Laura Kriener1,2, Ben von Hünerbein*1, Kristin Völk3, Timo Gierlich1, Federico Benitez1, Walter Senn1, Mihai A. Petrovici1

1Department of Physiology, University of Bern, 3012 Bern, Switzerland
2Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
3Catlab Engineering GmbH, Grafrath, Germany

*Email: ben.vonhuenerbein@unibe.ch



Introduction
To learn complex action sequences, neural networks must maintain memories of past states. Typically, the required transients are produced by strong network recurrence. The biological plausibility of existing solutions for recurrent weight learning suffers from issues with locality (BPTT [1]), resource scaling (RTRL [2]), or parameter scales (FORCE [3]). To alleviate these, we introduce dendritic computation and a static structural scaffold to our recurrent networks. Leveraging this, our always-on local plasticity rule carves out strong attractors which generate the target activation sequences. We show that with few neurons, our model learns to reproduce complex non-Markovian sequences robustly despite external disturbances.
Methods
Our network contains two populations of structured neurons with somatic and dendritic compartments and leaky-integrator dynamics that integrate presynaptic inputs (Fig. 1a1). Output rates are computed as non-linear functions of the voltage. During development, a sparse scaffold of static somato-somatic connections with random delays is formed (Fig. 1a2,3). A teacher nudges output neurons towards a target pattern, and the somato-somatic scaffold transports this signal throughout the network. The dense, plastic, and randomly delayed somato-dendritic weights (Fig. 1a4) use these signals to adapt based on a local error-correcting learning rule [4]. This gives rise to a robust dynamical attractor which generates the correct output pattern in the absence of a teacher.
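A toy sketch of the local error-correcting idea behind rule [4] (dendritic prediction of somatic activity, in the style of Urbanczik & Senn 2014): the dendrite learns to reproduce the nudged somatic rate from presynaptic input. All constants, the nonlinearity, and the teacher signal are illustrative assumptions, not the ELiSe model:

```python
import numpy as np

rng = np.random.default_rng(9)
n_in, T, dt = 50, 5000, 1e-3
w = rng.standard_normal(n_in) * 0.1       # plastic somato-dendritic weights
eta = 0.02                                # learning rate

def phi(u):
    """Rate nonlinearity (logistic, an assumption)."""
    return 1.0 / (1.0 + np.exp(-u))

for t in range(T):
    r_pre = rng.random(n_in)              # presynaptic rates
    v_dend = w @ r_pre                    # dendritic prediction
    # soma: attenuated dendritic drive plus a teacher nudging signal
    u_soma = 0.7 * v_dend + 0.3 * np.sin(2 * np.pi * t * dt)
    err = phi(u_soma) - phi(v_dend)       # local, always-on error signal
    w += eta * err * r_pre                # error-correcting weight update

print(abs(err))  # prediction error shrinks as the dendrite learns
```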
Results
We demonstrate our model's ability to learn complex, non-Markovian sequences by exposing it repeatedly to a sample of Beethoven's "Für Elise" (Fig. 1b). We find that learning the recurrent weights is critical by showing that our model outperforms a same-size reservoir, both in its ability to learn and then sustain a pattern during replay (Fig. 1c). Next, we demonstrate robust learning across large ranges of the network parameter space. Further, despite severe temporary disruptions of the output population activity during pattern replay, the network is able to recover a correct replay of the learned pattern. Finally, we show that our network is able to extract the denoised signal from noisy target activities.
Discussion
Compared to other models of sequence learning in cortex, we suggest that ours is more resource-efficient, more biologically plausible, and, in general, more robust. It starts with only a sparse, random connection scaffold generating weak and unstructured activity. We show that this is enough for local plasticity to extract useful information in order to imprint strong attractor dynamics, in a manner that is robust to parameter variability and external disturbance. Unlike other approaches, learning in our networks is phaseless and is not switched off during validation and replay.




Figure 1. (a) Development and learning in ELiSe. (a1) Sparse somato-somatic scaffold based on p and q (a2) with interneuron driven inhibition (a3). Dense somato-dendritic synapses (green) adapted during learning (a4). (b) Learning in early, intermediate and final stages (teacher removal at red line). (c) Learning accuracy and stability during learning and replay compared to an equivalent reservoir.
Acknowledgements
We thank Richard Hahnloser and his lab for valuable feedback on learning in songbirds. We gratefully acknowledge funding from the European Union for the Human Brain Project (grant #945539) and Fenix Infrastructure resources (grant #800858), the Swiss National Science Foundation (grants #310030L\_156863 and #CRSII5\_180316) and the Manfred Stärk Foundation.


References
[1] Werbos, Paul J. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10), 1550-1560.
[2] Marschall, Owen, Kyunghyun Cho, and Cristina Savin (2020). A unified framework of online learning algorithms for training recurrent neural networks. Journal of Machine Learning Research, 21(135), 1-34.
[3] Sussillo, David, and Larry F. Abbott (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4), 544-557.
[4] Urbanczik, Robert, and Walter Senn (2014). Learning by the dendritic prediction of somatic spiking. Neuron, 81(3), 521-528.


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P149: Ion Channel Contributions to Spike Timing Precision in Computational Models of CA1 Pyramidal Neurons: Implications for Channelopathies
Monday July 7, 2025 16:20 - 18:20 CEST
P149 Ion Channel Contributions to Spike Timing Precision in Computational Models of CA1 Pyramidal Neurons: Implications for Channelopathies


Anal Kumar*1, Upinder S. Bhalla1

1National Centre for Biological Science, Tata Institute of Fundamental Research, Bangalore, India

*Email: analkumar@ncbs.res.in
Introduction

Precise neuronal spike timing is essential for encoding [1,2], phase coding [3,4], and spike-timing-dependent plasticity (STDP) [5,6]. Disruptions in spike timing precision (SpTP) are linked to disorders such as auditory processing disorder [7] and autism spectrum disorder (ASD) [8,9]. These conditions are also associated with channelopathies [8], yet the specific contributions of different ion channels to SpTP remain unclear. In this study, we use computational models of CA1 pyramidal neurons to systematically examine how ion channel overexpression and underexpression affect SpTP, providing insights into disease mechanisms and potential therapeutic targets.


Methods
We constructed data-driven, conductance-based models of CA1 pyramidal neurons, incorporating realistic electrotonic, passive, and active features based on experimental recordings. Twelve ion channel subtypes were included, with kinetics derived from prior studies. To evaluate SpTP, we analyzed the coefficient of variation of inter-spike intervals and jitter slope across multiple trials of tonic 150 pA current injections. Gaussian noise was added to these current injections to simulate physiological noise. To determine the impact of early vs late activating ion channels on SpTP, we assessed SpTP separately for initial and later spikes in the spike train.
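A minimal sketch of the two precision metrics on synthetic trials: the coefficient of variation of inter-spike intervals and a slope of spike-time jitter along the train; the function names, the noise model, and the exact definition of jitter slope here are illustrative assumptions:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals, one trial."""
    isis = np.diff(np.sort(spike_times))
    return isis.std() / isis.mean()

def jitter_slope(trials):
    """Slope of spike-time jitter (SD across trials) against spike
    index: one simple way to quantify how precision degrades along
    the train. trials: spike-time arrays with equal spike counts."""
    spikes = np.vstack(trials)               # (trials, spikes)
    jitter = spikes.std(axis=0)              # SD of each spike's time
    return np.polyfit(np.arange(jitter.size), jitter, 1)[0]

# Hypothetical trials: jitter grows along the spike train
rng = np.random.default_rng(4)
base = np.cumsum(rng.uniform(8, 12, 20))     # spike times in ms
trials = [base + rng.standard_normal(20) * (0.1 + 0.05 * np.arange(20))
          for _ in range(30)]
print(isi_cv(trials[0]), jitter_slope(trials))
```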


Results
Due to heterogeneity in the Gbar of ion channels across models, individual models exhibited variable effects of Gbar on SpTP. However, some global trends emerged:
● Initial spikes in the action potential train: SpTP negatively correlated with HCN and persistent sodium (Na_P) channels, while Kv3.1 showed a positive correlation. Transient sodium (Na_T) channels exhibited a non-monotonic relationship.
● Later spikes in the action potential train: SpTP negatively correlated with Na_P, whereas Kv3.1, K_SK, K_BK, and K_P showed a positive correlation.


Other channels, including K_P, K_T, K_M, K_D, and calcium channels (LVA, HVA), showed no significant impact on SpTP across trials.


Discussion

Previous studies have reported increased K_SK currents and reduced SpTP of later spikes in Fragile X Syndrome (FXS) [8]. Our findings corroborate this by demonstrating a positive correlation between K_SK Gbar and SpTP of later spikes, suggesting that K_SK upregulation may contribute to impaired temporal precision in FXS. Additionally, our study identifies potential therapeutic targets, such as Na_P channel blockade, which may help counteract the SpTP deficits observed in FXS. Further analysis of these models will help uncover the underlying mechanisms driving these correlations, shedding light on the role of ion channel dysfunction in neurodevelopmental disorders.





Acknowledgements
We thank NCBS, TIFR and Department of Atomic Energy, Government of India, under project identification No. RTI 4006 for funding. Special thanks to Dr. Deepanjali Dwivedi and Anzal KS for the raw experimental recordings. Thanks to NCBS animal house, Imaging facility, super computing facility at NCBS and members of Bhalla Lab.
References
1. https://doi.org/10.1126/science.1149639
2. https://doi.org/10.1103/PhysRevLett.80.197
3. https://doi.org/10.1038/nature02058
4. https://doi.org/10.1002/hipo.450030307
5. https://doi.org/10.1523/JNEUROSCI.18-24-10464.1998
6. https://doi.org/10.1126/science.275.5297.213
7. https://doi.org/10.1016/j.heares.2015.06.014
8. https://doi.org/10.1523/ENEURO.0217-19.2019
9. https://doi.org/10.1016/j.neuron.2017.12.043


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P150: Network vs ROI Perspectives: Brain Connectivity Analysis using Complex Principal Component Analysis
Monday July 7, 2025 16:20 - 18:20 CEST
P150 Network vs ROI Perspectives: Brain Connectivity Analysis using Complex Principal Component Analysis

Puneet Kumar*1†, Alakhsimar Singh2†, Xiaobai Li1,3, Shella Keilholz4, Eric H. Schumacher5


1University of Oulu, Finland
2National Institute of Technology Jalandhar, India
3Zhejiang University, China
4Emory University, USA
5Georgia Institute of Technology, USA

†Equal Contribution
*Email: puneet.kumar@oulu.fi

Introduction: We implement Complex Principal Component Analysis (CPCA) [1] for brain connectivity analysis. It largely reproduces traditional Quasi-Periodic Patterns (QPP)-like activity [2] and handles tasks of various lengths, whereas QPP struggles with shorter tasks. We present network- and ROI-level observations for Human Connectome Project (HCP) data comprising four 15-min rest scans (TR = 0.72 s) and seven tasks (1 hour total) [3]. Our focus is on the Task-Positive Network (TPN), defined as the Dorsal Attention Network (DAN) plus the Fronto-Parietal Network (FPN), and the Default Mode Network (DMN). Our contributions are the CPCA implementation and the dual (network- and ROI-level) analysis. The implementation code is at github.com/MIntelligence-Group/DBCATS.
Methods: The data was preprocessed using the Configurable Pipeline for the Analysis of Connectomes (C-PAC) [4], including motion and slice-timing correction, normalization to MNI space, and band-pass filtering (0.01–0.1 Hz). We focus on the working memory (0-back/2-back) task with 405 frames/run. Each run has eight 42.5 s task blocks (10 trials of 2.5 s), four 15 s fixation blocks, and 2 s stimuli followed by a 500 ms ITI. The DMN (36 ROIs), DAN (33 ROIs), and FPN (30 ROIs) were defined using the 7-network parcellation [5]. We adapted CPCA for fMRI by applying the Hilbert transform to introduce a 90° phase shift, capturing amplitude and phase. Seven principal components (PCs) were extracted to reconstruct the dominant activity patterns.
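A compact sketch of this CPCA pipeline, assuming band-passed ROI time series: the Hilbert transform supplies the 90° phase-shifted copy, and eigenvectors of the complex covariance give the spatiotemporal patterns. The function name and the eigh-based decomposition are one straightforward realization, not necessarily the released implementation:

```python
import numpy as np
from scipy.signal import hilbert

def cpca(X, n_components=7):
    """Complex PCA of band-passed BOLD time series X (time, n_rois)."""
    Z = hilbert(X, axis=0)                 # complex analytic signal
    Z = Z - Z.mean(axis=0)
    C = Z.conj().T @ Z / Z.shape[0]        # Hermitian covariance matrix
    w, V = np.linalg.eigh(C)               # ascending eigenvalues
    V = V[:, ::-1][:, :n_components]       # leading components
    pcs = Z @ V                            # complex PC time courses
    return np.abs(pcs), np.angle(pcs), V   # amplitude, phase, spatial maps

# Hypothetical data: 405 frames, 99 ROIs (DMN + DAN + FPN)
rng = np.random.default_rng(5)
amp, phase, maps = cpca(rng.standard_normal((405, 99)))
```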
Results: Fig. 1(a) and 1(e) display Blood Oxygenation Level Dependent (BOLD) activation at the global network level for rest and task states. Correlation values between DMN and DAN are -0.99, and between DMN and FPN -0.91, as per Fig. 1(i) and 1(j). Fig. 1(b–d) depicts local ROI-level BOLD activation (from both left and right hemispheres of the brain) during rest, and Fig. 1(f–h) during task. In the rest state, FPN shows 440 positive and 618 negative correlations with DMN, and DAN shows 531 and 629. For the task state, FPN has 439 positive and 620 negative correlations with DMN, and DAN has 532 and 629. Comparing Fig. 1(k–n) indicates slightly shifted connectivity patterns from rest to task, reflecting changes in DMN, DAN, and FPN signals.
Discussion: At the network level, DMN shows anticorrelation with both DAN (-0.99) and FPN (-0.91), as depicted in Fig. 1(i,j). At the ROI level, 44% (1972) of DMN-TPN pairs are positively correlated, while 56% (2496) are negative, indicating local differences. Correlations become more negative from rest to task, though the changes are modest. Fig. 1(k–n) shows these changes, highlighting how brain connections adapt at the ROI level and exhibit task-dependent shifts. To our knowledge, this is the first use of CPCA as a brain connectivity analysis method comparing rest and task. We aim to extend our implementation to other datasets and welcome feedback to refine our approach.



Figure 1. Network-level and ROI-level BOLD time series for DMN, DAN, and FPN during rest (a–d) and task (e–h). Network-level correlation connectivity matrices (CCM) (i, j). ROI-level CCMs for DMN–DAN regions (k, l) and DMN-FPN regions (m, n) for rest and task. (a, e) show average PC1 activity at network level, while (b–d, f–h) show PC1 activity at ROI level, with different colors denoting different ROIs.
Acknowledgements
The authors gratefully acknowledge the collaboration with the CoNTRoL Lab and GSU/GT Center for Advanced Brain Imaging at Georgia Institute of Technology, USA, and the Keilholz Mind Lab at Emory University, USA. We thank the CMVS International Research Visit Program 2024 for funding and the University of Oulu, Eudaimonia Institute, and CSC Finland for support and computational resources.
References
[1] Bolt, T.,... (2022). A Parsimonious Description of Global Functional Brain Organization in Three Spatiotemporal Patterns. Nature Neuroscience, 25(8), 1093-1103.
[2] Abbas, A.,... (2019). Quasi-Periodic Patterns Contribute to Brain Functional Connectivity. Neuroimage, 191, 193-204.
[3] Van Essen, D. C.,... (2012). The Human Connectome Project. Neuroimage, 62(4), 2222-2231.
[4] Craddock, C.,... (2013). Towards Automated Analysis of Connectomes. Front Neuroinform, 42 (10).
[5] Yeo, B. T.,... & Buckner, R. L. (2011). The Organization of Human Cerebral Cortex. Journal of Neurophysiology.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P151: Between near and far-fields: influence of neuronal morphology and channel density on EEG-like signals
Monday July 7, 2025 16:20 - 18:20 CEST
P151 Between near and far-fields: influence of neuronal morphology and channel density on EEG-like signals

Paula T. Kuokkanen*1, Richard Kempter1,2,3, Catherine E. Carr4, Christine Köppl5

1Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany
2Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
3Einstein Center for Neurosciences Berlin, 10115 Berlin, Germany
4Department of Biology, University of Maryland College Park, College Park, MD 20742
5Department of Neuroscience, School of Medicine and Health Sciences, Research Center for Neurosensory Sciences and Cluster of Excellence “Hearing4all” Carl von Ossietzky University, 26129 Oldenburg, Germany

*Email: paula.kuokkanen@hu-berlin.de
Introduction

Both the near and far fields of extracellular neural recordings are well understood. The near field can be explained by models of ion channel activity in nearby compartments [1]. The far field can be approximated by current dipoles produced by the membrane currents of multicompartmental cells [2]. The dipole spanned between the dendrites and soma is typically assumed to be the basis of the electro-encephalography (EEG) signals of cortical pyramidal neurons [e.g. 3]; yet their somatic spikes can also be observed in the EEG [4]. Such potentials, measured relatively far away from the source but not strictly in the far field, depend strongly on the morphology of the cell, its ion channel concentrations, and the electrodes' positions [5].

Methods
We simulate single multi-compartment cells with the NEURON and LFPy packages to study their 'mid-field' potentials. We vary the neurons' simplified morphologies systematically and use combinations of channel densities to compare the mid-field potentials with the dipole moments of the cells. In particular, we study how far from the cell the far-field approximation remains valid, depending on the cell properties. We verify our results against experimental data [6]: EEG-like single-cell recordings from the auditory nerve and the auditory brainstem Nucleus Magnocellularis in the barn owl.
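A small numpy sketch contrasting the exact summed point-source potential with the far-field dipole approximation for a two-compartment source in an infinite homogeneous medium; the geometry, currents, and conductivity are illustrative assumptions, not the study's NEURON/LFPy models:

```python
import numpy as np

SIGMA = 0.3  # extracellular conductivity (S/m), a typical assumed value

def point_source_potential(I, src_pos, elec_pos):
    """Exact potential of compartmental point currents:
    phi = I / (4 pi sigma r), summed over sources."""
    r = np.linalg.norm(elec_pos - src_pos, axis=-1)
    return np.sum(I / (4 * np.pi * SIGMA * r))

def dipole_potential(p, dip_pos, elec_pos):
    """Far-field approximation: phi = p . r_vec / (4 pi sigma r^3)."""
    r_vec = elec_pos - dip_pos
    r = np.linalg.norm(r_vec)
    return p @ r_vec / (4 * np.pi * SIGMA * r ** 3)

# Two-compartment source (current conservation: -I at soma, +I at dendrite)
soma, dend = np.array([0, 0, 0.0]), np.array([0, 0, 500e-6])  # 500 um apart
I = np.array([-1e-9, 1e-9])                                   # amperes
p = I[1] * (dend - soma)                                      # dipole moment

for d in (1e-3, 5e-3, 10e-3):          # 1, 5, 10 mm electrode distance
    elec = np.array([0, 0, d])
    exact = point_source_potential(I, np.stack([soma, dend]), elec)
    approx = dipole_potential(p, (soma + dend) / 2, elec)
    print(f"{d*1e3:.0f} mm: exact {exact:.3e} V, dipole {approx:.3e} V")
```

The mismatch between the two columns at millimetre distances illustrates the 'mid-field' regime the abstract targets, where the dipole approximation has not yet taken over.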

Results
We observe that, as expected, the dendritic-somatic dipole can determine the far and mid-fields in pyramidal cell-like morphologies. Unexpectedly, we observe that a dipole moment caused by branching axons can have a similar amplitude to the dendritic dipole in mid and far fields. Furthermore, we show that under certain conditions a somatic spike — not necessarily related to any current dipole — can contribute to fields even at a distance of 10 mm from the soma. These results match with our experimental results from the owl.
Discussion
Common assumptions about the distances from a neuronal source where far-field conditions predominate may not hold. Depending on the neuron type, both the morphology and differential densities of active ion channels across cell compartments can play a large role in creating their fields at varying distances. The axonal arborizations, because activated simultaneously by a single spike, can create a dipole [7] with a surprisingly large contribution to the far fields as compared to the dendritic-somatic dipoles. Furthermore, large somata with high densities of active currents can contribute to the extracellular field at distances of even 1 cm, violating the usual far-field assumption.



Acknowledgements
We thank Ghadi ElHasbani for helpful discussions, and Hannah Schultheiss for preliminary modeling.
This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) grant nr. 502188599.
References
1. https://doi.org/10.1152/jn.00979.2005
2. https://doi.org/10.1016/j.neuroimage.2020.117467
3. https://doi.org/10.7554/eLife.51214
4. https://doi.org/10.1016/j.neuroimage.2014.12.057
5. http://doi.org/10.1097/00004691-199709000-00009
6. https://doi.org/10.1101/2024.05.29.596509
7. https://doi.org/10.7554/eLife.26106


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P152: Differing Strategies and Neural Representations in the Same Long-Term Information Encoding Task
Monday July 7, 2025 16:20 - 18:20 CEST
P152 Differing Strategies and Neural Representations in the Same Long-Term Information Encoding Task

Tomoki Kurikawa*1

1Department of Complex and Intelligence Systems, Future University Hakodate, Hakodate, Japan
*Email: kurikawa@fun.ac.jp

Introduction

Many cognitive tasks require maintaining information across trials, such as deterministic or probabilistic reversal learning tasks. In the deterministic reversal learning task [1], for instance, the pairing between sensory cues and behavioral outcomes reverses after a fixed number of trials. To perform such a task successfully, subjects have to track the number of elapsed trials to predict reversals accurately. However, the neural representation underlying such sustained memory processes remains poorly understood.



Methods
To uncover the representations underlying task performance, we built a simple recurrent neural network (RNN) model trained on a deterministic reversal learning task using machine learning techniques. We analyzed what representations emerged and how they were formed. In this task, there were two types of blocks, and depending on the block type, the network had to alternate between two outputs (Left and Right). Each block consisted of 10 trials, and the block type switched every 10 blocks. Notably, no explicit contextual cues were provided; the network had to track trial counts internally. The model was trained to produce correct outputs across 10 consecutive blocks.
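A minimal sketch of a target generator for this task structure, under one plausible reading of the block rules described above; the helper name make_task and the exact alternation scheme are illustrative assumptions:

```python
import numpy as np

def make_task(n_blocks=10, trials_per_block=10, switch_every=10):
    """Target outputs for the deterministic reversal task: the correct
    side is fixed within a block, alternates from block to block, and
    the block-type mapping reverses every `switch_every` blocks.
    No contextual cue is provided - only the trial stream."""
    targets = []
    for b in range(n_blocks):
        block_type = (b // switch_every) % 2   # which mapping is active
        side = (b + block_type) % 2            # 0 = Left, 1 = Right
        targets += [side] * trials_per_block
    return np.array(targets)

print(make_task(20)[:40])   # first four blocks of target outputs
```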


Results
We found that two distinct strategies emerged after learning 10 blocks: generalization and specification. In the generalization strategy, the network discovered the underlying rule of the task. Despite being trained on only 10 blocks, it could generalize and perform correctly beyond this limit. In contrast, in the specification strategy, the network was specifically trained to complete the 10-block task but was unable to extend its performance to a larger number of blocks, such as a 20-block task.
What representations underlie these different behaviors? Our analysis revealed that different neural representations support these distinct strategies. In the generalization strategy, certain neurons specifically encoded the number of trials within a block. Their activity gradually increased across trials, and when a threshold was reached, the network switched from one output to the other before the activity reset, indicating that these neurons tracked the number of trials within a block.
In contrast, in the specification strategy, no individual neurons encoded trial counts explicitly. Instead, this information was distributed across the neural population, implying a different mechanism for task execution.




Discussion
Our findings suggest that even when performing the same task, different strategies can emerge across subjects or animals. Depending on the adopted strategy, the way long-term information is encoded across trials also varies. This computational result provides new insights into how long-term information is represented in neural systems.





Acknowledgements
The present work is supported by Special Research Expenses at Future University Hakodate.
References
https://doi.org/10.1523/ENEURO.0172-24.2024
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P153: Efficient estimation of mutual-information rates from spiking data by maximum-entropy models
Monday July 7, 2025 16:20 - 18:20 CEST
P153 Efficient estimation of mutual-information rates from spiking data by maximum-entropy models

Tobias Kühn*1, Gabriel Mahuas1, Ulisse Ferrari1

1Institut de la Vision, Sorbonne Université, CNRS, INSERM, Paris, France

*Email: tkuehn@posteo.de
Introduction

Neurons in sensory systems encode stimulus information into their stochastic spiking response. This is quantified by the mutual-information rate (MIR): the mutual information between the activity of a spiking neuron and a (dynamical) stimulus, divided by time. The computation of the MIR is challenging because it requires the estimation of entropies, in particular those conditional on the stimulus. This is difficult in the realm of correlated, poorly sampled data, for which estimates are prone to biases.
Methods
We here present the moment-based mutual-information-rate approximation (Moba-MIRA), a computational method to estimate the MIR. It is based on the idea of taking into account the statistics of the activity in single time bins exactly, and of capturing the correlations of the activity between bins with a statistical model featuring pairwise interactions, similar to the Ising model of statistical physics. This resembles other maximum-entropy approaches employed in neuroscience; however, we do not restrict our spike counts to be binary, allowing the use of relatively large time bins. To estimate the entropies, we use a (Feynman) diagrammatic expansion in the covariances between the activities of all time bins [1,2,3].
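For context, a naive plug-in ('direct method') estimate of the MIR from repeated responses, whose limited-sampling bias is exactly what motivates this work; this sketch is not the Moba-MIRA estimator, and the data are synthetic:

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (naive) entropy in bits from discrete samples."""
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -np.sum(p * np.log2(p))

def plugin_mir(responses, dt):
    """Naive mutual-information rate (bits/s) from repeated responses
    to the same dynamical stimulus. responses: (repeats, time_bins) of
    spike counts; the time-bin index plays the role of the stimulus."""
    h_total = plugin_entropy(responses.ravel())
    h_noise = np.mean([plugin_entropy(responses[:, t])
                       for t in range(responses.shape[1])])
    return (h_total - h_noise) / dt   # biased upward for few repeats

# Hypothetical data: 80 repeats, 100 bins of 20 ms, rate tracks stimulus
rng = np.random.default_rng(7)
rates = 2 * (1 + np.sin(np.linspace(0, 6 * np.pi, 100)))
responses = rng.poisson(rates, size=(80, 100))
print(f"{plugin_mir(responses, 0.02):.1f} bits/s")
```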

Results
We test our method on artificial data from a generalized linear model mimicking the activity of retinal ganglion cells and demonstrate that it approximates the exact result in the well-sampled regime satisfactorily. Importantly, our method introduces only a limited bias even for sample sizes attainable in experiments, about 60 to 100 repetitions, allowing it to be applied to real data. Applying it to ex-vivo electrophysiological recordings from rat retinal ganglion cells (ON and OFF), stimulated by black-and-white checkerboards or randomly moving bars, we obtain information rates of about 2 to 20 bits/s per neuron, consistent with values from the literature.

Discussion
Tested on artificial data, Moba-MIRA outperforms the state-of-the-art method [4]: depending on the variant, clearly in speed with comparable precision, or in precision with comparable speed (see Fig. 1). We therefore believe that it can serve as an efficient and simple tool for the analysis of spiking data. In particular, it extends readily to populations of neurons, which will allow the study of collective effects in addition to effects arising from single-neuron dynamics.




Figure 1. a) Estimate of the MIR for artificial, retina-like data with state-of-the-art method by Strong et al. (histogram) and our approach. In the latter, we estimate the entropy conditional on the stimulus by a maximum-entropy model, for which we show the compute time in panel b.
Acknowledgements
We acknowledge ANR for financial support.
References
[1] Tobias Kühn and Moritz Helias. Expansion of the effective action around non-gaussian theories. Journal of Physics A: Mathematical and Theoretical, 51(37):375004, Aug 2018.
[2] Tobias Kühn and Frédéric van Wijland. Diagrammatics for the inverse problem in spin systems and simple liquids. Journal of Physics A: Mathematical and Theoretical, 56(11):115001, Feb 2023.
[3] Gabriel Mahuas, Olivier Marre, Thierry Mora, and Ulisse Ferrari. Small-correlation expansion to quantify information in noisy sensory systems. Phys. Rev. E, 108:024406, Aug 2023.
[4] Steven P. Strong, Roland Koberle, Rob R. de Ruyter van Steveninck, and William Bialek. Entropy and information in neural spike trains. Physical Review Letters, 80:197-200, Jan 1998.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P154: Comparison of derivative-based and correlation-based methods to estimate effective connectivity in neural networks
Monday July 7, 2025 16:20 - 18:20 CEST
P154 Comparison of derivative-based and correlation-based methods to estimate effective connectivity in neural networks


Niklas Laasch1, Wilhelm Braun1,2, Lisa Knoff1, Jan Bielecki2, Claus C. Hilgetag1,3


1Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany


2Faculty of Engineering, Department of Electrical and Information Engineering,Kiel University, Kaiserstrasse 2, 24143, Kiel, Germany

3Department of Health Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA, 02215, USA



E-Mail: niklas.laasch@posteo.de
Introduction
Inferring effective connectivity in neural systems from observed activity patterns remains a challenge in neuroscience. Despite numerous techniques being developed, no universally accepted method exists for determining how network nodes mechanistically affect one another. This limits our understanding of neural network structure and function. We focus on purely excitatory networks of small to intermediate size with continuous dynamics to systematically compare different connectivity estimation approaches, aiming to identify the most reliable methods for specific network characteristics.
Methods
We used the Hopf neuron model with known ground-truth structural connectivity to generate synthetic neural activity data. Multiple connectivity inference algorithms were applied to reconstruct the system's connectivity matrix, including lagged cross-correlation (LCC) [1], derivative-based covariance analysis (DDC) [2], and transfer entropy methods. We varied parameters controlling bifurcation, noise, and delay distribution to test method robustness. Forward simulations using estimated connectivity matrices were performed to evaluate each method's ability to recreate observed activity patterns. Finally, we applied promising methods to empirical data from C. elegans.
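A minimal numpy sketch of a lagged cross-correlation estimator, assuming a peak-over-positive-lags readout; the toy three-node chain and all parameters are illustrative assumptions, not the study's Hopf-model setup:

```python
import numpy as np

def lcc_connectivity(X, max_lag=10):
    """Lagged cross-correlation (LCC) estimate of directed connectivity:
    C[i, j] is the peak correlation between source i and target j over
    positive lags (i leading j). X: (time, n_nodes)."""
    T, n = X.shape
    Z = (X - X.mean(0)) / X.std(0)
    C = np.zeros((n, n))
    for lag in range(1, max_lag + 1):
        # correlation of node i at time t with node j at time t + lag
        R = Z[:-lag].T @ Z[lag:] / (T - lag)
        C = np.maximum(C, R)
    np.fill_diagonal(C, 0.0)
    return C

# Hypothetical 3-node chain 0 -> 1 -> 2 with delays, driven by noise
rng = np.random.default_rng(6)
T, x = 5000, np.zeros((5000, 3))
for t in range(2, T):
    x[t, 0] = 0.5 * x[t-1, 0] + rng.standard_normal()
    x[t, 1] = 0.5 * x[t-1, 1] + 0.8 * x[t-1, 0] + rng.standard_normal()
    x[t, 2] = 0.5 * x[t-1, 2] + 0.8 * x[t-2, 1] + rng.standard_normal()
print(np.round(lcc_connectivity(x), 2))
```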
Results
In sparse non-linear networks with delays, combining LCC with DDC analysis provided the most reliable connectivity estimation. LCC performed comparably to transfer entropy in linear networks but at significantly lower computational cost. Performance was optimal in small sparse networks and decreased in larger, denser configurations. With the Hopf model, LCC-based connectivity estimates yielded higher trace-to-trace correlations than derivative-based methods for sparse noise-driven systems. When applied to C. elegans neural data, LCC outperformed more computationally expensive methods, including a reservoir computing approach.
Discussion

Our findings demonstrate that a comparatively simple method - lagged cross-correlation - can reliably estimate directed effective connectivity in sparse neural systems despite spatio-temporal delays and noise. This has significant implications for biological research scenarios where only neuronal activity, but not connectivity or single-neuron dynamics, is observable. We provide concrete suggestions for effective connectivity estimation in such common research scenarios. Our work contributes to bridging the gap between observed neural activity and underlying network structure in neuroscience.



Acknowledgements
The authors would like to thank Kayson Fakhar, Alexander Schaum, Fatemeh Hadaeghi, Arnaud Messé, Gorka Zamora-López and Heike Siebert for useful comments.
References
[1] 10.1038/s41598-025-88596-y
[2] 10.1073/pnas.2117234119
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P155: Bayesian Modelling of Explicit and Implicit Timing
Monday July 7, 2025 16:20 - 18:20 CEST
P155 Bayesian Modelling of Explicit and Implicit Timing

Gianvito Laera*1,2,3,4, Matthew Vowels5,6,7, Tasnim Daoudi1,2, Richard Andrè1,2, Sam Gilbert8, Sascha Zuber2,3, Matthias Kliegel1,2,3, Chiara Scarampi2,3

1Cognitive Aging Lab (CAL), Faculty of Psychology and Educational Sciences, University of Geneva, Switzerland
2Centre for the Interdisciplinary Study of Gerontology and Vulnerability, University of Geneva, Switzerland
3LIVES, Overcoming Vulnerability: Life Course Perspective, Swiss National Centre of Competence in Research, Switzerland
4University of Applied Sciences and Arts Western Switzerland HES-SO, Geneva School of Health Sciences, Geneva Musical Minds lab (GEMMI lab), Geneva, Switzerland
5Institute of Psychology, University of Lausanne, Switzerland
6The Sense Innovation and Research Center, CHUV, Switzerland
7Centre for Vision, Speech and Signal Processing, University of Surrey, Switzerland
8Institute of Cognitive Neuroscience, University College London, London, United Kingdom


*Email: gianvito.laera@unige.ch

Introduction
Time perception supports adaptive behavior by allowing anticipation of critical events [1]. Explicit timing involves conscious estimation of durations (e.g., interval reproduction), typically modeled by Bayesian frameworks combining noisy sensory evidence with prior expectations [2]. Implicit timing emerges indirectly through tasks like foreperiod paradigms, relying on neural or motor strategies without temporal awareness. Historically treated separately, explicit tasks engage cortico-striatal circuits, whereas implicit tasks involve cerebellar or parietal regions. We hypothesized that a unified Bayesian model with a shared internal clock parameter (θ) could bridge explicit and implicit timing abilities.
Methods
Forty-five psychology students performed four within-participant tasks: Explicit Motor (spontaneous motor response), Implicit Motor (simple reaction time), Explicit Temporal (interval reproduction), and Implicit Temporal (stimulus prediction). A hierarchical Bayesian model estimated an internal clock rate parameter (θ), reflecting subjective timing (θ=1 accurate; θ>1 slower; θ<1 faster clock), alongside parameters modeling task-specific variability and individual learning effects. Explicit tasks involved duration reproduction without feedback; implicit tasks involved temporal anticipation of a stimulus. Markov Chain Monte Carlo (MCMC) sampling via Stan was used for parameter estimation.
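For illustration, a minimal Python sketch of the clock-rate idea (a grid-approximation posterior on synthetic reproductions, not the authors' hierarchical Stan model; the targets, noise level, and prior below are assumptions):

import numpy as np

rng = np.random.default_rng(1)
targets = np.tile([0.5, 1.0, 1.5], 40)       # hypothetical target durations (s)
theta_true, sigma = 0.8, 0.1                 # theta < 1: internal clock runs fast
reproduced = theta_true * targets * np.exp(sigma * rng.standard_normal(targets.size))

theta_grid = np.linspace(0.5, 1.5, 1001)     # grid posterior with lognormal likelihood
log_lik = np.array([-0.5 * np.sum((np.log(reproduced) - np.log(th * targets)) ** 2) / sigma**2
                    for th in theta_grid])
log_prior = -0.5 * (theta_grid - 1.0) ** 2 / 0.2**2   # weak prior centred on an accurate clock
post = np.exp(log_lik + log_prior - (log_lik + log_prior).max())
post /= post.sum()
print("posterior mean theta:", round(float(theta_grid @ post), 3))   # close to 0.8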
Results
The Bayesian model indicated participants’ internal clocks ran faster than objective time (μθ ≈ 0.80), explaining interval overestimation at short durations and confirming a regression-to-the-mean effect. Individual differences in θ were significant (τθ ≈ 0.20); participants with fewer practice trials had internal clocks closer to accuracy, indicating efficient learning. Explicit tasks had higher variability than implicit tasks, confirming greater cognitive uncertainty. Implicit tasks showed typical foreperiod effects (longer expected intervals slightly slowed reaction times, a ≈ 0.3). Explicit and implicit timing shared moderate variance (r ≈ 0.45), and network analysis suggested θ centrally bridged both timing domains.
Discussion
The findings support a unified Bayesian model, highlighting a shared internal clock mechanism underlying explicit and implicit timing. The internal clock parameter (θ) explained significant individual differences across tasks, supporting recent integrative views proposing partially overlapping neural substrates [3, 4]: a common cognitive mechanism (possibly striatal-thalamo-cortical circuits) provides duration information that is utilized differently in explicit versus implicit tasks. Task-specific differences also comprise additional factors (e.g., cognitive strategies, attention, and memory load) that future versions of the model should include. The model also holds promise for explaining timing difficulties in clinical and aging populations.



Acknowledgements
None
References
1. https://doi.org/10.1016/j.neuropsychologia.2012.08.017
2. https://doi.org/10.1016/j.tics.2013.09.009
3. https://doi.org/10.1016/j.cobeha.2016.01.004
4. https://doi.org/10.1016/j.tins.2004.10.007
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P156: Population decoding of visual motion direction in marmoset monkey V1: effects of uncertainty
Monday July 7, 2025 16:20 - 18:20 CEST
P156 Population decoding of visual motion direction in marmoset monkey V1: effects of uncertainty

Alexandre C. Lainé*1, Sophie Denève1, Nicholas J. Priebe2, Guillaume S. Masson1, Laurent U. Perrinet1

1Institut de Neurosciences de la Timone, UMR 7289, CNRS - Aix-Marseille University, Marseille, France.
2Section of Neurobiology, School of Biological Sciences, University of Texas at Austin, Austin, TX, USA.

*Email: alexandre.laine@univ-amu.fr
Introduction

Studying the internal representation of information in the primary visual cortex (V1) is crucial to understanding how we perceive the external world. Research on 2D motion direction in non-human primates [1,2,3], in particular when displaying naturalistic stimuli like MotionClouds [4], reveals substantial diversity and multiple mechanisms within the neuronal population [5]. This project aims to examine how a large population of V1 neurons encodes stimulus direction by explicitly titrating the precision in the orientation and spatial frequency domains.

Methods
Activity of several hundred neurons was recorded using Neuropixels 2.0 technology [6] in area V1 of an anesthetized marmoset monkey while MotionClouds were presented for eight directions and two precision levels. We used a decoding method to analyze the representation of motion direction in marmoset V1, focusing on the effects of uncertainty on the population code. The decoding method optimizes the weights of a logistic regression to achieve optimal decoding accuracy on a training set. Training can be conducted (1) on a broad time window, (2) by applying temporal generalization [7], or (3) after reducing dimensionality with dPCA [8].
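For illustration, a minimal Python sketch of the temporal-generalization step on synthetic trials (trial counts, signal structure, and the train/test split are assumptions, not the recorded data):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_neurons, n_bins = 160, 50, 20
y = rng.integers(0, 2, n_trials)                 # two directions for simplicity
X = rng.standard_normal((n_trials, n_neurons, n_bins))
X[:, :10, 5:] += y[:, None, None] * 0.8          # direction signal appears from bin 5

train = rng.random(n_trials) < 0.7
acc = np.zeros((n_bins, n_bins))                 # train-time x test-time accuracy matrix
for t_train in range(n_bins):
    clf = LogisticRegression(max_iter=1000).fit(X[train][:, :, t_train], y[train])
    for t_test in range(n_bins):
        acc[t_train, t_test] = clf.score(X[~train][:, :, t_test], y[~train])
print(acc.round(2))   # a square block of high accuracy indicates a stable code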
Results
After training on broad windows, analysis of the optimised weights revealed two types of population representations: transient and sustained. These representations differ in their distributions across cortical layers, confirming earlier results obtained in another species [5], and are modulated by the level of orientation precision. The accuracy measured on the test set revealed, first, that a broad spatial frequency distribution leads to better decoding performance and, second, that the precision of the orientation is a critical factor in the representation of motion direction. Indeed, high precision in orientation leads to the aperture problem, and thus to ambiguity in the representation of motion direction. Temporal generalization confirms a stable representation. Projecting neuronal activity onto 10 dPCA components without affecting accuracy demonstrates that the information may be represented in a low-dimensional manifold.

Discussion
In summary, this decoding method clarifies how directional information is represented and modulated by precision in marmoset V1. The coexistence of transient and sustained representations indicates distinct functional roles across cortical layers. Temporal generalization confirms that the neuronal population maintains a stable encoding of direction. Reducing dimensionality while preserving precision implies that a small set of components can capture the essential features of neuronal activity, enabling the exploration of various projection methods to optimize decoding. Moreover, the results suggest that orientation precision could be a major factor in shaping the interplay between orientation and direction.




Acknowledgements
This work was supported by ANR-NSF CRCNS grant “PrioSens” N° ANR-20-NEUC-0002 attributed to G.S.M., N.J.P. and L.U.P., and by a doctoral grant from the French Ministry of Higher Education and Research, awarded by Doctoral School 62 of Aix-Marseille University to A.L.
References
[1] https://doi.org/10.1113/jphysiol.1959.sp006308
[2] https://doi.org/10.1113/jphysiol.1968.sp008455
[3] https://doi.org/10.1523/JNEUROSCI.1335-12.2012
[4] https://doi.org/10.1152/jn.00737.2011
[5] https://doi.org/10.1038/s42003-023-05042-3
[6] https://doi.org/10.1126/science.abf4588
[7] https://doi.org/10.1016/j.tics.2014.01.002
[8] https://doi.org/10.7554/eLife.10989
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P157: Non-monotonic Subthreshold Information Filtering in a Coupled Resonator-Integrator System
Monday July 7, 2025 16:20 - 18:20 CEST
P157 Non-monotonic Subthreshold Information Filtering in a Coupled Resonator-Integrator System

Franquelin Lambert1

1Université de Moncton, Département de physique et d'astronomie
Introduction: Subthreshold dynamics play a key role in spike generation, and it is well known that some neurons exhibit a frequency preference when integrating subthreshold input – so-called resonators [1,2]. It has been shown, however, that despite the existence of subthreshold resonance, a single resonator neuron exhibits low-pass, i.e., monotonic, information filtering (as measured by the spectral coherence). In other words, in the subthreshold regime, band-pass impedance does not translate to band-pass information filtering. Instead, nonlinearities, such as spiking dynamics, are needed to create band-pass information transfer [3,4].



Methods: Here, we study a similar question in an electrically coupled pair of neurons. Our goal is to evaluate whether this resonance profile imparts non-trivial information filtering capabilities to the coupled system. We numerically simulate an electrically coupled integrate-and-fire and resonate-and-fire system in the subthreshold regime, and we investigate the stimulus-response spectral coherence function of the system under perturbation by coloured noise (an Ornstein-Uhlenbeck process).
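For illustration, a minimal Python sketch of the subthreshold simulation and coherence measurement (the linear resonator form, coupling strength, and all noise parameters are assumptions, not the poster's exact model):

import numpy as np
from scipy.signal import coherence

dt, T = 1e-4, 50.0
n = int(T / dt)
rng = np.random.default_rng(3)
tau_ou, sigma = 5e-3, 1.0                      # Ornstein-Uhlenbeck stimulus
tau1, tau2, tau_w, beta, gc = 10e-3, 5e-3, 50e-3, 3.0, 0.5

I = np.zeros(n); v1 = np.zeros(n); v2 = np.zeros(n); w2 = np.zeros(n)
for t in range(n - 1):
    I[t+1] = I[t] - dt * I[t] / tau_ou + sigma * np.sqrt(2 * dt / tau_ou) * rng.standard_normal()
    # integrator, electrically coupled to the resonator; small intrinsic noise
    v1[t+1] = v1[t] + dt * (-v1[t] + gc * (v2[t] - v1[t]) + I[t]) / tau1 \
              + 0.05 * np.sqrt(dt) * rng.standard_normal()
    # resonator: fast voltage v2 with a slow recovery variable w2
    v2[t+1] = v2[t] + dt * (-v2[t] - beta * w2[t] + gc * (v1[t] - v2[t])) / tau2 \
              + 0.05 * np.sqrt(dt) * rng.standard_normal()
    w2[t+1] = w2[t] + dt * (v2[t] - w2[t]) / tau_w

f, Cxy = coherence(I, v2, fs=1 / dt, nperseg=2**13)
band = (f > 1) & (f < 200)
print("coherence minimum near %.1f Hz" % f[band][np.argmin(Cxy[band])])

The intrinsic noise terms matter here: without them, a single-source linear system would have coherence identically one, and no filtering structure would be visible.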

Results: For electrical coupling between a resonator and an integrator, we show that a Fano-like resonance profile appears in the impedance, i.e., a narrow, asymmetric peak with anti-resonance [5]. Moreover, we observe that the coherence function is non-monotonic, with a minimum around the frequency of the opposite neuron.

Discussion: This challenges the claim that neurons require nonlinearities to exhibit band-pass information filtering properties. This new perspective places information filtering in the context of connection motifs where a small number of resonators and integrators interact, rather than in the context of individual neurons.





Acknowledgements
None
References
[1] Izhikevich, E. M. (2007). Dynamical Systems in Neuroscience. MIT Press.
[2] https://doi.org/10.1016/S0893-6080(01)00078-8
[3] https://doi.org/10.1109/TMBMC.2016.2618863
[4] https://doi.org/10.1007/s10827-015-0580-6
[5] https://doi.org/10.1088/0031-8949/74/2/020
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P158: A unified model for estimating short- and long-term synaptic plasticity from stimulation-induced spiking activity
Monday July 7, 2025 16:20 - 18:20 CEST
P158 A unified model for estimating short- and long-term synaptic plasticity from stimulation-induced spiking activity

Arash Rezaei1,2, Mojtaba Madadi Asl3,4, Milad Lankarany*1,2,5


1Krembil Brain Institute, University Health Network, Toronto, ON, Canada
2Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
3School of Biological Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
4Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran, Iran
5Center for Advancing Neurotechnological Innovation to Application (CRANIA), Toronto, ON, Canada



*Email: milad.lankarany@uhn.ca
Introduction

Abnormal brain activity is the hallmark of several brain disorders such as Parkinson’s disease, essential tremor, and epilepsy [1,2]. Stimulation-induced reshaping of the brain’s networks through neuroplasticity may disrupt neural activity as well as synaptic connectivity and potentially restore healthy brain dynamics. Synaptic plasticity has been the target of invasive therapies, such as deep brain stimulation [3-5]. Mathematical frameworks were able to estimate short-term [6,7] and long-term [8] synaptic dynamics separately. However, the characterization of both short and long-term synaptic plasticity from spiking activity is crucial for understanding the underlying mechanisms and optimization of spatio-temporal patterns of stimulation.


Methods
We developed a novel synapse model that integrates short- and long-term plasticity into a unified framework wherein the postsynaptic neuron responds to both plasticity mechanisms. In the proposed model, the postsynaptic neuron is driven by both the STP synaptic current and the LTP synaptic weight at each step. To induce short- and long-term synaptic responses, presynaptic spike trains were applied for durations of a few hundred milliseconds (STP experiment) and hundreds of seconds (LTP experiment), respectively, to a single postsynaptic neuron. For the STP experiment, a single presynaptic spike train was used, whereas the LTP experiment involved 1000 presynaptic inputs. For both experiments, depressing STP synapses were used.
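For illustration, a minimal Python sketch of how a depressing Tsodyks-Markram-style STP state and an STDP-updated long-term weight can jointly set per-spike efficacy (all parameters and the fixed 5-ms pre-post pairing are assumptions, not the authors' model):

import numpy as np

rng = np.random.default_rng(4)
pre = np.cumsum(rng.exponential(1 / 20.0, 100))   # ~20 Hz Poisson presynaptic train (s)
post = pre + 0.005                                # toy pairing: post spikes 5 ms after pre

tau_rec, U = 0.5, 0.4                             # STP recovery constant, release fraction
tau_stdp, A_plus = 0.020, 0.01                    # STDP window and potentiation step
x, w, last_t = 1.0, 0.5, 0.0                      # resources, long-term weight
efficacy = []
for tp, tq in zip(pre, post):
    x = 1 - (1 - x) * np.exp(-(tp - last_t) / tau_rec)        # recovery since last spike
    efficacy.append(w * U * x)                                # unified drive: LTP weight x STP state
    x -= U * x                                                # depression: fraction U released
    w = min(w + A_plus * np.exp(-(tq - tp) / tau_stdp), 1.0)  # pre-before-post potentiation
    last_t = tp
print(f"per-spike efficacy: first {efficacy[0]:.3f} -> last {efficacy[-1]:.3f}; w = {w:.2f}")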
Results
Our results demonstrated that in the STP experiment, the unified model produced the same transient fluctuations in the membrane potential of the postsynaptic neuron as observed in the STP-only model. This is evident when comparing Fig. 1.A and B as we see the same pattern of behavior in the postsynaptic membrane potential. In the LTP experiment, we observed similar long-term distribution of the synaptic weights as in the model with only long-term synapses (Fig. 1.C). However, the depression was more pronounced in the unified model due to the concurrent influence of STP and LTP on the postsynaptic neuron. The number of synapses with lower weights increases with the addition of the depressive STP mechanism compared to the LTP-only model.
Discussion
These findings suggest that the integration of STP and STDP within a single synaptic framework can effectively capture both transient and long-lasting plasticity effects. Furthermore, such uniform modeling of STP and LTP enables the incorporation of various combinations of synaptic settings into a population of neurons. This can potentially enhance the biological plausibility and flexibility of the current stimulation-induced neural models.




Figure 1. Fig. 1. Results of the STP and LTP experiments. A) Input spike train, neural and synaptic behavior of a model with only STP after stimulation. B) Behavior of the unified model with both STP and LTP after stimulation. The postsynaptic neuron was stimulated for 1200 ms with a 20 Hz firing rate and a depressing STP synapse (Red lines: postsynaptic membrane potential, Blue dotted lines: STP synaptic c
Acknowledgements
NA
References
1. https://doi.org/10.1016/j.neuron.2006.09.020
2. https://doi.org/10.1371/journal.pcbi.1002124
3. https://doi.org/10.1002/ana.23663
4. https://doi.org/10.1002/mds.25923
5. https://doi.org/10.1016/j.brs.2016.03.014
6. https://doi.org/10.1371/journal.pcbi.1008013
7. https://doi.org/10.1371/journal.pone.0273699
8. https://doi.org/10.1162/neco_a_00883
9. https://doi.org/10.7554/eLife.47314
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P159: Neun, an efficient and customizable open-source library for computational neural modeling and biohybrid circuit design.
Monday July 7, 2025 16:20 - 18:20 CEST
P159 Neun, an efficient and customizable open-source library for computational neural modeling and biohybrid circuit design.

Angel Lareo*1, Alicia Garrido-Peña1, Pablo Varona1, Francisco B. Rodriguez1

1Grupo de Neurocomputación Biológica, Departamento Ingeniería Informática, Universidad Autónoma de
Madrid

*Email: angel.lareo@uam.es
Introduction

Computational models are an effective and convenient tool for theoretically complementing the experimental results obtained from living systems and thus understanding the brain’s complex functions. Computational simulation of neural behavior has expanded the potential of modeling studies. A wide range of tools is available in the neuroscience community for this purpose [1-6], and they have enhanced the ability of theoreticians to explain neural dynamics. Neun is a new, highly customizable, fast open-source framework designed for theoretical studies of single neurons, small circuits, and biohybrid circuit design [7-9].

Methods
Neun (github.com/gnb-UAM/neun) is an object-oriented, heavily templated C++ library, which ensures high-level abstraction and encapsulation. Neun’s main components are: (i) ModelConcept, which provides the foundation for synapse and neuron models (e.g., the Hodgkin-Huxley and Izhikevich paradigms); (ii) SystemWrapper, which defines general elements such as parameters, variables, and numerical precision; (iii) Integrator, which offers methods such as Euler and Runge-Kutta for numerical integration; and (iv) DifferentialNeuronWrapper, which combines models and integrators for simulation. Neun also uses a straightforward method for equation-to-code parsing to add new models, and it aims to provide compatibility with existing tools through a Python API.
Results
As a complement to existing tools and databases, Neun provides built-in samples of well-known neuron and synapse models that can easily be adapted by the user for effective implementations. It can be used as a template for fast prototyping, since it offers boilerplate code for novel modelers; users can then move from a black-box approach to the internals of the code. Moreover, because the library is written in C++, it is an attractive option for real-time applications (such as RTXI or embedded systems), as it demonstrates strong single-threaded computing performance even without parallelization. Neun has already been used in previous modeling studies [7-9] and has been tested for use in real-time experiments.
Discussion
We present Neun, an open-source C++ library for computational neural modeling and simulation, as a user-friendly complement and alternative to existing tools. Among the numerous tools for simulating neuron dynamics, there is a tendency toward increasing code-base complexity, which limits accessibility, especially for beginners. We believe Neun strikes a convenient compromise between usability and efficiency. This makes it well suited for researchers in neuroscience who do not necessarily have a background in computer science but are willing to learn progressively, and also for experimentalists who want to build biohybrid circuits from interacting living and model neurons and synapses.




Acknowledgements
This research was supported by grants PID2024-155923NB-I00, CPP2023-010818, PID2023-149669NB-I00,
PID2021-122347NB-I00 (MCIN/AEI and ERDF – “A way of making Europe”).
References
[1] https://doi.org/10.1007/s10827-006-7949-5
[2] https://doi.org/10.3389/neuro.11.011.2008
[3] https://doi.org/10.1038/srep18854
[4] https://doi.org/10.7554/eLife.47314
[5] https://doi.org/10.1007/s10827-016-0623-7
[6] https://doi.org/10.1016/j.neuron.2019.05.019
[7] https://doi.org/10.3389/fninf.2022.912654
[8] https://doi.org/10.1007/978-3-031-34107-6_43
[9] https://doi.org/10.1117/1.NPh.11.2.024308
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P160: Preservation of neural dynamics across individuals during cognitive tasks
Monday July 7, 2025 16:20 - 18:20 CEST
P160 Preservation of neural dynamics across individuals during cognitive tasks

Ioana Lazar*1, Mostafa Safaie1, Juan Alvaro Gallego1


1Department of Bioengineering, Imperial College London, London, UK


*Email: ioana.lazar20@imperial.ac.uk
Introduction

Different individuals from the same species have brains that have similar organisation but differ in the details of their cellular architecture. Yet, despite these idiosyncrasies, the way in which neurons from the same region co-modulate their activity during a given motor task is remarkably preserved across individuals [1]. Such preserved neural population “latent dynamics” likely arise from the behavioural similarity as well as species-specific constraints on network connectivity. Here we asked whether cognitive tasks that can be solved using different covert strategies could lead to more individual-specific latent dynamics.


Methods
We investigated the preservation of latent dynamics in the prefrontal cortex across macaque monkeys performing an associative memory task in which they had to select the target associated with an initial cue following a “working memory period” in which no information was presented [2]. We computed session-specific latent dynamics using principal component analysis and tested their preservation across individuals using both canonical correlation analysis, which tests for similarity in the geometrical properties of neural population activity, and dynamical systems approaches. We interpreted the differences in the preservation of latent dynamics based on the differences in decoding accuracy of task variables.
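For illustration, a minimal Python sketch of the PCA-plus-CCA comparison on two synthetic populations sharing the same latent dynamics (population sizes and noise levels are assumptions):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
t = np.linspace(0, 2 * np.pi, 500)
latent = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t)])   # shared dynamics
XA = latent @ rng.standard_normal((3, 80)) + 0.1 * rng.standard_normal((500, 80))
XB = latent @ rng.standard_normal((3, 60)) + 0.1 * rng.standard_normal((500, 60))

LA = PCA(n_components=3).fit_transform(XA)    # session-specific latent dynamics
LB = PCA(n_components=3).fit_transform(XB)
cca = CCA(n_components=3).fit(LA, LB)
UA, UB = cca.transform(LA, LB)
ccs = [np.corrcoef(UA[:, i], UB[:, i])[0, 1] for i in range(3)]
print("canonical correlations:", np.round(ccs, 3))   # near 1: preserved dynamics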
Results
Prefrontal cortex latent dynamics were less preserved across individuals than in previous studies of the motor system, especially during the working memory period, in which correlations were lower than during cue presentation and target selection. The level of preservation was strongly associated with how well the upcoming target's identity could be decoded, which varied across animals, hinting at potentially different cognitive strategies as the cause of the lower preservation. Finally, monkeys developed idiosyncratic fidgets that reflected their cognitive processes: removing components of the latent dynamics related to movement decreased both within-monkey decoding of task variables and the preservation of latent dynamics across monkeys.
Discussion
This study builds on previous work on the motor system to show that different individuals from the same species also produce preserved latent dynamics when engaged in the same cognitive task. When the decoding analysis suggested that monkeys were employing different cognitive strategies to solve the task (relying more on retrospective or prospective memory), the preservation of latent dynamics decreased, as would be expected if the latent dynamics reflected the underlying computations. Neural population latent dynamics can thus capture fundamental differences and similarities in neural computation across individuals during both sensorimotor and cognitive processes.





Acknowledgements

References
1. Safaie, M., Chang, J., Park, J., Miller, L. E., Dudman, J. T., Perich, M. G., & Gallego, J. A. (2023). Preserved neural dynamics across animals performing similar behaviour. Nature, 623, 765–771. https://doi.org/10.1038/s41586-023-06714-0
2. Tremblay, S., Testard, C., DiTullio, R. W., Inchauspé, J., & Petrides, M. (2022). Neural cognitive signals during spontaneous movements in the macaque. Nature Neuroscience, 26, 295–305. https://doi.org/10.1038/s41593-022-01220-4
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P161: Core-Peripheral Network Topology Facilitates Dynamic State Transitions in the Computational Modeling of Zebrafish Brain
Monday July 7, 2025 16:20 - 18:20 CEST
P161 Core-Peripheral Network Topology Facilitates Dynamic State Transitions in the Computational Modeling of Zebrafish Brain


Dongmyeong Lee*1,3, Yelim Lee1,2, Hae-Jeong Park1,2,3



1Yonsei University College of Medicine, Seoul, South Korea

2BK21 PLUS Project for Medical Science, Yonsei University College of Medicine, Seoul, South Korea

3Center for Systems and Translational Brain Science, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, South Korea


*Email: dmyeong@gmail.com





Introduction



Understanding how structural network topology shapes large-scale neural dynamics is a fundamental challenge in neuroscience. In particular, core-peripheral network topology is a crucial property, where highly connected "core" regions serve as hubs for integrating information across the brain, while sparsely connected "peripheral" regions support localized processing. Although many studies have explored the influence of core-peripheral topology on brain function at the macro-scale, the relationship between core-peripheral connectivity and dynamic information processing at the cellular level remains an open question. In this study, we investigate the impact of core-peripheral connectivity on whole-brain neural dynamics in zebrafish using computational modeling by integrating cellular-resolution structural connectivity data with a large-scale spiking neural network model.
Methods
To achieve this, we reconstructed a cellular-resolution structural connectivity network and extended it to develop a large-scale spiking neural network model consisting of 50,000 neurons across 72 distinct brain regions in the zebrafish brain. By systematically varying core-peripheral connection probabilities and coupling constants in the computational model, we examined their effects on neural activity fluctuations.
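For illustration, a minimal Python sketch of constructing a core-peripheral adjacency matrix with class-specific connection probabilities (network size and probability values are assumptions, far below the 50,000-neuron model):

import numpy as np

rng = np.random.default_rng(6)
n, n_core = 1000, 200                        # toy sizes, not the full model
is_core = np.zeros(n, dtype=bool); is_core[:n_core] = True
label = np.where(is_core, "core", "peri")

p = {("core", "core"): 0.10, ("core", "peri"): 0.05,
     ("peri", "core"): 0.05, ("peri", "peri"): 0.01}        # assumed probabilities
P = np.array([[p[(a, b)] for b in label] for a in label])   # P[i, j]: prob of i -> j
A = rng.random((n, n)) < P
np.fill_diagonal(A, False)
print("mean out-degree, core vs peripheral:",
      A[is_core].sum(1).mean(), A[~is_core].sum(1).mean())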
Results
Examining the cellular connectivity data, we found that the zebrafish brain exhibits a distinct core-peripheral network structure in which core regions play a critical role in dynamic signal propagation and network reconfiguration. Analysis of calcium imaging data revealed that the zebrafish brain dynamically transitions between multiple states, enabling adaptive and efficient information processing. Among the four connection types, i.e., peripheral-peripheral, core-peripheral, peripheral-core, and core-core, core-to-peripheral connections exhibited the highest functional fluctuations, closely mirroring the experimentally observed calcium imaging data.

Discussion

These findings highlight that core-peripheral connectivity serves as a key structural mechanism regulating state transitions, optimizing the balance between network modularity and integration. This suggests that large-scale brain networks leverage core-peripheral topology to dynamically regulate state transitions and maintain optimal neural computation. By integrating experimental data with computational modeling, this study provides novel insights into how structural connectivity underlies large-scale neural computations and functional flexibility in the brain.






Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C200621711)
References
Chen, X., Mu, Y., Hu, Y., Kuan, A. T., Nikitchenko, M., Randlett, O., ... & Ahrens, M. B. (2018). Brain-wide organization of neuronal activity and convergent sensorimotor transformations in larval zebrafish. Neuron, 100(4), 876-890.

He, B. J., Zempel, J. M., Snyder, A. Z., & Raichle, M. E. (2010). The temporal structures and functional significance of scale-free brain activity. Neuron, 66(3), 353-369.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P162: Pavlovian Conditioning of a Superburst Generating Neural Network for High-precision Perception of Spatiotemporal Sensory Information
Monday July 7, 2025 16:20 - 18:20 CEST
P162 Pavlovian Conditioning of a Superburst Generating Neural Network for High-precision Perception of Spatiotemporal Sensory Information

Kyoung J. Lee*¹, Jongmu Kim², Woojun Park¹, Inhoi Jeong¹

¹ Department of Physics, Korea University, Seoul, Korea
² Department of Mechanical Engineering, Korea University, Seoul, Korea


*Email:kyoung@korea.ac.kr
Introduction

How the brain perceives, learns, and distinguishes different spatiotemporal sensory information remains a fundamental yet largely unresolved question in neuroscience [1]. This study demonstrates how an initially random network of Izhikevich neurons can learn, encode, and differentiate time intervals ranging from milliseconds to tens of milliseconds with high temporal precision using a Pavlovian conditioning framework [2]. Notably, our findings highlight the potential role of superbursts in sensory perception, offering new insights into how neural circuits process temporal information.


Methods
Our network model comprises excitatory and inhibitory neurons with synaptic weights evolving through dopamine-modulated spike-timing-dependent plasticity. The conditioning protocol involves sequential electrical stimulation of, for example, three neuron subpopulations (S0, S1, S2) with specific time intervals (Δt1^cond., Δt2^cond.), referred to as “target triplet stimulation.” Despite the presence of various distracting stimuli with different time intervals, the network successfully encodes the target stimulation pattern and later responds to it by generating a distinctive population burst—a neuronal spiking avalanche—which acts as a test gauge for perception.
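For illustration, a minimal Python sketch of dopamine-modulated plasticity with an eligibility trace, in the spirit of reward-modulated STDP (spike statistics, trace constants, and reward times are assumptions, and the pairing detector is deliberately simplistic):

import numpy as np

dt, T = 1e-3, 2.0
n = int(T / dt)
rng = np.random.default_rng(7)
tau_c, tau_d = 1.0, 0.2               # eligibility and dopamine decay constants
w, c, d = 0.5, 0.0, 0.0
pre = rng.random(n) < 0.05            # ~50 Hz Bernoulli spike trains (toy)
post = rng.random(n) < 0.05
reward_steps = {500, 1500}            # dopamine pulses at 0.5 s and 1.5 s

for t in range(n):
    c -= dt * c / tau_c
    d -= dt * d / tau_d
    if t in reward_steps:
        d += 0.5
    if pre[t] and post[t]:            # simplistic coincidence-based pairing
        c += 0.01                     # pairing writes to the eligibility trace
    w += dt * c * d                   # weight changes only while dopamine is present
print("final weight:", round(w, 4))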

Results
During conditioning, the initially random network evolves into a feedforward structure [3] (Fig. 1A), where three subpopulations (S0, red; S1, blue; S2, green) self-organize according to the imposed time intervals (Δt1^cond., Δt2^cond.), effectively encoding temporal information into the network's morphology. With axonal conduction delays, the network generates superbursts, featuring multiple sub-burst humps, lasting tens of milliseconds (Fig. 1B). In a perception test, stimuli with varying time intervals and subpopulations produce distinct neuronal avalanches: for example, a network conditioned for Δt1^cond. = Δt2^cond. = 11 ms exhibits systematically varying burst patterns upon receiving different stimuli (Fig. 1B and 1C).



Discussion
These findings provide insight into how seemingly simple neural circuits can encode and process temporal information through structured population spiking activity. Perception in this system can utilize the shape of stimulus-triggered population bursts, allowing for superb temporal resolution (< 1 ms). Furthermore, incorporating axonal conduction delays enables the network to generate superbursts lasting tens of milliseconds, with intricate internal temporal structures, significantly enhancing its perceptual dynamic range. This learning framework can be extended to distinguish much more complex spatiotemporal sequences beyond the simple triplet examples explored in this study.





Figure 1. Fig. 1 Encoding different sets of (Δt_1^cond., Δt_2^cond.) into network morphology (A) and perceptual testing with various (Δt_1^test, Δt_2^test) combinations (B) and subpopulations (C) for the case of Δt_1^cond. = Δt_2^cond. = 11 ms. In (A), the colored crossbars mark the centroids of S0, S1, and S2, reflecting the topographic encoding of temporal information (six different cases are shown).
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00335928).
References
1.https://doi.org/10.1016/j.neuron.2020.08.020
2.https://doi.org/10.1093/cercor/bhl152
3.https://doi.org/10.1371/journal.pcsy.0000035
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P163: Distinct disinhibitory circuits differentially modulate the multisensory integration in a feedforward network
Monday July 7, 2025 16:20 - 18:20 CEST
P163 Distinct disinhibitory circuits differentially modulate the multisensory integration in a feedforward network

Seung-Youn Lee*1,2, Kyujin Kang1,3, Yebeen Yoon1,3, Jae-Ho Han2,3, Hyun Jae Jang1


1Korea Institute of Science and Technology, Seoul, Republic of Korea
2Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
3Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea


*Email: seungyounlee@korea.ac.kr

Introduction

Multisensory integration is a fundamental neural process that combines simultaneously presented unisensory inputs into a unified perception. For effective multisensory processing, cross-modal integration in neural networks must be dynamically modulated across cortical and subcortical regions in vivo [1,2]. One such mechanism is the disinhibitory circuit, which gates local information flow by inhibiting other inhibitory neurons. However, it is unclear whether disinhibitory circuits modulate multisensory integration locally or via long-range projections [3,4]. We therefore investigated how distinct disinhibition architectures differentially modulate long-range cross-modal integration, such as between the primary auditory cortex (A1) and the visual cortex (V1) [5].


Methods
To test this, we developed a computational feedforward network model incorporating in vivo-recorded spike trains from A1 and V1. The model consists of two four-layer columns, each representing a different sensory modality, converging onto an output layer (L_OUT). Neurons were modeled as single-compartment Hodgkin-Huxley-type neurons, capturing the electrophysiological properties of pyramidal (PYR), somatostatin-positive (SST+), and vasoactive intestinal polypeptide-positive (VIP+) neurons. The disinhibitory circuit was modeled such that VIP+ neurons inhibit SST+ neurons, which in turn inhibit PYR neurons. The first layer of each column received as input spike trains recorded in vivo from A1 and V1 during pure-tone and grating stimulation.
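For illustration, a minimal Python sketch of the MI_rate idea, computing mutual information between a discretised output rate and a stimulus label (the toy rates below are not the Hodgkin-Huxley network's output):

import numpy as np

rng = np.random.default_rng(8)
stim = rng.integers(0, 4, 2000)                     # four stimulus conditions
rates = stim * 2.0 + rng.standard_normal(2000)      # output rate carries stimulus info
rate_bins = np.digitize(rates, np.quantile(rates, [0.25, 0.5, 0.75]))

def mutual_information(x, y, k=4):
    pxy = np.histogram2d(x, y, bins=(k, k))[0]
    pxy /= pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

print("MI_rate (bits):", round(mutual_information(stim, rate_bins), 3))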

Results
We first assessed the role of disinhibitory circuits in multisensory integration. A network with disinhibition exhibited higher mutual information (MI_rate) between stimulus variables and the firing rates of L_OUT than one without, indicating enhanced transmission of integrated information. To investigate how different disinhibition-mediated inhibitory circuits modulate multisensory integration, we differentiated SST+ inhibitory circuits into intra-columnar feedback (intra-FBI), intra-columnar feedforward (intra-FFI), and cross-columnar feedforward inhibition (cross-FFI). When we fed in vivo spike patterns into these models, we found that MI_rate was highest with intra-FFI, whereas MI for spike timing was highest with intra-FBI, implying distinct roles in neural coding.

Discussion
Our results demonstrate that disinhibitory circuits facilitate multisensory integration by dynamically modulating long-range cross-modal interactions between A1 and V1. Specifically, our findings reveal that the intra-FFI circuit was associated with firing rates, whereas the intra-FBI circuit enhanced information encoded in spike timing. This suggests that distinct disinhibitory circuits selectively integrate multisensory information through different neural coding strategies. Taken together, these findings indicate that distinct disinhibitory network motifs dynamically modulate multisensory integration and may serve as a key mechanism in in vivo multisensory processing.





Acknowledgements
This research was supported by the KIST Institutional Program (2E33561) and the National R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2021R1C1C2012843). J.-H. Han was supported by the MSIT, Korea, under the ITRC support program (IITP-2025-RS-2022-00156225) supervised by the IITP and by the NRF grant (No. RS-2024-00415812).
References
1. https://doi.org/10.1038/ncomms12815
2. https://doi.org/10.1016/j.conb.2018.01.002
3. https://doi.org/10.1016/j.tins.2021.04.009
4. https://doi.org/10.1007/s10827-017-0669-1

5. https://doi.org/10.1016/j.neuron.2016.01.027
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P164: Event-Driven Financial Decision Making via Spiking Neural Networks: Neuromorphic-Inspired Approach
Monday July 7, 2025 16:20 - 18:20 CEST
P164 Event-Driven Financial Decision Making via Spiking Neural Networks: Neuromorphic-Inspired Approach

Tae-hoon Lee1, Hoon-hee Kim*2
1 Department of Data Engineering, Pukyong National University, Busan, South Korea
2 Department of Computer Engineering and Artificial Intelligence, Pukyong National University, Busan, South Korea
*Email: h2kim@pknu.ac.kr
Introduction

Spiking Neural Networks (SNNs) are well-suited for financial decision-making due to their ability to capture temporal dynamics and process information in an event-driven manner. In volatile markets, price movements can be sudden and irregular, making asynchronous event-based processing critical for timely responses. SNNs naturally handle such inputs, modeling temporal patterns more effectively than traditional neural networks. In this study, we integrate SNNs with a Genetic Algorithm (GA) for feature selection and parameter optimization, and a Support Vector Machine (SVM) for decision-making. This pipeline leverages the adaptive, event-driven processing of SNNs to improve stock market prediction and trading decisions.

Methods
For our experiments, we used historical data from the top 20 S&P 500 stocks, encompassing bull, bear, and volatile market conditions. Price data were transformed into multiple technical indicators (e.g., moving averages, RSI). A GA then optimized the indicator parameters and selected the most predictive features. Next, the time-series features were encoded into spike trains via rate coding with a fixed time window and fed into an SNN composed of Leaky Integrate-and-Fire neurons. The SNN processed the temporal patterns, and its spiking outputs were summarized (e.g., as spike counts over time). These features were then passed to an SVM for final classification of the trading action.
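For illustration, a minimal Python sketch of the rate-coding step that maps normalised indicator values to Poisson spike trains within a fixed window (window length, peak rate, and the example values are assumptions):

import numpy as np

rng = np.random.default_rng(9)

def rate_encode(features, window_ms=100, max_rate_hz=200):
    # features in [0, 1] -> boolean spike array of shape (n_features, window_ms)
    p_spike = np.clip(features, 0, 1)[:, None] * max_rate_hz / 1000.0
    return rng.random((features.size, window_ms)) < p_spike

rsi = np.array([0.2, 0.55, 0.9])          # e.g. scaled RSI-like indicator values
spikes = rate_encode(rsi)
print("spike counts per feature:", spikes.sum(axis=1))   # grows with feature value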

Results
In backtesting, the SNN-based framework surpassed the buy-and-hold strategy across multiple market regimes, demonstrating higher predictive accuracy and stronger trading returns. This performance gap was especially evident during volatile market phases, where passive buy-and-hold approaches often struggled to adapt. By capitalizing on the event-driven nature of spiking neurons, our system reacted swiftly to abrupt price swings, refining its signals in real time and thus helping to mitigate slippage and transaction costs. Overall, these findings highlight the neuromorphic framework’s resilience and effectiveness, suggesting it can outperform simpler investment strategies under diverse market conditions.

Discussion
This work demonstrates the potential of neuromorphic computing in financial decision-making. The SNN-based approach offers adaptive, event-driven processing suited to volatile markets, while its reservoir-like architecture (with only the output classifier trained) reduces computational complexity. In addition, the model exhibits robustness to noisy market data and regime shifts. However, limitations remain: the approach relies on a predefined rate-coding scheme, and the hybrid design combining a spiking network with an external classifier is not end-to-end. Future research can explore improved encoding methods and end-to-end spiking models, as well as deployment on neuromorphic hardware for faster, energy-efficient execution.





Figure 1. Flowchart: Stock data and optimized technical indicators are converted into spike trains for the spiking neural network (SNN), whose outputs feed into a classifier for trading decisions
Acknowledgements
This study was supported by the National Police Agency and the Ministry of Science, ICT & Future Planning (2024-SCPO-B-0130), the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00242528), the National Program for Excellence in SW, supervised by the IITP(Institute of Information & communications Technology Planing & Evaluation) in 2025(2024-0-00018)
References
[1] Maass, W. (1997). Networks of Spiking Neurons: The Third Generation of Neural Network Models. Neural Networks, 10(9), 1659–1671. https://doi.org/10.1016/S0893-6080(97)00011-7
[2] Holland, J. H. (1992). Genetic algorithms. Scientific American, 267(1), 66–73. http://www.jstor.org/stable/24939139
[3] Lin, X., Yang, Z., & Song, Y. (2011). Intelligent stock trading system based on improved technical analysis and Echo State Network. Expert Systems with Applications, 38(9), 11347–11354. https://doi.org/10.1016/j.eswa.2011.03.001


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P165: Incorporation of Neuromodulation into Predictive-Coding Spiking Neural Networks
Monday July 7, 2025 16:20 - 18:20 CEST
P165 Incorporation of Neuromodulation into Predictive-Coding Spiking Neural Networks

Yelim Lee1, Dongmyeong Lee1, Hae-Jeong Park*1,2,3,4


1Department of Nuclear Medicine, Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul, Republic of Korea
2 Department of Nuclear Medicine, Severance Hospital, Seoul, Republic of Korea
3Department of Cognitive Science, Yonsei University, Seoul, Republic of Korea
4Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea


*Email: parkhj@yonsei.ac.kr

Introduction

Neuromodulation is often considered to enhance the selection mechanism in the brains of living organisms, prioritizing processing of inputs relevant to their goals. By modifying effective synaptic strength and altering firing properties, neuromodulators engage various cellular mechanisms, leading to a dynamic reconfiguration of neural circuits. Adjusting a target neuron’s excitability is one mechanism for enabling attentional effects. This research explores how this mechanism enhances predictive coding and learning in a spiking neural network (SNN) with two-compartment neurons, focusing on classification ability and internal representations in hidden layers with top-down signals.

Methods
The network includes one input layer, one output layer, and three fully connected hidden layers with feedback and feedforward connections. The dynamics of the hidden neurons are based on the Adaptive Leaky-Integrate-and-Fire (ALIF) model from Zhang and Bohte's previous work [4]. The dendritic compartment of each hidden neuron integrates inputs from higher regions, and the somatic compartment integrates input from lower areas. To implement the neuromodulation effect on the hidden neurons, we introduced a new top-down attention connection from the higher layer to the lower layer. This adjustment makes it possible to modify the target neuron's excitability by dynamically altering the baseline firing threshold. We used spiking MNIST images as input data, modifying the original MNIST dataset to provide spiking input over time. Additionally, we created multiple variations of the MNIST dataset, introducing noise or making occluded or overlapped images, to provide an ambiguous context.
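For illustration, a minimal Python sketch of how a top-down signal can lower an adaptive LIF unit's baseline threshold and thereby raise its excitability (all constants are assumptions, not the ALIF parameters used here):

import numpy as np

dt, tau_m, tau_a = 1e-3, 0.02, 0.2
v, a = 0.0, 0.0
v_th0, beta, attn_gain = 1.0, 0.5, 0.3
rng = np.random.default_rng(10)
spikes = []
for t in range(1000):                               # 1 s at 1 ms resolution
    attn = 1.0 if 400 <= t < 700 else 0.0           # top-down signal mid-window
    v_th = v_th0 + beta * a - attn_gain * attn      # modulated adaptive threshold
    v += dt * (-v / tau_m) + 0.08 * rng.random()    # leak plus noisy feedforward drive
    if v >= v_th:
        spikes.append(t); v = 0.0; a += 1.0         # spike, reset, adapt
    a -= dt * a / tau_a
print("spikes with vs without attention:",
      sum(400 <= s < 700 for s in spikes), sum(s < 300 for s in spikes))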
Results
We performed image classification tasks with the MNIST dataset, achieving a high accuracy for the original set and highly noisy test data set. We analyzed the uncertainty of output neurons by tracking their membrane potential for each digit class, noting increased firing for the correct class despite initial uncertainty. To assess predictive coding, we evaluated each hidden layer's internal representation by decoding the spiking activity. This involved no inputs or half-occluded inputs while clamping the output neuron’s membrane potential to a specific class. The results showed successful digit representation in spiking activities, especially with applied modulation weights, compared to the previous model.
Discussion
Clarifying important information in uncertain contexts improves with appropriate attention and prediction. This study suggests that neuromodulation enhances hierarchical encoding and learning in SNN during ambiguous scenarios. The model maintained high classification accuracy even in noisy and occluded conditions, and the internal representation, along with reduced uncertainty of output neurons, aligns with predictive coding principles, where top-down modulation refines internal representations.



Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C200621711)
References
1. Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1), 193-222.
2. Marder, E. (2012). Neuromodulation of neuronal circuits: back to the future. Neuron, 76(1), 1-11.
3. Thiele, A., & Bellgrove, M. A. (2018). Neuromodulation of attention. Neuron, 97(4), 769-785.
4. Zhang, M., & Bohte, S. M. (2024). Energy optimization induces predictive-coding properties in a multicompartment spiking neural network model. bioRxiv, 2024-01.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P166: Leveraging neural modeling of channelopathies to elucidate neural mechanisms underlying neurodevelopmental disorders
Monday July 7, 2025 16:20 - 18:20 CEST
P166 Leveraging neural modeling of channelopathies to elucidate neural mechanisms underlying neurodevelopmental disorders

Molly Leitner*1, Roman Baravalle1, James Chen1, Timothy Fenton3, Roy Ben-Shalom3, Salvador Dura-Bernal1,2


1Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
2Center for Biomedical Imaging & Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA
3Neurology Department, University of California Davis, Davis, CA, USA

*Email: molly.leitner@downstate.edu
Introduction

Neurodevelopmental disorders (NDDs), such as epilepsy, autism spectrum disorder, and developmental delays, present with considerable clinical variability and often impair social interactions, speech, and cognitive development. A key feature of these disorders is an imbalance in excitatory/inhibitory (E/I) input, which disrupts neuronal circuit function during development. Brain channelopathies, where neuronal ion channel activity is altered, provide an ideal model for studying E/I imbalance, as their effects can be directly linked to neuronal excitability. Ion channels are crucial in generating electrical activity in neurons, and disruptions to this activity are strongly associated with NDDs [1].

Methods
Studying channelopathies at the single-cell level is well established; however, investigating the impact of specific channel mutations on neuronal circuits requires more complex approaches. By utilizing a previously developed primary motor cortex (M1) model built using NetPyNE and NEURON, we employ large-scale, highly detailed biophysical neuronal simulations to examine how channel mutations influence individual and network neuronal activity [2].
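For illustration, a minimal single-compartment NEURON-in-Python sketch of the general approach, scaling a sodium conductance to mimic a gain-of-function variant (this toy HH soma is not the detailed M1 network, and the scaling factor is an assumption):

from neuron import h
h.load_file("stdrun.hoc")

def count_spikes(gnabar_scale):
    soma = h.Section(name="soma")
    soma.L = soma.diam = 20
    soma.insert("hh")
    soma(0.5).hh.gnabar *= gnabar_scale          # scale Na+ conductance (hypothetical variant)
    ic = h.IClamp(soma(0.5))
    ic.delay, ic.dur, ic.amp = 10, 180, 0.3      # toy current step
    v = h.Vector().record(soma(0.5)._ref_v)
    h.finitialize(-65)
    h.continuerun(200)
    trace = v.as_numpy()
    return int(((trace[:-1] < 0) & (trace[1:] >= 0)).sum())  # upward 0 mV crossings

print("baseline spikes:", count_spikes(1.0))
print("Nav gain-of-function (x1.5) spikes:", count_spikes(1.5))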


Results
These simulations offer a mechanistic understanding of how channelopathies contribute to E/I imbalance and the pathology of NDDs. Through the M1 cortical column simulation, we measure the effects of biophysical changes in ion channels on network excitability and neuronal firing patterns, providing insights into the pathophysiology of simulated channelopathies.
Discussion
This model not only serves as a tool for investigating specific channelopathy cases but also enables the exploration of pharmacological agents aimed at restoring E/I balance. Ultimately, this approach will enhance our understanding of targeted therapeutic strategies for alleviating disease symptoms and may uncover novel treatments with clinical potential.




Acknowledgements
This work was supported by the Hartwell Foundation through an Individual Biomedical Research Award. The authors gratefully acknowledge the foundation’s commitment to innovative pediatric research and its generous support of our project.
References
● Spratt PWE, Ben-Shalom R, Keeshen CM, Burke KJ Jr, Clarkson RL, Sanders SJ, Bender KJ. The Autism-Associated Gene Scn2a Contributes to Dendritic Excitability and Synaptic Function in the Prefrontal Cortex. Neuron. 2019 Aug 21;103(4):673-685.e5. doi: 10.1016/j.neuron.2019.05.037.
● Dura-Bernal S, Neymotin SA, Suter BA, Dacre J, Moreira JVS, Urdapilleta E, Schiemann J, Duguid I, Shepherd GMG, Lytton WW. Multiscale model of primary motor cortex circuits predicts in vivo cell-type-specific, behavioral state-dependent dynamics. Cell Rep. 2023 Jun 27;42(6):112574. doi: 10.1016/j.celrep.2023.112574.








Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P167: Active NMDARs expand input rate sensitivity into high-conductance states
Monday July 7, 2025 16:20 - 18:20 CEST
P167 Active NMDARs expand input rate sensitivity into high-conductance states

Movitz Lenninger*1, Pawel Herman1, Mikael Skoglund1, Arvind Kumar1

1 School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
*Email: movitzle@kth.se

Introduction

A single cell has thousands of synapses distributed across its surface, predominantly along the dendritic tree [1]. Thus, in an active state, thousands of inputs can target a single cell, leading to what is known as the high-conductance state [2]. During such states, both input resistance and the effective membrane time constant are markedly reduced [3]. Paradoxically, high-conductance states can also lead to a reduction of postsynaptic activity [4,5]. Here, we show, using single-cell simulations of thick-tuft layer 5 (TTL5) pyramidal cells, that the voltage dependence of NMDA receptors (NMDARs), a ubiquitous feature in the brain, can increase excitability in high-conductance states – providing sensitivity to a larger range of inputs.


Methods
We simulated a previously published reconstructed morphology of a rat TTL5 pyramidal cell [6]. We randomly distribute 5000 excitatory and 2500 inhibitory synapses uniformly according to the membrane surface areas of the dendritic segments (Figure 1a). Inputs are sampled from independent Poisson processes. In all cases, we optimize the inhibitory input rate to keep the somatic potential fluctuating around -60 mV. To study the role of active NMDARs, we consider three scenarios: synapses contain (1) only AMPA receptors (AMPARs), (2) both AMPARs and active NMDARs, and (3) both AMPARs and passive NMDARs. Unless otherwise stated, we use an NMDA-AMPA ratio of 1.6. In all cases, the integrated conductance per input is normalized to ~5.9 nS∙ms.
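For illustration, the voltage dependence at the heart of the active-versus-passive NMDAR comparison can be sketched in Python with the Jahr-Stevens magnesium-unblock factor (the passive variant frozen at -60 mV is an assumption made to mirror the text):

import numpy as np

def mg_unblock(v_mv, mg_mM=1.0):
    # Jahr & Stevens (1990) magnesium-block factor for the NMDAR conductance
    return 1.0 / (1.0 + np.exp(-0.062 * v_mv) * mg_mM / 3.57)

v = np.array([-80.0, -60.0, -40.0, -20.0])   # sample membrane voltages (mV)
print("active NMDAR g :", np.round(mg_unblock(v), 3))                   # grows with depolarisation
print("passive NMDAR g:", np.round(mg_unblock(-60.0) * np.ones(4), 3))  # frozen at -60 mV

In a high-conductance state, extra depolarisation recruits additional NMDAR current only in the active case, which is the excitability boost studied here.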

Results
First, we compare the input resistances across three input conditions. The input resistance decreases with increasing inputs for all three synaptic types but is consistently larger with active NMDARs (Figure 1b). Second, we compare the output firing rates (FRs) across a large range of inputs. For low and intermediate inputs, the output FRs are similar across all synaptic types (Figure 1c). However, for high inputs, output gain is only maintained with active NMDARs. Furthermore, the coefficient of variation of the interspike intervals is typically higher for active NMDARs, indicating more irregular firing (Figure 1d). Third, varying the NMDA-AMPA ratio reveals that this is a graded property of active NMDARs (Figure 1e-f).

Discussion
A key property of dendrites is to integrate pre-synaptic inputs. Active conductances can significantly alter the summation compared to passive dendrites [1]. Previous studies have, for example, linked active NMDARs to increased sequence discrimination [7] and increased coupling between tuft and soma [8]. Our work suggests active NMDARs might also be crucial for maintaining large postsynaptic activity under high input conditions, expanding the range of input sensitivity. Our work does not exclude the possibility of intrinsic voltage-gated ion channels further contributing to increased excitability under presynaptic activity [9]. It remains to be studied how such intrinsic conductances might interact with active NMDARs.




Figure 1. a) Morphology of cell with 500 randomly distributed synapses. b) Estimated input resistances during three input conditions. c) Input-output transfer function of firing rates (lines). Shaded areas show the standard deviation (across bins of 1 second). d) CVs of the ISIs. Color codes in panels c-d) same as in b). e-f) Output firing rates and CVs for a range of NMDA-AMPA ratios with active NMDARs.
Acknowledgements
N/A
References
[1] https://doi.org/10.1146/annurev.neuro.28.061604.135703
[2] https://doi.org/10.1038/nrn1198
[3] https://doi.org/10.1073/pnas.88.24.11569
[4] https://doi.org/10.1523/JNEUROSCI.3349-03.2004
[5] https://doi.org/10.1103/PhysRevX.12.011044
[6] https://doi.org/10.1371/journal.pcbi.1002107
[7] https://doi.org/10.1126/science.1189664
[8] https://doi.org/10.1038/nn.3646
[9] https://doi.org/10.1016/J.NEURON.2020.04.001








Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P168: Evaluating Effective Connectivity and Control Theory to Understand rTMS-Induced Network Effects
Monday July 7, 2025 16:20 - 18:20 CEST
P168 Evaluating Effective Connectivity and Control Theory to Understand rTMS-Induced Network Effects

Riccardo Leone*1,2,3, Michele Allegra4, Xenia Kobeleva1,2
1 Computational Neurology Group, Ruhr University Bochum, 44801, Bochum, Germany.
2 Faculty of Medicine, University of Bonn, 53127, Bonn, Germany.
3 German Center for Neurodegenerative Diseases (DZNE), 53127, Bonn, Germany.
4 Padova Neuroscience Center, University of Padova, 35129 Padova, Italy

* Email: riccardoleone1991@gmail.com
Introduction

Computational neuroscience might contribute to a better understanding of neurostimulation by modeling its effects on brain networks. Effective connectivity (EC) and EC-based network control theory could provide a theory-driven framework for elucidating neurostimulation-induced network effects [1]. We thus tested whether EC and control energy could explain changes in resting-state fMRI (rs-fMRI) metrics induced by repetitive transcranial magnetic stimulation (rTMS). We hypothesized that EC and control energy would outperform functional connectivity (FC) and structural connectivity (SC) in explaining rTMS effects.


Methods
Twenty-one subjects received inhibitory 1 Hz rTMS (20 min) at frontal, occipital, or temporo-parietal sites, with rs-fMRI acquired pre- and post-stimulation. Whole-brain EC was estimated using regression Dynamic Causal Modeling. Control energy from the stimulated node (i.e., driver node) to each downstream target node was computed from the EC model. We quantified rTMS effects at a node level as pre- vs post-stimulation changes in: i) FC with the driver region, ii) amplitude of low-frequency fluctuations (ALFF), and iii) nodal FC strength with the whole brain. We correlated these changes with a series of pre-stimulus predictors: SC, FC, EC between each target and the driver node, and energy needed to control each target from the driver node.
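For illustration, a minimal Python sketch of minimum control energy from a driver node via the finite-horizon controllability Gramian (the random stable "EC" matrix, horizon, and discretisation are assumptions, not the rDCM estimates):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(11)
N, driver, T, steps = 10, 0, 1.0, 200
A = rng.standard_normal((N, N)) * 0.2 - np.eye(N)    # stable toy "EC" matrix
B = np.zeros((N, 1)); B[driver] = 1.0                # input only at the driver node

dt = T / steps
W = sum(expm(A * t) @ B @ B.T @ expm(A.T * t) * dt for t in np.arange(steps) * dt)
energy = np.diag(np.linalg.pinv(W))                  # energy to reach a unit state at each node
print("control energy per target node:", np.round(energy, 1))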

Results
rTMS generally reduced whole-brain FC with each stimulated driver node, as well as ALFF and nodal FC strength, with frontal stimulation yielding more widespread effects. EC and control energy showed significant correlations with the change in FC with the driver node and nodal FC strength. Nonetheless, significant associations of similar or greater magnitude were also observed with simple FC, thus failing to demonstrate a clear advantage of EC and EC-based control energy to evaluate rTMS-induced effects. Changes in ALFF were not significantly correlated with any pre-TMS variable.
Discussion
Contrary to our main hypothesis, EC and EC-based control energy did not provide significantly better explanations of 1Hz rTMS-induced changes compared to model-agnostic FC. Our results question the current utility of EC and EC-based control theory models for understanding the effects of 1-Hz rTMS on brain networks. Given the complex interplay of neurobiological processes induced by rTMS that are not directly linked to the network spread of TMS pulses (e.g., synaptic plasticity), future work should implement EC and EC-based control energy to explain the effects of simpler protocols of neurostimulation.




Acknowledgements

References
1. Manjunatha KKH, Baron G, Benozzo D, Silvestri E, Corbetta M, Chiuso A, et al. (2024) Controlling target brain regions by optimal selection of input nodes.PLoS Comput Biol20(1): e1011274. https://doi.org/10.1371/journal.pcbi.1011274
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P169: Anesthesia modulates the system-wide contributions of identified head neurons in C. elegans
Monday July 7, 2025 16:20 - 18:20 CEST
P169 Anesthesia modulates the system-wide contributions of identified head neurons in C. elegans

Avraham Lepsky*¹, Andrew Chang², Chris Connor³, Chris Gabel³


¹ Graduate Program for Neuroscience, Boston University, Boston, United States
² Graduate Program in Physiology, Boston University, Boston, United States
³ Department of Biophysics, Boston University, Boston, United States


*Email: avil@bu.edu
Introduction

While anesthesia has similar effects in the brains of animals ranging from the nematode C. elegans to humans, the mechanisms by which various anesthetic agents work remain largely unknown. C. elegans has been identified as a tractable model for studying anesthesia due to behavioral deficits that progress with increasing anesthetic concentration, genetic susceptibility analogous to that of mammals, and an annotated neuroconnectome [1]. Isoflurane is a volatile anesthetic that induces general anesthesia; previous work found that isoflurane anesthesia in C. elegans caused marked dyssynchrony of neuron dynamics (measured as a decrease in the cumulative variance explained by the top 3 principal components of neuronal activity) [2].

Methods
We employed C. elegans worms expressing the NeuroPAL transgene, providing a fluorescent color map for identification of neurons within the known connectome of the C. elegans nervous system [3]. Using light sheet microscopy performed by a dual inverted selective plane illumination microscope (DISPIM), we measured activity of 120 individual head neurons of the NeuroPAL worms via fluorescence imaging of the calcium-sensitive GCaMP reporter. We imaged for 20 minutes at 2 Hz. We performed principal component analysis (PCA) on the measured 120-neuron activity dynamics, following previous attempts at ascribing a neural manifold to C. elegans behavior [4], with the added information of neuron identification.
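For illustration, a minimal Python sketch of a per-neuron PCA-magnitude comparison between anesthetic conditions (the random traces and the loading-norm definition are assumptions standing in for the GCaMP data):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(12)
act_0pct = rng.standard_normal((2400, 120))          # 20 min at 2 Hz, 120 neurons
act_4pct = rng.standard_normal((2400, 120))

def pca_magnitude(activity, n_components=3):
    pca = PCA(n_components=n_components).fit(activity)
    return np.linalg.norm(pca.components_, axis=0)   # one loading magnitude per neuron

delta = pca_magnitude(act_4pct) - pca_magnitude(act_0pct)
print("neurons with largest change:", np.argsort(np.abs(delta))[-10:])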
Results
Analysis of neuronal activity across worms at various isoflurane levels identified 10 neurons with a statistically significant change in PCA magnitude between 0 and 2% isoflurane and 17 neurons between 0 and 4%. No obvious receptor or functional identity marker was shared by all statistically significant neurons.
Discussion
We identified a list of neurons whose contributions to the system’s activity are most significantly modulated by changing isoflurane concentration. Because the connectome of C. elegans has been established, the anatomical properties of the neurons can be compared to their functional properties to establish a mechanistic understanding of the systemic changes induced by isoflurane. Connectomic spiking neuron models and other biophysical models can then be used to make predictions linking the molecular and behavioral properties of anesthetic agents.




Acknowledgements
Thank you to the Graduate Program of Neuroscience, under the direction of Dr. Shelly J. Russek and Sandi Grasso, for providing such a nurturing community.
Funding was generously awarded through a T32 grant.
References

1. Rajaram, S., … & Morgan, P. G. (1999). A stomatin and a degenerin interact to control anesthetic sensitivity in Caenorhabditis elegans. Genetics, 153(4), 1673–1682.
2. Awal, M. R., … & Connor, C. W. (2020). The collapse of global neuronal states in C. elegans under isoflurane anesthesia. Anesthesiology, 133(1), 133.
3. Yemini, E., … & Hobert, O. (2021). NeuroPAL: a multicolor atlas for whole-brain neuronal identification in C. elegans. Cell, 184(1), 272-288.
4. Kato, S., … & Zimmer, M. (2015). Global brain dynamics embed the motor command sequence of Caenorhabditis elegans. Cell, 163(3), 656-669.


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P170: Competition between symmetric and antisymmetric connections in cortical networks
Monday July 7, 2025 16:20 - 18:20 CEST
P170 Competition between symmetric and antisymmetric connections in cortical networks

Dong Li*1, Claus C. Hilgetag1,2

1Institut für Computational Neuroscience, Universitätsklinikum Hamburg-Eppendorf (UKE), 20246 Hamburg, Germany

2Department of Health Sciences, Boston University, 02215 Boston, USA


*Email: d.li@uke.de

Introduction

The pairwise correlation of neural activity directly and significantly influences neural network performance across various cognitive tasks [1, 2]. While tasks such as working memory require low correlation levels [3], others, like motor actions, rely on higher correlation levels [1]. These correlation patterns are highly sensitive to network structure and neural plasticity [4-6]. However, understanding how neural networks dynamically balance tasks with differing correlation demands, and how distinct brain networks are structurally optimized for specific functions, remains a major challenge.


Methods
We simulate linear and spiking models to investigate the impact of symmetric and antisymmetric connections on neural network dynamics. The linear model, equipped with a control parameter that adjusts the relative intensity of these connections, captures the fundamental mechanisms that shape pairwise correlations and influence network performance across cognitive tasks. To quantify the competition between symmetric and antisymmetric connections, we introduce two indices, from global and local perspectives. Using these indices, we further examine how synaptic plasticity modulates the relative intensity of these connections. Finally, we employ the spiking model to explore how bio-plausible neural networks implement this competition.
Results
Antisymmetric connections naturally reduce pairwise correlations, facilitating cognitive tasks that require maximal information processing, such as working memory. In contrast, symmetric connections enhance pairwise correlations, supporting other functions, such as enabling the network to generate reliable responses to external inputs. The competition between antisymmetric and symmetric connections can be easily modulated by spike-timing-dependent plasticity (STDP) with antisymmetric and symmetric kernels, respectively. In bio-plausible networks, this competition is particularly shaped by the structured, non-random organization of excitatory and inhibitory connections.
Discussion
Every connection matrix can be decomposed into symmetric and antisymmetric components with varying relative intensities. This work reveals how the competition between these components modulates neural correlations and facilitates distinct functions. Temporally, this competition is dynamically regulated by synaptic plasticity. Spatially, comparison with indirect experimental evidence allows us to discuss the layer-specific distribution of these relative intensities. These findings provide a new perspective on how brain functions are segregated across both time and space.
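The decomposition underlying this analysis is elementary and worth stating concretely: any connection matrix W splits uniquely into a symmetric part (W + W^T)/2 and an antisymmetric part (W - W^T)/2. Below is a minimal sketch of one possible global relative-intensity index; the Frobenius-norm normalization is an illustrative choice, and the abstract's own indices may be defined differently.

import numpy as np

def sym_antisym_split(W):
    """Unique decomposition W = W_s + W_a into symmetric and antisymmetric parts."""
    W_s = 0.5 * (W + W.T)
    W_a = 0.5 * (W - W.T)
    return W_s, W_a

def relative_symmetry_index(W):
    """Global index in [-1, 1]: +1 fully symmetric, -1 fully antisymmetric.
    (Illustrative Frobenius-norm normalization.)"""
    W_s, W_a = sym_antisym_split(W)
    s, a = np.linalg.norm(W_s), np.linalg.norm(W_a)
    return (s**2 - a**2) / (s**2 + a**2)

W = np.random.randn(100, 100)
print(relative_symmetry_index(W))  # ~0 for an i.i.d. random matrix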




Acknowledgements
This work was funded in part by DFG TRR-169 (A2) and SFB 936 (A1/Z3).
References
[1] https://doi.org/10.1038/nrn1888
[2] Von Der Malsburg, C. (1994). The correlation theory of brain function. In Models of neural networks: Temporal aspects of coding and information processing in biological systems (pp. 95-119). New York, NY: Springer New York.
[3] https://doi.org/10.1016/j.dcn.2025.101541
[4] https://doi.org/10.1016/0893-6080(94)00108-X
[5] https://doi.org/10.1126/science.1211095
[6] https://doi.org/10.1162/NECO_a_00451
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P171: Partial Information Decomposition of amplitude and phase stimulus encoding in oscillator models
Monday July 7, 2025 16:20 - 18:20 CEST
P171 Partial Information Decomposition of amplitude and phase stimulus encoding in oscillator models

V. Lima¹*, D. Marinazzo², A. Brovelli¹

1. Institut de neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France.
2. Faculty of Psychology and Educational Sciences, Department of Data Analysis, University of Ghent, Ghent, Belgium


*Email: vinicius.lima.cordeiro@gmail.com

Introduction

Synchrony of oscillatory activity is thought to be the primary mechanism enabling widespread cortical regions to route information [1]. Such a mechanism would require oscillations in a target and a sending area either to increase their amplitude while maintaining a stable phase relationship or to shift their phase difference when a stimulus is presented [2]. Nonetheless, whether the “communication” established between the pair of areas can be used to encode stimulus-specific information remains unclear.

Methods
To address this question, we construct a whole-brain model in which nodes are connected using macaque structural connectivity [3], and their dynamics are governed by the Stuart-Landau (SL) model [4]. The SL model describes nonlinear oscillators near a Hopf bifurcation and models the evolution of both their amplitude and phase terms. In addition to enabling the characterization of interactions in terms of phase and/or amplitude, the distance to the Hopf bifurcation is controlled by a single parameter, a, which determines the stability of the oscillations: a < 0 leads to transient oscillations, whereas a ≥ 0 results in stable oscillations, allowing us to explore the role of both types of activity in stimulus encoding [5]. To disentangle phase and amplitude encoding in the model, we use the framework of partial information decomposition (PID) [6] to estimate the information that the phase and amplitude components of simulated neuronal activity uniquely carry about the stimulus. Briefly, for two nodes indexed by j and k, we consider the product of their amplitude terms, A_jk, their phase difference, φ_jk, and the stimulus S. The three-variable PID allows us to decompose their total mutual information I(S; A_jk, φ_jk) into terms representing how they encode the stimulus redundantly or synergistically, as well as the unique information contained in the amplitude and phase interactions. Additionally, this framework can be extended to study non-dyadic interactions by operating at the edge rather than the node level [7]. In this case, the PID is performed between two edge time series, each given by E_jk = A_jk e^(iφ_jk), allowing us either to decompose the mutual information terms I(S; A_jk, A_ml), I(S; A_jk, φ_ml), and I(S; φ_jk, φ_ml), or to perform multivariate PID.
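For concreteness, a minimal Euler integration of coupled Stuart-Landau oscillators is sketched below; the diffusive coupling scheme, random connectivity, and parameter values are illustrative, whereas the study's whole-brain model uses macaque structural connectivity.

import numpy as np

def simulate_sl(C, a=-0.05, omega=2*np.pi*40, g=0.1, dt=1e-4, steps=20000):
    """Coupled Stuart-Landau oscillators:
    dz_j/dt = (a + i*omega) z_j - |z_j|^2 z_j + g * sum_k C_jk (z_k - z_j)."""
    n = C.shape[0]
    z = 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))
    traj = np.empty((steps, n), dtype=complex)
    for t in range(steps):
        coupling = g * (C @ z - C.sum(axis=1) * z)   # diffusive coupling
        z = z + dt * ((a + 1j * omega) * z - np.abs(z)**2 * z + coupling)
        traj[t] = z
    return traj

C = np.random.rand(10, 10); np.fill_diagonal(C, 0)   # toy connectivity
traj = simulate_sl(C)
A, phi = np.abs(traj), np.angle(traj)   # amplitude and phase terms entering the PID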


Results


In the whole-brain model, we found that even though the stimulus is generally encoded by the signals' amplitude, for areas that are hierarchically far apart the initial amplitude encoding later reappears in the phase relation between the two areas, in a weaker but more persistent form. These effects depend strongly on the nodes' dynamics and are most favorable when the nodes exhibit transient oscillations (a < 0).



Discussion
Introducing a scaling of the natural oscillation frequency also appeared to enhance the effect, suggesting that different time scales across the cortex may promote the establishment of functional coupling through phase synchrony [8].



Acknowledgements
None
References

1. https://doi.org/10.1016/j.neuron.2015.09.034
2. https://doi.org/10.1016/j.neuron.2023.03.015
3. https://doi.org/10.1093/cercor/bhs270
4. https://doi.org/10.1038/s41598-024-53105-0
5. https://doi.org/10.1016/j.tics.2024.09.013
6. https://arxiv.org/abs/1004.2515
7. https://doi.org/10.1038/s41593-020-00719-y
8. https://doi.org/10.1073/pnas.1402773111


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P172: Scalable Computational Modeling of Neuron-Astrocyte Interactions in NEST
Monday July 7, 2025 16:20 - 18:20 CEST
P172 Scalable Computational Modeling of Neuron-Astrocyte Interactions in NEST

Marja-Leena Linne*1, Han-Jia Jiang2,3, Jugoslava Aćimović1, Tiina Manninen1, Iiro Ahokainen1, Jonas Stapmanns2,4, Mikko Lehtimäki1, Markus Diesmann2,4,5, Sacha J. van Albada2,3, Hans Ekkehard Plesser2,6,7
1Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
2Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
3Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
4Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
5Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
6Department of Data Science, Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
7Käte Hamburger Kolleg: Cultures of Research (c:o/re), RWTH Aachen University, Aachen, Germany

*Email: marja-leena.linne@tuni.fi
Introduction

Astrocytes play a key role in modulating synaptic activity and network dynamics, yet large-scale models incorporating neuron-astrocyte interactions remain scarce [1]. This study introduces a novel NEST-based [2] simulation framework to model tripartite connectivity, where astrocytes interact with both presynaptic and postsynaptic neurons, extending traditional binary synaptic architectures. By integrating astrocytic calcium signaling and astrocyte-induced synaptic currents (SICs), the model enables dynamic modulation of neuronal activity, offering insights into the role of astrocytes in neural computation.
Methods
Our implementation integrates astrocytic calcium dynamics and SICs within a scalable, parameterized framework. The model allows controlled modulation of astrocytic influence, capturing transitions between asynchronous and synchronized neuronal states. Simulation scalability was assessed through strong and weak scaling benchmarks, leveraging parallel computing for network performance evaluation. Strong scaling benchmarks tested performance under fixed model size while increasing computing resources. Weak scaling benchmarks examined proportional upscaling of model size and computational power. These benchmarks evaluated network connection times, state propagation efficiency, and computational cost across different neuron-astrocyte configurations.
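For orientation, a minimal sketch of how such tripartite connectivity can be expressed with the astrocyte models shipped in recent NEST releases. The model names and connection-rule parameters below follow the public NEST astrocyte examples [2,3] and should be verified against the installed version; population sizes and probabilities are illustrative.

import nest

nest.ResetKernel()
neurons = nest.Create("aeif_cond_alpha_astro", 100)   # neurons that accept SICs
astrocytes = nest.Create("astrocyte_lr_1994", 25)     # Li-Rinzel calcium dynamics

# Each neuron-neuron connection is created with probability p_primary and, if
# created, is bound with probability p_third_if_primary to an astrocyte drawn
# from a random per-pair pool; the astrocyte feeds a slow inward current (SIC)
# back to the postsynaptic neuron.
nest.TripartiteConnect(
    neurons, neurons, astrocytes,
    conn_spec={"rule": "tripartite_bernoulli_with_pool",
               "p_primary": 0.1, "p_third_if_primary": 0.5,
               "pool_size": 10, "pool_type": "random"},
    syn_specs={"primary": {"synapse_model": "tsodyks_synapse"},
               "third_in": {"synapse_model": "tsodyks_synapse"},
               "third_out": {"synapse_model": "sic_connection"}})

nest.Simulate(1000.0)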
Results
Benchmark results show efficient parallel execution of the reference implementation [3]. Strong scaling benchmarks show that increasing computing resources reduces network connection and state propagation times. Weak scaling benchmarks reveal a moderate increase in communication time for processes like spike delivery and SIC delivery, yet overall performance remains robust against changes in model size and connectivity scheme. In this study, we validate the framework’s scalability to at least 1 million cells through benchmarking experiments, leveraging distributed computing for efficient simulation of large-scale neuron-glia networks.
Discussion
By providing a computationally accessible and reproducible tool for studying neuron-astrocyte interactions, this framework sets the stage for investigating glial contributions to synaptic modulation, network coordination, and their roles in neurological disorders. The integration of tripartite connectivity into NEST offers a versatile platform for modeling astrocytic regulation of neural circuits, advancing both fundamental neuroscience and applied computational modeling.



Acknowledgements
EU Horizon 2020 No. 945539 (Human Brain Project SGA3) to SJvA and M-LL. SGA3 Partnering Project (AstroNeuronNets) to JA and SJvA. EU Horizon Europe No. 101147319 (EBRAINS 2.0 Project) to SJvA and M-LL. HiRSE PS to SJvA. Research Council of Finland, Nos. 326494, 326495, 345280, and 355256, to TM, and 297893 and 318879 to M-LL. BMBF No. 01UK2104 (KHK c:o/re) to HEP.
References
[1] Manninen, T., Aćimović, J., & Linne, M.-L. (2023). Analysis of Network Models with Neuron-Astrocyte Interactions. Neuroinformatics, 21(2), 375-406. https://doi.org/10.1007/s12021-023-09622-w
[2] Graber, S., Mitchell, J., Kurth, A. C., Terhorst, D., Skaar, J. E. W., Schöfmann, C. M., et al. (2024). NEST 3.8. https://zenodo.org/records/12624784
[3] Jiang, H.-J., Aćimović, J., Manninen, T., Ahokainen, I., Stapmanns, J., Lehtimäki, M., et al. (2024). Modeling neuron-astrocyte interactions in neural networks using distributed simulation. bioRxiv. https://doi.org/10.1101/2024.11.11.622953


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P173: Large-scale Neural Network Model of the Human Cerebral Cortex Based on High-spatial-resolution Imaging Data
Monday July 7, 2025 16:20 - 18:20 CEST
P173 Large-scale Neural Network Model of the Human Cerebral Cortex Based on High-spatial-resolution Imaging Data

Chang Liu1, Dahui Wang*1, Yuxiu Shao*1

1School of Systems Science, Beijing Normal University, Beijing, China
*Email: wangdh@bnu.edu.cn (DW); shaoyx@bnu.edu.cn (YS)
Introduction

Large-scale brain models have attracted interest for their ability to probe complex dynamical phenomena. However, large-scale models guided by high-spatial-resolution imaging data remain largely underexplored. We develop a comprehensive large-scale model of the human cerebral cortex, utilizing recently released data on receptor density[1] and white-matter connectivity[2]. Furthermore, we refine undirected white-matter connectivity into directed connectivity using tracer data[3]. Our model replicates the characteristic spatio-temporal patterns of whole-brain activity observed experimentally during the resting state, enabling a deeper exploration of the interplay between anatomical structure, dynamics, and potential functional roles.
Methods
Our network comprises about 60k vertices, each modeled as a microcircuit of coupled excitatory and inhibitory populations connected via AMPA, NMDA, and GABA synapses, exhibiting Wilson-Cowan type dynamics[4]. Intra-vertex connection strengths are proportional to receptor density[1]. Inter-vertex connections are derived from anatomical fiber data, obtained via dMRI tractography at vertex resolution[2], and averaged across 255 unrelated healthy individuals (Fig. 1A). Since this anatomical data is undirected, we redistribute fiber bundles between vertices using directed macaque neocortex tracer data[3].
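A stripped-down rate analogue of one such network, two coupled populations per vertex with excitatory inter-vertex coupling, illustrates the structure. The sigmoid gain and all parameters below are illustrative; the actual model uses conductance-based AMPA/NMDA/GABA synapses with receptor-density-scaled weights.

import numpy as np

def simulate_vertices(C, steps=5000, dt=1e-3):
    """Wilson-Cowan-type E-I dynamics at each vertex, coupled through C (E->E)."""
    n = C.shape[0]
    rE, rI = np.zeros(n), np.zeros(n)
    f = lambda x: 1.0 / (1.0 + np.exp(-x))          # illustrative gain function
    wEE, wEI, wIE, wII, g = 12.0, 10.0, 10.0, 3.0, 1.0
    tauE, tauI, I_ext = 0.02, 0.01, 0.5
    rates = np.empty((steps, n))
    for t in range(steps):
        inp_E = wEE * rE - wEI * rI + g * (C @ rE) + I_ext
        inp_I = wIE * rE - wII * rI
        rE += dt / tauE * (-rE + f(inp_E))
        rI += dt / tauI * (-rI + f(inp_I))
        rates[t] = rE
    return rates

C = np.random.rand(50, 50) * 0.02; np.fill_diagonal(C, 0)   # toy inter-vertex fibers
rates = simulate_vertices(C)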
Results
The simulation results demonstrate that the average firing rate (FR) across all vertices is around 3 Hz[5]. Interconnected vertices show reduced correlation between FR and the excitatory-inhibitory receptor density ratio compared to independent vertices (Fig. 1B). Beta-band peak frequency exhibits a posterior-anterior gradient, which is disrupted by shuffling the spatial distribution of the AMPA-NMDA receptor ratio (Fig. 1C). The projections of power spectral density and FR onto the first principal component positively correlate with T1w/T2w (Fig. 1D, 1E). These findings align with experimental observations[6-8]. Moreover, asymmetric connectivity induces traveling waves, with sinks exhibiting higher FR than surrounding vertices (Fig. 1F).
Discussion
Our high-spatial-resolution large-scale brain model not only introduces a novel approach to understanding the computational mechanisms of the brain but also offers critical insights into the neural dynamic mechanisms underlying cognitive dysfunction and mental disorders. However, our model still has some limitations: we directly assume synaptic strength to be proportional to receptor density; we estimate the directed, weighted connections based on a coarse matching of macaque and human brain areas; and we omit signal propagation delays. Future work will focus on simulating information transmission across the cortex, exploring how this model can enhance our understanding of brain function and support the development of therapeutic strategies.



Figure 1. (A) Schematic. (B) Relationship between mean FR and E:I ratio. Blue: independent vertices; red: interconnected vertices. (C) Dependence of beta-band peak frequency on the vertex's location along the posterior-anterior axis. Blue: original; pink: shuffled density ratio. (D, E) Correlation with T1w/T2w of model PSD PC1 maps (D) and model FR PC1 maps (E). (F) Sinks displaying higher FR than surrounding vertices.
Acknowledgements
This work was supported by NSFC (No.32171094 to D.W., No.32400936 to Y.S.) and National Key R&D Program of China (2019YFA0709503 to D.W.) and International Brain Research Organization Early Career Award (to Y.S.).
References
1. https://doi.org/10.1038/s41593-022-01186-3
2. https://doi.org/10.1016/j.neuroimage.2020.117695
3. https://doi.org/10.1093/cercor/bhs270
4. https://doi.org/10.1523/JNEUROSCI.3733-05.2006
5. https://doi.org/10.1023/A:1011204814320
6. https://doi.org/10.7554/eLife.53715
7. https://doi.org/10.1016/j.neuron.2019.01.017
8. https://doi.org/10.1073/pnas.1608282113
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P174: Functional brain regions analysis using single-neuron morphology-driven reservoir network
Monday July 7, 2025 16:20 - 18:20 CEST
P174 Functional brain regions analysis using single-neuron morphology-driven reservoir network

Yuze Liu, Linus Manubens-Gil*, Hanchuan Peng*

Institute for Brain and Intelligence, Southeast University, Nanjing, China

* Email: linus.ma.gi@gmail.com

* Email: h@braintell.org
Introduction

The brain operates through network topology across brain regions and the morphological diversity of neurons. Reservoir computing (RC), with its recurrent nonlinear mapping, can accomplish temporal tasks [1], enabling functional analysis of networks. Previous work constructed reservoirs using diffusion magnetic resonance imaging (dMRI)-derived connectivity matrices and showed that randomness in weight signs improves a network's memory capacity (MC) [2]. However, limitations persist due to the macroscopic scale of the connectome, leaving microscale neuronal contributions underexplored. We therefore established a reservoir using single-neuron full-morphology tracings of the mouse brain and analyzed its validity for exploring the variance of functional regions with an MC task.

Methods
We used structural connectivity (SC) from [3]. The connectome data comprise 1,774 fully reconstructed mouse neurons registered to the Allen Mouse Brain Common Coordinate Framework (CCFv3) [4]. We used the hyperbolic tangent as the nonlinear mapping. The input signal is sampled randomly; the target signal is the delayed input. We fitted the output to the target signal via ridge regression and quantified performance by the squared Pearson correlation coefficient. We constructed small-world networks with connection density approximating that of the SC. We selected functional brain regions, e.g., LGd (dorsal part of the lateral geniculate complex) and visual cortex regions, as input/output nodes. We adjusted the spectral radius to optimize connectivity weights for enhanced memory retention.
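The MC task itself is compact enough to state in full: train a linear readout to reconstruct the input delayed by k steps and sum the squared correlations over delays. A minimal echo-state sketch with a random reservoir standing in for the neuron-derived connectome; sizes and the plain least-squares readout are illustrative simplifications of the ridge-regression setup.

import numpy as np

rng = np.random.default_rng(0)
N, T, max_delay, rho = 200, 5000, 25, 0.9

W = rng.normal(size=(N, N))
W *= rho / np.abs(np.linalg.eigvals(W)).max()       # set spectral radius
w_in = rng.uniform(-1, 1, N)
u = rng.uniform(-1, 1, T)                           # random input signal

x, states = np.zeros(N), np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])                # reservoir update
    states[t] = x

MC = 0.0
for k in range(1, max_delay + 1):
    X, y = states[k:], u[:-k]                       # target = input delayed by k
    w_out = np.linalg.lstsq(X, y, rcond=None)[0]    # plain least-squares readout
    MC += np.corrcoef(X @ w_out, y)[0, 1] ** 2      # squared Pearson correlation
print("Memory capacity:", MC)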
Results
We found that: 1) with uniform random connectivity weights, biologically wired networks whose input-output nodes were defined by functional regions slightly outperformed Watts-Strogatz small-world networks with random input-output nodes in the MC task, confirming that single-neuron-derived network topology is relevant for the establishment of memories in RC; 2) we observed statistically significant differences in MC task performance for the thalamocortical integration of different sensory modalities across diverse spectral radii ρ (76% of tested ρ values, 19/25; 0.1 ≤ ρ ≤ 5.0, Δρ = 0.2; independent t-test and Mann-Whitney U test, p < 0.05), suggesting that the morphological specificity of neuronal connections may underlie functional specialization.
Discussion
This study establishes a microscale framework linking the single-neuron connectome to network functionality. Future work integrating generative models for scaling up the network, spiking neuronal dynamics, and modality-specific tasks could further dissect the latent determinants of regional brain function.




Acknowledgements
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 32350410413 awarded to LMG.
References
1. https://doi.org/10.3389/fams.2024.1221051
2. https://doi.org/10.1109/IJCNN60899.2024.10650803
3. https://doi.org/10.1016/j.celrep.2024.113871
4. https://doi.org/10.1038/s41586-021-03941-1

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P175: A Neurorobotic Framework for Exploring Locomotor Control Following Recovery from Thoracic Spinal Cord Injury
Monday July 7, 2025 16:20 - 18:20 CEST
P175 A Neurorobotic Framework for Exploring Locomotor Control Following Recovery from Thoracic Spinal Cord Injury

Andrew B. Lockhart*1, Huangrui Chu1, Shravan Tata Ramalingasetty1, Natalia A. Shevtsova1, David S.K. Magnuson2, Simon M. Danner1


1Department of Neurobiology and Anatomy, College of Medicine, Drexel University, Philadelphia, PA, USA

2Department of Neurological Surgery, University of Louisville, Louisville, KY, USA

*Email: abl73@drexel.edu

Introduction


Thoracic spinal cord contusion disrupts communication between the cervical and lumbar circuitry. Despite this, rats recover locomotor function, though at a reduced speed and with altered speed-dependent gait expression. Our previous computational model of spinal locomotor circuitry [2,3] reproduced the observed gait changes by linking them to impaired long propriospinal connectivity and lumbar circuitry reorganization, likely involving enhanced reliance on afferent feedback. To investigate the role of sensory feedback in locomotion and explore post-contusion reorganization, a neurorobotic model of quadrupedal locomotion was used in which the spinal circuitry was embedded in a body that interacted with the environment (Fig. 1).

Methods
We have expanded our previous neural network model of spinal locomotor circuitry to drive a simulated Unitree Go1 quadrupedal robot. The model includes four rhythm generators, one per limb, interconnected by commissural and long propriospinal neurons. Activity from each rhythm generator controls a pattern formation network that coordinates muscle activation in each limb. Hill-type muscles convert this activation into torque to actuate the motors and allow for the calculation of proprioceptive feedback, which interacts with all levels of the spinal circuitry. Connection weights of proprioceptive, vestibular, and pattern-formation neurons were optimized using the covariance matrix adaptation evolution strategy (CMA-ES) to produce adaptive locomotion.
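The rhythm-generator building block can be illustrated with a classic half-center oscillator: two populations with slow adaptation coupled by mutual inhibition. This is a minimal rate-based sketch with illustrative parameters; the actual model uses four interconnected rhythm generators, pattern-formation networks, and Hill-type muscles.

import numpy as np

def half_center(steps=20000, dt=1e-3):
    """Two mutually inhibiting populations with slow adaptation -> alternating bursts."""
    rF, rE = 0.6, 0.4          # flexor / extensor activities (asymmetric start)
    aF, aE = 0.0, 0.0          # slow adaptation variables
    tau, tau_a = 0.05, 0.5     # fast activity and slow adaptation time constants
    w_inh, b, drive = 2.5, 2.0, 1.2
    relu = lambda x: np.maximum(x, 0.0)
    out = np.empty((steps, 2))
    for t in range(steps):
        rF += dt / tau * (-rF + relu(drive - w_inh * rE - b * aF))
        rE += dt / tau * (-rE + relu(drive - w_inh * rF - b * aE))
        aF += dt / tau_a * (-aF + rF)
        aE += dt / tau_a * (-aE + rE)
        out[t] = rF, rE
    return out

out = half_center()  # the two columns alternate, one burst per half locomotor cycle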
Results
The optimized model produces stable locomotion across a range of target speeds. Integration of muscle states and environmental information through proprioceptive, cutaneous, and vestibular neurons allows the model to traverse rough terrain consisting of variable slopes and ground friction. Preliminary simulation of thoracic contusion by reducing connection weights of inter-enlargement long propriospinal neurons results in altered gaits.

Discussion
The model provides a testbed for linking neuronal manipulations to changes in locomotion and behavior. By comparing locomotor gaits across models—those undergoing a second round of optimization post-contusion, those that have not, and experimental results from rats—we can identify and analyze critical neuronal connections involved in recovery. Using this approach, we will further investigate how circuit reorganization can contribute to locomotor recovery after thoracic spinal cord contusion.





Figure 1. Fig 1. A) The central locomotor circuit model for four limbs includes long propriospinal neurons connecting cervical and lumbar circuits adapted from Frigon 2017 [3]. B) Two-level rhythm and pattern formation circuitry for one limb. Motoneuron (MN) activity activates muscles (C) which actuate torque-controlled motors (D). Kinematics and kinetics are transformed into afferent feedback signals.
Acknowledgements
This work was supported by the National Institutes of Health (NIH) grants R01NS112304, R01NS115900, and T32NS121768.
References
[1] Danner, S. M., et al. (2017). Computational modeling of spinal circuits controlling limb coordination and gaits in quadrupeds. eLife, 6, e31050. https://doi.org/10.7554/eLife.31050
[2] Zhang, H., et al. (2022). The role of V3 neurons in speed-dependent interlimb coordination during locomotion in mice. eLife, 11, e73424. https://doi.org/10.7554/eLife.73424
[3] Frigon, A. (2017). The neural control of interlimb coordination during mammalian locomotion. Journal of Neurophysiology, 117(6), 2224–2241. https://doi.org/10.1152/jn.00978.2016
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P176: Plastic Arbor: a modern simulation framework for synaptic plasticity – from single synapses to networks of morphological neurons
Monday July 7, 2025 16:20 - 18:20 CEST
P176 Plastic Arbor: a modern simulation framework for synaptic plasticity – from single synapses to networks of morphological neurons

Jannik Luboeinski*1,2,3, Sebastian Schmitt1,2, Shirin Shafiee1,2, Thorsten Hater4, Fabian Bösch5, Christian Tetzlaff1,2,3

1III. Institute of Physics – Biophysics, University of Göttingen, Germany
2Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Germany
3Campus Institute Data Science (CIDAS), Göttingen, Germany
4Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany
5Swiss National Supercomputing Centre, ETH Zürich, Switzerland

*Email: jannik.luboeinski@med.uni-goettingen.de

Introduction
Arbor is a software library designed for the efficient simulation of large-scale networks of biological neurons with detailed morphological structures. It combines customizable neuronal and synaptic mechanisms with high-performance computing, enabling the use of diverse backend architectures such as multi-core CPU and GPU systems [1] (see also Fig. 1a).
Synaptic plasticity processes play a vital role in cognitive functions, including learning and memory [2,3]. Recent studies have shown that intracellular molecular processes in dendrites significantly influence single-neuron dynamics [4,5]. However, for understanding how the complex interplay between dendrites and synaptic processes influences network dynamics, computational modeling is required.
Methods
To enable the modeling of large-scale networks of morphologically detailed neurons with diverse plasticity processes, we have extended the Arbor library to yield the Plastic Arbor framework, supporting simulations of a large variety of spike-driven plasticity paradigms (cf. Fig. 1b). To showcase the features of the new framework, we present examples of computational models, beginning with single-synapse dynamics [6,7], progressing to multi-synapse rules [8,9], and finally scaling up to large recurrent networks [10].
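As one example of the spike-driven paradigms such a framework must support, pair-based STDP with exponential traces can be written in a few lines. This is a generic textbook rule in the spirit of [6], not the specific mechanisms shipped with Plastic Arbor; spike times and constants are illustrative.

import numpy as np

def pair_stdp(pre_spikes, post_spikes, dt=1e-4, T=1.0,
              A_plus=0.01, A_minus=0.012, tau=20e-3, w0=0.5):
    """Trace-based pair STDP: potentiate on post-after-pre, depress on pre-after-post."""
    steps = int(T / dt)
    x_pre = x_post = 0.0          # exponentially decaying spike traces
    w = w0
    pre = np.zeros(steps, bool); pre[(pre_spikes / dt).astype(int)] = True
    post = np.zeros(steps, bool); post[(post_spikes / dt).astype(int)] = True
    for t in range(steps):
        x_pre += -dt / tau * x_pre + pre[t]
        x_post += -dt / tau * x_post + post[t]
        if post[t]: w += A_plus * x_pre      # pre-before-post pairing -> LTP
        if pre[t]:  w -= A_minus * x_post    # post-before-pre pairing -> LTD
    return w

w = pair_stdp(pre_spikes=np.array([0.10, 0.30]), post_spikes=np.array([0.11, 0.29]))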
Results
While cross-validating our implementations by comparison with other simulators, we show that Arbor allows simulating plastic networks of multi-compartment neurons at nearly no additional cost in runtime compared to point-neuron simulations. Using the new framework, we have already been able to investigate the impact of dendritic structures on network dynamics across a timescale of several hours, showing a relation between the length of dendritic trees and the ability of the network to efficiently store information.
Discussion
Due to its modern computing architecture and inherent support of multi-compartment neurons, the Arbor simulator constitutes an important tool for the computational modeling of neuronal networks. By our extension of Arbor, we provide a valuable tool that will support future studies on the impact of synaptic plasticity, especially, in conjunction with neuronal morphology, in large networks. In our recent work, we also demonstrate new insights into the functional impact of morphological neuronal structure at the network level. In the future, the Plastic Arbor framework may power a great variety of studies considering synaptic mechanisms and their interactions with neuronal dynamics and morphologies, from single synapses to large networks.
Figure 1. Overview of the extended Arbor framework with support for synaptic plasticity simulations.
Acknowledgements
This work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) through grants SFB1286 (C01, Z01) and TE 1172/7-1, as well as by the European Commission H2020 grants no. 899265 (ADOPD) and 945539 (HBP SGA3).
References
1. https://doi.org/10.1109/EMPDP.2019.8671560
2. https://doi.org/10.1146/annurev.neuro.23.1.649
3. https://doi.org/10.1038/s41539-019-0048-y
4. https://doi.org/10.1016/j.conb.2008.08.013
5. https://doi.org/10.7554/eLife.46966
6. https://doi.org/10.1038/78829
7. https://doi.org/10.1073/pnas.1109359109
8. https://doi.org/10.1523/JNEUROSCI.0027-17.2017
9. https://doi.org/10.1371/journal.pone.0161679
10. https://doi.org/10.1038/s42003-021-01778-y
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P177: Cobrawap: from a specific use-case to a more general scientifically-technologically co-designed tool for neuroscience
Monday July 7, 2025 16:20 - 18:20 CEST
P177 Cobrawap: from a specific use-case to a more general scientifically-technologically co-designed tool for neuroscience

Cosimo Lupo1,*, Robin Gutzen2, Federico Marmoreo1, Alessandra Cardinale1,3, Michael Denker4, Pier Stanislao Paolucci1, Giulia De Bonis1

[1] Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
[2] Dept. of Psychology and Center for Data Science, New York University, New York, USA
[3] Università Campus Bio-Medico di Roma, Rome, Italy
[4] Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany

*Email: cosimo.lupo89@gmail.com

Introduction
Cobrawap (Collaborative Brain Wave Analysis Pipeline) [1-3] is an open-source, modular and customizable data analysis tool designed and implemented by INFN (Italy) and Jülich Research Centre (Germany) in the context of the Human Brain Project, further enhanced within the EBRAINS and EBRAINS-Italy initiatives. Its foundational goal was to enable standardized quantitative descriptions of cortical wave dynamics observed in heterogeneous data sources, both experimental and simulated, also allowing for validation and calibration of brain simulation models (Fig. 1). The current directions of development aim at enhancing generalizability beyond the set of originally considered use cases.

Methods
Responding to the neuroscience community's increasing demand for reusability and reproducibility, Cobrawap provides a framework suitable for collecting generalized implementations of established methods and algorithms. Inspired by FAIR principles and leveraging the latest software solutions, Cobrawap is structured as a collection of modular Python 3 building blocks that can be flexibly arranged into sequential stages implementing data processing steps and analysis methods, orchestrated by workflow managers (Snakemake or CWL). The collaborative approach behind the software allows users to seamlessly enrich its scope by co-designing and implementing new processing or visualization blocks with the support of the Cobrawap “core team”.

Results
Cobrawap has been successfully applied to murine data and data-driven simulations for multi-scale quantitative comparisons of heterogeneous experimental datasets [4] and for validation and calibration of simulation models [5], in the specific use case of cortical slow-wave analysis in low-consciousness brain states. Later applications to non-human primate experimental data, and to increasing levels of consciousness, have proven the robustness and versatility of the approach, paving the way for the crucial extension toward human data. A fundamental step is the comparison with simulations, e.g., via TheVirtualBrain [6,7], which allows both benchmarking of the new algorithms and validation and calibration of such models [8,9].

Discussion
Cobrawap has proven effective in the analysis of both synthetic and experimental data of different origins, representing a FAIR-compliant collaborative framework for scientific and technological co-design. Together with the appealing extension to experimental human data in both physiological and pathological conditions, further lines of enhancement involve the analysis of output from a variety of theoretical models, including artificial neural networks; this makes Cobrawap eligible for addressing the explainability of AI solutions in bio-inspired systems that incorporate the emulation of brain states as a key element for implementing efficient incremental learning and cognition [10,11].

Figure 1. Cobrawap offers standardized quantitative descriptions of brain wave dynamics observed in heterogeneous data sources, both experimental and simulated (top right panel), via a set of sequential stages featuring modular and flexible sets of processing and visualization blocks (bottom panel, for two different recording techniques on anesthetized mice), each easily customizable by the user.

Acknowledgements
Research co-funded by: European Union’s Horizon Europe Programme under Specific Grant Agreement No. 101147319 (EBRAINS 2.0); European Commission NextGeneration EU through Italian Grant MUR-CUP-B51E22000150006 EBRAINS-Italy PNRR.

References
[1] github.com/NeuralEnsemble/cobrawap
[2] cobrawap.readthedocs.io
[3] doi.org/10.5281/zenodo.10198748
[4] Gutzen, et al. (2024) doi.org/10.1016/j.crmeth.2023.100681
[5] Capone, De Luca, et al. (2023) doi.org/10.1038/s42003-023-04580-0
[6] Sanz Leon, et al. (2013) doi.org/10.3389/fninf.2013.00010
[7] www.thevirtualbrain.org
[8] Gaglioti, et al. (2024) doi.org/10.3390/app14020890
[9] Cardinale, Gaglioti, et al. (2025) in preparation
[10] Capone, et al. (2019) doi.org/10.1038/s41598-019-45525-0
[11] Golosio, De Luca, et al. (2021) doi.org/10.1371/journal.pcbi.1009045
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P178: Coupling brain network simulation with pharmacokinetics for Parkinson's disease: towards patient-usable digital twins
Monday July 7, 2025 16:20 - 18:20 CEST
P178 Coupling brain network simulation with pharmacokinetics for Parkinson's disease: towards patient-usable digital twins

William Lytton*127, Donald Doherty17, Adam Newton17, June Jung1, Samuel Neymotin13, Salvador Dura Bernal1, Thomas Wichmann57, Adriana Galvan57, Hong-Yuan Chu47, Yoland Smith57, Husan Abdurakhimov6, Jona Ekström6, Henrik Podéus Derelöv16, Elin Nyman6, Gunnar Cedersund6
1 Downstate Health Science University, Brooklyn NY USA
2 Kings County Hospital, Brooklyn NY USA
3 Nathan Kline Institute, Orangeburg NY USA
4 Georgetown University, Washington DC USA
5 Emory University, Atlanta GA USA
6 Linköping University, Linköping, Sweden
7 Aligning Science Across Parkinson's (ASAP) Collaborative Research Network, Chevy Chase, United States

*Email: billl@neurosim.downstate.edu

Introduction
Parkinson’s disease (PD) is characterized by complex motor deficits in multiple sites. Starting with dopamine (DA) depletion in substantia nigra, brain dysfunction subsequently occurs in primary motor cortex (M1), basal ganglia (BG) and other areas. At first, dysfunction is a direct consequence of reduced DA. Then, through the dynamics of compensation and decompensation, these other areas become themselves pathophysiological. We used simulation to explore the focal M1 pathophysiology seen in mouse models. We are now integrating pharmacokinetic (PK) models to consider how therapy (Rx) can normalize dynamics.
Methods
We adapted our NEURON/NetPyNE M1 neuronal network (NN) model to simulate PD, reducing pyramidal-tract layer 5 neuron (PT5B) excitability. We coupled a prior ODE PK model to evaluate the DA and NE levels produced by L-DOPA and L-DOPS treatment, respectively, modulating network parameters based on local DA and NE levels. Parameter optimizations explored how PK outputs shape network activity, examining: 1. dose-timing; 2. gut absorption (bioavailability, gastric delays); 3. multi-compartment distribution (blood, fat, muscle, brain); 4. blood-brain barrier (BBB) crossing; 5. precursor conversion to DA and NE; 6. drug metabolism and clearance.
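The PK side can be illustrated with a toy compartmental ODE chain covering the stages listed above; the compartments and rate constants below are illustrative placeholders, not the study's fitted model.

import numpy as np
from scipy.integrate import solve_ivp

def pk_rhs(t, y, ka=1.0, k_be=0.3, k_bb=0.05, k_conv=0.2):
    """Gut -> blood -> brain precursor -> neurotransmitter (toy rate constants, 1/h)."""
    gut, blood, brain_pre, nt = y
    return [-ka * gut,                          # gastric absorption
            ka * gut - (k_be + k_bb) * blood,   # elimination + BBB crossing
            k_bb * blood - k_conv * brain_pre,  # precursor enters brain
            k_conv * brain_pre - 0.1 * nt]      # conversion and clearance

sol = solve_ivp(pk_rhs, (0, 24), y0=[100.0, 0, 0, 0], dense_output=True)
nt_level = sol.sol(np.linspace(0, 24, 200))[3]  # would drive network neuromodulation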
Results
We focused on NE since locus coeruleus (LC) degeneration directly affects M1 cells, while DA loss directly affects BG neurons. Our untreated network simulations showed elevated PT5B activity despite reduced PT5B excitability. This paradoxical firing rate increase was associated with enhanced LFP beta-band oscillatory power with beta bursts. NE Rx shifted network activity to 20-35 Hz high-beta activity with reduction in excessive beta power, partly normalizing activity.
Discussion
Our hybrid PK-NN model demonstrated potential clinical treatment of PD through correction of the pathophysiological changes that produce motor dysfunction, thus beginning to link treatment with well-being. Partial normalization of beta oscillations and firing rates with L-DOPS treatment may add to treatment outcomes. We can isolate clinically modulatable effects, including dose-timing, gut pretreatment, precursor transformation, and clearance, to shape target-neuron effects. We hope thereby to improve the effect/side-effect balance and reduce dyskinesias, wearing-off, and freezing. Future model iterations will extend to digital-twin applications to provide tools that assist patients in personally optimizing their own therapy.






Acknowledgements
This research was funded in part by Aligning Science Across Parkinson’s [ASAP-020572] through the Michael J. Fox Foundation for Parkinson’s Research (MJFF). For the purpose of open access, the author has applied a CC BY public copyright license to all Author Accepted Manuscripts arising from this submission.
Supported in part by STRATIF-AI funded by Horizon Europe agreement 101080875.
References
none
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P179: Introducing the Phase-Relationship Index (PRI): Transmission Delay Shapes In- and Anti-Phase Functional Connectivity in EEG Analysis and Simulation
Monday July 7, 2025 16:20 - 18:20 CEST
P179 Introducing the Phase-Relationship Index (PRI): Transmission Delay Shapes In- and Anti-Phase Functional Connectivity in EEG Analysis and Simulation

William W. Lytton*1,2,3, Andrei Dragomir4, Ahmet Omurtag5


1Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, New York
2Department of Neurology, SUNY Downstate Health Sciences University, Brooklyn, New York
3Department of Neurology, Kings County Hospital Center, Brooklyn, New York

4 Singapore Institute for Neurotechnology, National University of Singapore, Singapore

5Engineering Department, Nottingham Trent University, Nottingham, United Kingdom

*Email: billl@neurosim.downstate.edu

Introduction
Neural oscillations enable information processing via cortical network synchronization, yet EEG studies rarely examine precise phase relationships. Introducing the Phase-Relationship Index (PRI), we demonstrate that in-phase clustering dominates at cortical distances < 80 mm, shifting to anti-phase beyond this range. Simulations of delay-coupled excitatory Leaky Integrate-and-Fire (LIF) neurons reveal conduction delays as the mechanism underlying this distance-dependent EEG phase-relationship pattern.


Methods
Analyzing 19-channel resting EEG from 31 healthy subjects [1], we computed inter-site phase clustering (ISPC/PLV) across 1–32 Hz for electrode pairs. Phase differences determined the ISPC, with the PRI capturing the phase relationship (in- vs. anti-phase). Cortical distances derived from MNI coordinates [2] were used for distance-dependent analyses. Simulations modeled two delay-coupled excitatory LIF neuron populations (N = 200 each) with recurrent (gain G) and inter-population (gain g) connections and conduction delays (d and tau). Firing rates (analogous to EEG) underwent spectral analysis (frequency band power), synchrony assessment (order parameter), and ISPC/PRI comparisons between simulated and empirical data.
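ISPC is the magnitude of the mean phase-difference vector. The abstract does not spell out the PRI formula, so the version below, the mean phase-difference angle mapped to [0, 1] with 0 in-phase and 1 anti-phase, is an illustrative reconstruction; band-pass filtering before the Hilbert transform is omitted for brevity.

import numpy as np
from scipy.signal import hilbert

def ispc_pri(sig1, sig2):
    """Inter-site phase clustering and a phase-relationship index for two signals."""
    dphi = np.angle(hilbert(sig1)) - np.angle(hilbert(sig2))
    z = np.mean(np.exp(1j * dphi))      # mean resultant vector on the unit circle
    ispc = np.abs(z)                    # 0 = no clustering, 1 = perfect clustering
    pri = np.abs(np.angle(z)) / np.pi   # 0 = in-phase, 1 = anti-phase (assumed form)
    return ispc, pri

t = np.arange(0, 2, 1/256)
a = np.sin(2*np.pi*16*t); b = np.sin(2*np.pi*16*t + np.pi)   # anti-phase 16 Hz pair
print(ispc_pri(a, b))   # ISPC ~ 1, PRI ~ 1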


Results
Analysis revealed PRI values predominantly near 0 or 1. For 16 Hz connections, cortical distance increased with PRI (Fig. 1A), transitioning sharply from in-phase (PRI ≈ 0) to anti-phase (PRI ≈ 1, mainly asymmetric long-range) at 85-120 mm (Fig. 1B-F). Simulations of LIF neuronal populations identified four dynamic regimes (Fig. 1H-K). Disconnected populations (g = 0) showed irregular firing (Fig. 1H), transitioning to synchronous but un-clustered activity with increased intra-population connectivity G (Fig. 1I, L). Introducing inter-population connections (g > 0) induced phase clustering (rising ISPC, Fig. 1M), switching from in-phase (small tau, Fig. 1J) to anti-phase at tau ≈ 31 ms (Fig. 1K, N), accompanied by reduced synchrony (Fig. 1N) during the transition to anti-phase.


Discussion
Our findings link the distance dependence of clustering (Fig. 1A) to delay-coupled neuronal population dynamics (Fig. 1N). The sparse inter-population connections sufficient to induce clustering (Fig. 1M) mirror sparse long-distance neuroanatomical connectivity [3], and the derived conduction speeds (5.44-8 m/s) match myelinated axons [4]. Challenging prior assumptions, we show that zero-lag synchrony is genuine, not artefactual. PRI analysis also reveals anti-phase dominance (Fig. 1A; [5]), distinct topographies (Fig. 1E-F), and task-modulated dynamics, underscoring its biomarker potential.







Figure 1. Figure 1. Phase clustering in EEG (A-F) and simulated neurons (G-N). (A) ISPC, PRI, cortical distances. (B) z values on Argand plane for 3 electrode pairs. (C) angle(z) values. (D) PRI values. (E) Top 15 in-phase connections. (F) Top 15 anti-phase connections. (G) Schematic of populations. (H-K) Firing rate time series. (L) ISPC, FBP, Order Parameter vs G. (M) ISPC, FBP vs g. (N) ISPC, PRI vs tau.
Acknowledgements
None.
References
[1] https://doi.org/10.1038/s41598-020-69553-3
[2] https://doi.org/10.1002/brb3.2476
[3] https://doi.org/10.1371/journal.pbio.3001575
[4] https://doi.org/10.1371/journal.pcbi.1007004
[5] https://doi.org/10.1002/jnr.24748
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P180: DendroTweaks: An interactive approach for unraveling dendritic dynamics
Monday July 7, 2025 16:20 - 18:20 CEST
P180 DendroTweaks: An interactive approach for unraveling dendritic dynamics

Roman Makarov*1,2, Spyridon Chavlis1, Panayiota Poirazi1
1Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Greece
2Department of Biology, University of Crete, Heraklion, Greece
*Email: roman_makarov@imbb.forth.gr
Introduction

Neurons rely on the interplay between dendritic morphology and ion channels to transform synaptic inputs into somatic spikes. Detailed biophysical models with active dendrites have been instrumental in exploring this interaction but are challenging to understand and validate due to their numerous free parameters. We introduce DendroTweaks, a comprehensive toolbox for creating and validating single-cell neuronal models with active dendrites, bridging computational implementation with conceptual understanding.

Methods
DendroTweaks is implemented in Python and provides a high-level interface to NEURON [1] with extended functionality for single-cell modeling and data processing. The core components include: (1) algorithms for representing and refining neuronal morphologies; (2) an NMODL-to-Python converter, along with a framework for standardizing ion channel models through parameter fitting based on equations from [2]; (3) an extended implementation of the impedance-based morphology reduction approach [3] enabling continuous reduction levels; and (4) automated validation protocols for testing somatic and dendritic activity.
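The channel-standardization step, fitting steady-state curves to textbook equations such as those in [2], reduces to a small curve fit. A sketch with a Boltzmann activation function and synthetic data; the toolbox's actual fitting also covers time constants and full kinetic schemes.

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Steady-state activation: x_inf(V) = 1 / (1 + exp((v_half - V) / k))."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

v = np.linspace(-90, 40, 27)                                        # command voltages (mV)
x_inf = boltzmann(v, -35.0, 8.0) + 0.02 * np.random.randn(v.size)   # mock channel data
(v_half, k), _ = curve_fit(boltzmann, v, x_inf, p0=(-30.0, 10.0))
print(f"V1/2 = {v_half:.1f} mV, slope k = {k:.1f} mV")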
Results
The toolbox provides researchers with capabilities to: (1) clean and manipulate SWC morphology files; (2) convert MOD files to Python and standardize kinetics of voltage-gated ion channel models; (3) interactively distribute membrane parameters and synapses across neuronal compartments; (4) reduce detailed morphological models to simplified versions while preserving key electrophysiological properties; and (5) record activity from multiple somatic and dendritic locations to validate neuronal responses to external stimuli. The GUI provides interactive widgets and plots for parameter adjustment with real-time visual feedback (Fig. 1).
Discussion
DendroTweaks addresses critical challenges in computational neuroscience through data cleaning and model standardization. Its interactive interface enables intuitive exploration of models, illuminating how morpho-electric properties shape dendritic computations and neuronal output. Future work will focus on multi-platform integration with other simulators to further enhance the standardization and accessibility of detailed biophysical models.




Figure 1. Figure 1. A screenshot of the web-based GUI accessed through the Chrome browser. The interface consists of a main workspace and side menus with widgets. The workspace displays interactive plots showing neural morphology, ion channel distributions and kinetics, and simulated activity.
Acknowledgements
Funded by the Horizon 2020 programme of the European Union under grant agreement No 860949. The research project was co-funded by the Stavros Niarchos Foundation (SNF) and the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the 5th Call of “Science and Society” Action Always strive for excellence – Theodoros Papazoglou” (Project Number: DENDROLEAP 28056).
References
1. Hines, M., Davison, A. P., & Muller, E. (2009). NEURON and Python. Frontiers in neuroinformatics, 3, 391. https://doi.org/10.3389/neuro.11.001.2009
2. Sterratt, D., Graham, B., Gillies, A., Einevoll, G., & Willshaw, D. (2023). Principles of computational modelling in neuroscience. Cambridge University Press.
3. Amsalem, O., Eyal, G., Rogozinski, N., Gevaert, M., Kumbhar, P., Schürmann, F., & Segev, I. (2020). An efficient analytical reduction of detailed nonlinear neuron models. Nature communications, 11(1), 288. https://doi.org/10.1038/s41467-019-13932-6
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P181: The processing of auditory rhythms in the thalamocortical network throughout the development
Monday July 7, 2025 16:20 - 18:20 CEST
P181 The processing of auditory rhythms in the thalamocortical network throughout the development

Sepideh Sadat Malekjafarian*1, Maryam Ghorbani1,2, Sahar Moghimi3, Fabrice Wallois3,4


1 Electrical Engineering Department, Ferdowsi University of Mashhad, Iran
2Rayan Center for Neuroscience and Behavior, Ferdowsi University of Mashhad, Iran
3Inserm UMR1105, Groupe de Recherches sur l’Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens Cedex 80036, France
4Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens Cedex 80054, France


*Email: s.malekjafarian92@gmail.com


Introduction
In early neural development, the thalamocortical network exhibits unique characteristics, especially in preterm infants, whose brains are not yet fully developed. These include specific patterns of neural oscillations, which are crucial for the development of cortical circuitry and the formation of neural networks. Evidence suggests that the ability to perceive rhythm and synchronize with periodic patterns plays a critical role in neurodevelopment, particularly in language, music, and social interaction. Here, we first developed a computational model of the thalamocortical network capable of generating the brain rhythms associated with preterm infants. Using this model, we then investigated the early development of the neural response to external rhythm.

Methods
The model consists of (i) two recurrent excitatory-inhibitory neuron groups with adaptation representing the cortex-subplate network and (ii) one excitatory-inhibitory group with bursting representing the thalamus. All parameters remain constant except the I-E synaptic strength in the cortex-subplate network and the thalamocortical connections, which are varied to generate the different brain rhythms. We depict neurodevelopmental trajectories in our model using EEG recordings from 46 neonates (27-35 wGA) during rest and stimulation with a specific auditory stimulus [1]. The same stimulus was applied to validate auditory processing. A synchronization index assesses the network's alignment with stimulus oscillations. Developmental trajectories are compared between the model and premature EEG recordings.
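The abstract does not define the synchronization index; one common frequency-tagging variant, spectral amplitude at the stimulus frequency relative to neighboring bins, is sketched below as a plausible reading. Sampling rate, beat frequency, and the mock signal are illustrative.

import numpy as np

def sync_index(signal, fs, f_stim, half_width=5):
    """Amplitude at f_stim divided by the mean amplitude of neighboring bins."""
    amp = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    i = np.argmin(np.abs(freqs - f_stim))          # bin closest to stimulus frequency
    neighbors = np.r_[amp[i - half_width:i - 1], amp[i + 2:i + half_width + 1]]
    return amp[i] / neighbors.mean()

fs = 500
t = np.arange(0, 60, 1/fs)
eeg = np.sin(2*np.pi*2.4*t) + np.random.randn(t.size)   # mock response at a 2.4 Hz beat
print(sync_index(eeg, fs, f_stim=2.4))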


Results
Based on its free parameters, the model was tuned to achieve the best age matching with the EEG recordings [1]. Additionally, we were able to extract from the model two key features of premature signals, slope and inter-burst intervals, at different ages, consistent with experimental results. Exploiting the developmental regime that best fitted the evolution of spontaneous neural activity, we then show how the nonlinear interaction of auditory stimuli with the model's endogenous brain rhythms can produce different responses at different ages. Our computational model can thus explain the mechanism underlying the processing of auditory rhythms, as neural synchronization to beat and meter frequencies strengthens with age.
Discussion
Our model, with its free parameters, can explain the age-related changes in neural response and the increasing ability of infants to process rhythms with increasing gestational age at birth, previously observed in electrophysiological data. By varying E-I synaptic strengths and thalamocortical connections, the model can generate preterm spontaneous brain oscillations and effectively describe the neural response to auditory stimuli of different frequencies. This enables the model to explain the observation that neural synchronization to faster rhythms is present at all ages, while neural synchronization to slower, metric rhythms emerges only at higher ages.



Acknowledgements
No specific acknowledgments are applicable for this study
References
1. Saadatmehr, B., Edalati, M., Wallois, F., Ghostine, G., Kongolo, G., Flaten, E., Tillmann, B., Trainor, L., & Moghimi, S. (2025). Auditory rhythm encoding during the last trimester of human gestation: from tracking the basic beat to tracking hierarchical nested temporal structures. The Journal of Neuroscience, 45(4), 1-10. https://doi.org/10.1523/JNEUROSCI.0398-24.2024


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P182: Computational aspects of microarousals during awakening from anesthesia
Monday July 7, 2025 16:20 - 18:20 CEST
P182 Computational aspects of microarousals during awakening from anesthesia

Arnau Manasanch*1,2, Leonardo Dalla Porta1, Melody Torao-Angosto1, Maria V. Sanchez-Vives1,3
1Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
2Facultat de Medicina I Ciències de la Salut, Universitat de Barcelona, 08036 Barcelona, Spain
3ICREA, Passeig Lluís Companys 23, 08010 Barcelona, Spain

*Email: manasanch@clinic.cat

Introduction
The study of brain states is fundamental to understanding consciousness and its neural mechanisms [1,2]. Both sleep and anesthesia provide valuable models for investigating and characterizing brain states and their transitions [3,4,5]. While extensive research has characterized microarousals (MAs), brief wake-like periods of brain activity, during sleep [6], they remain almost unexplored during anesthesia. Emerging evidence suggests that these transient events may be modulated by an infraslow rhythm [7,8,9] influencing arousal dynamics during emergence from anesthesia. Here, we investigate the dynamics of MAs during anesthetic emergence using local field potential (LFP) recordings from anesthetized rats, shedding light on the infraslow modulation of transient arousals.



Methods

To obtain long-term LFP recordings in freely moving Lister-Hooded rats (6-10 months old), electrodes were chronically implanted 600 µm deep in the cortex. EMG was recorded from the neck muscle. After post-surgical care, animals underwent five days of handling before recordings. LFPs were recorded during anesthesia induction and emergence. The protocol was the same as in [10]. Briefly, each subject received a single intraperitoneal injection of anesthesia consisting of ketamine (20-40 mg/kg) and medetomidine (0.15-0.3 mg/kg). Cortical activity was monitored from wakefulness to full emergence from anesthesia. Experiments followed Spanish and EU regulations and were approved by the Ethics Committee of the Universitat de Barcelona (287/17 P3).
Results
After remaining in the slow oscillatory state, characterized by alternating Up (high-firing) and Down (silent) periods, for 2-3 hours, brain dynamics abruptly transitioned to a state dominated by fast oscillations (~6 Hz) and wake-like microarousals. As anesthesia wore off, MAs progressively increased in duration. This transition appeared to be modulated by an infraslow oscillation (~0.14 Hz during the steady-state period), which gradually slowed (reaching ~0.04 Hz) in the progression toward wakefulness. Analysis of MAs across subjects reveals a consistent trend of increasing MA duration over time, with MA durations following power-law distributions. These distributions show an average exponent of 2.33 ± 0.36, suggesting that microarousals exhibit characteristic scaling behavior across subjects.
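For reference, exponents of this kind can be estimated with the standard maximum-likelihood estimator for power-law tails. This is a minimal continuous-case sketch; the choice of x_min and the mock data are illustrative, and packages such as powerlaw add x_min selection and goodness-of-fit testing.

import numpy as np

def powerlaw_mle(durations, x_min):
    """Continuous MLE: alpha = 1 + n / sum(ln(x_i / x_min)) for x_i >= x_min,
    with standard error (alpha - 1) / sqrt(n) (Clauset et al. 2009)."""
    x = np.asarray(durations, float)
    x = x[x >= x_min]
    alpha = 1.0 + x.size / np.sum(np.log(x / x_min))
    return alpha, (alpha - 1.0) / np.sqrt(x.size)

# Mock MA durations drawn from a power law with alpha = 2.3 via inverse transform
u = np.random.rand(2000)
samples = 0.5 * (1 - u) ** (-1 / (2.3 - 1))   # x_min = 0.5 s
print(powerlaw_mle(samples, x_min=0.5))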


Discussion
Our findings suggest that the increasing duration of microarousals (MAs) as anesthesia wears off reflects a gradual transition toward wakefulness, with dynamics that share properties with awakening from sleep. The power-law behavior of MA durations indicates a scale-invariant process, a hallmark of self-organized criticality. This framework provides a new understanding of the microarchitecture of anesthesia, offering a window into controlled microarousals and the network dynamics from unconsciousness to consciousness.





Acknowledgements
The EU Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3); INFRASLOW PID2023-152918OB-I00 funded by MICIU / AEI / 10.13039/501100011033/FEDER. Co-funded by Departament de Recerca i Universitats de la Generalitat de Catalunya (AGAUR 2021-SGR-01165). IDIBAPS is funded by the CERCA program (Generalitat de Catalunya).
References
[1] https://doi.org/10.1038/nrn3084
[2] https://doi.org/10.1016/j.tins.2023.04.001
[3] https://doi.org/10.1126/science.8235588
[4] https://doi.org/10.1213/ANE.0000000000005361
[5] https://doi.org/10.1016/j.conb.2017.04.011
[6] https://doi.org/10.1016/s0987-7053(99)80016-1
[7] https://doi.org/10.1016/j.celrep.2021.109270
[8] https://doi.org/10.1016/j.neuron.2024.12.009
[9] https://doi.org/10.1038/s41593-024-01822-0
[10] https://doi.org/10.3389/fnsys.2021.609645
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P183: Biophysically detailed neuron models with genetically defined ion channels
Monday July 7, 2025 16:20 - 18:20 CEST
P183 Biophysically detailed neuron models with genetically defined ion channels

Darshan Mandge*1,2, Rajnish Ranjan2, Emmanuelle Logette2, Tanguy Damart2, Aurélien Tristan Jaquier2, Lida Kanari2, Daniel Keller2, Yann Roussel2, Stijn van Dorp2, Werner Van Geit1, and Henry Markram2
1Open Brain Institute, 1005 Lausanne, Switzerland
2Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
*Email: darshan.mandge@openbraininstitute.org
Introduction

Cortical neurons can be classified into different electrical firing types (e-types). A common approach to modelling these e-types involves the creation of detailed electrical models (e-models) using generic ion channel currents such as transient and persistent sodium, potassium channels, and high- and low-voltage-activated calcium channels [1]. While this approach accurately captures a neuron's electrical behaviour, it does not establish a link between specific ion channels and observed electrophysiological properties.
Methods
We have now built 47 homomeric ion channel models corresponding to various potassium [2], sodium, calcium, and hyperpolarization-activated cyclic nucleotide-gated (HCN) ion channels. These genetic ion channel models were based on independent experimental data from the heterologous expression of the corresponding genes. The genetic channels, along with some generic ion channels, were used in this study to construct cortical e-type models from detailed morphological reconstructions and electrophysiological data collected in the rat somatosensory cortex. We built a Python-based pipeline called BluePyEModel [3] to build such e-models.
Results
The optimized e-models reproduce firing properties observed in in vitro recordings. Electrical features of the optimized e-models were found to be within 3–5 standard deviations of the corresponding experimental means.
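A minimal Python sketch of the reported standard-deviation check; the feature names and experimental statistics below are hypothetical placeholders, not BluePyEModel code or actual recordings.

import numpy as np

# Hypothetical electrical feature values from an optimized e-model (names illustrative).
model_features = {"AP_amplitude": 78.0, "mean_frequency": 12.5, "AHP_depth": 14.0}

# Hypothetical experimental means and standard deviations for the same e-type.
exp_stats = {
    "AP_amplitude": (75.0, 5.0),
    "mean_frequency": (11.0, 1.0),
    "AHP_depth": (15.0, 2.0),
}

# Feature distance in units of experimental standard deviations, the usual
# objective in feature-based e-model optimization.
for name, value in model_features.items():
    mean, std = exp_stats[name]
    z = abs(value - mean) / std
    print(f"{name}: {z:.2f} SD {'(within 5 SD)' if z <= 5 else '(outside 5 SD)'}")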
Discussion
These biophysically detailed models enable a better understanding of the electrical activity in normal and pathological states of neurons. In the future, we will make these e-models available on the Open Brain Institute (OBI) platform, https://openbraininstitute.org/. The OBI platform provides a comprehensive repository of digital brain models and standardised computational modelling services to enable users to conduct realistic brain simulations, test hypotheses, and explore the complexities at various modelling levels – subcellular, cellular, circuit and systems.



Acknowledgements
This study was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology.


References
1. https://doi.org/10.1016/j.patter.2023.100855
2. https://doi.org/10/ghqvg8
3. https://doi.org/10.5281/zenodo.8283490


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P184: Spontaneous oscillations and neural avalanches are linked to whisker stimulation response in the rat-barrel and thalamus circuit
Monday July 7, 2025 16:20 - 18:20 CEST
P184 Spontaneous oscillations and neural avalanches are linked to whisker stimulation response in the rat-barrel and thalamus circuit

Benedetta Mariani*1, Ramon Guevara Erra1,2, Mattia Tambaro3,4, Marta Maschietto3, Alessandro Leparulo3,5, Stefano Vassanelli3, Samir Suweis1,2
1Padova Neuroscience Center, University of Padova, Padova, Italy
2Department of Physics and Astronomy, University of Padova, Padova, Italy
3Department of Biomedical Sciences, University of Padova, Padova, Italy
4Department of Physics, University of Milano Bicocca, Milan, Italy
5Department of Neuroscience, University of Padova, Padova, Italy

*Email: benedetta.mariani@unipd.it

Introduction
The cerebral cortex operates in a state of restless activity, even in the absence of external stimuli [1,2]. Collective neuronal activities, such as neural avalanches [3] and collective oscillations [4], are also found under resting conditions, and these features have been suggested to support sensory processing and brain readiness for rapid responses [2]. However, most of these results are supported by theoretical models rather than experimental observations. The rat barrel cortex and thalamus circuit, with its somatotopic organization for processing whisker movements, provides a powerful system to explore the interplay between spontaneous and evoked activities.
Methods
To characterize the resting-state circuits, we perform multi-electrode recordings in both the rats' barrel cortex and thalamus through a neural probe, both during spontaneous activity and after controlled whisker stimulation. We decompose the LFP signals into their frequency contents through Empirical Mode Decomposition, a tool suited to analyzing non-linear and non-stationary oscillations. We also analyze avalanche distributions by detecting events in MUA activity and grouping them by temporal proximity. We then employ a mesoscopic firing rate model, fitted on real data [5], to understand the observed phenomenology. It receives the experimental thalamic firing rate as input.
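A minimal Python sketch of the avalanche-detection step described above, assuming pooled MUA event times and an illustrative 4 ms proximity bin.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MUA event (spike) times in seconds, pooled across channels.
event_times = np.sort(rng.uniform(0, 60, 5000))

def detect_avalanches(times, max_gap=0.004):
    # Group events into avalanches: events closer than max_gap belong together.
    gaps = np.diff(times)
    breaks = np.where(gaps > max_gap)[0] + 1  # indices where a new avalanche starts
    avalanches = np.split(times, breaks)
    sizes = np.array([len(a) for a in avalanches])
    durations = np.array([a[-1] - a[0] for a in avalanches])
    return sizes, durations

sizes, durations = detect_avalanches(event_times)
print(f"{len(sizes)} avalanches; mean size {sizes.mean():.1f}, "
      f"mean duration {durations.mean()*1e3:.2f} ms")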

Results
During spontaneous activity, we find 10–15 Hz oscillations in the barrel cortex concomitantly with slow 1–4 Hz oscillations, as well as power-law distributed avalanches. The slow oscillations are also present in the thalamus, while the 10–15 Hz oscillation is absent. We find that the phase of the slow oscillation modulates the higher-frequency amplitude, as well as avalanche occurrences. We then record neural activity during controlled whisker movements to confirm that the 10–15 Hz barrel oscillation is amplified after whisker stimulation. We finally show how the thalamic-driven firing rate model can describe the entire phenomenology observed and predict the response to whisker stimulations.
Discussion

Our results show that even during spontaneous activity the rat barrel cortex displays a rich dynamical state that includes avalanches and oscillations, which are coupled through the slow oscillation. The 10–15 Hz oscillation is amplified after whisker stimulation, suggesting that spontaneous neural activity primes the rat cortex for the whisker response. These findings are confirmed by our model, which is able to reproduce the resting-state phenomenology and the amplification of oscillations after stimulation, thanks to the thalamic input to the cortex. Moreover, the barrel cortex oscillatory behavior may allow a flexible synchronization mechanism for the perception of stimuli.





Acknowledgements
Work by B.M. and S.S. is supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) (DN. 1553, 11.10.2022). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References
[1] Raichle, M. E. (2011). https://doi.org/10.1089/brain.2011.0019
[2] Smith, S. M., et al. (2009). https://doi.org/10.1073/pnas.0905267106
[3] Beggs, J. M., & Plenz, D. (2003). https://doi.org/10.1523/JNEUROSCI.23-35-11167.2003
[4] Singer, W. (2018). https://doi.org/10.1111/ejn.13796
[5] Pinto, D., et al. (1996). https://doi.org/10.1007/BF00161134




Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P185: Fitting the data and describing neural computation with interaural time differences in the human medial superior olive
Monday July 7, 2025 16:20 - 18:20 CEST
P185 Fitting the data and describing neural computation with interaural time differences in the human medial superior olive

Petr Marsalek*1, Pavel Sanda2, Zbynek Bures3

1 Institute of Pathological Physiology, First Medical Faculty, Charles University in Prague, Czech Republic
2 Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
3 College of Polytechnics, Tolsteho 16/1556, 586 01, Jihlava, Czech Republic

*Email: petr.marsalek@lf1.cuni.cz


Introduction

In the auditory nerve and the following auditory pathway, incoming sound is encoded into spike trains - series of neural action potentials. At the third neuron of the auditory pathway, spike trains of the left and right sides converge and are processed to yield sound localization information. Two different localization encoding mechanisms are employed for low and high sound frequencies in two dedicated nuclei in the brainstem: the medial and lateral superior olivary nuclei.


Methods
The model neural circuit is based on connected phenomenological neurons. Spikes in these neurons are point events, only spike times matter. The model employs concepts of the just noticeable difference read out by the neural circuit and an ideal observer with access to all the information.


Results
Building upon our previous computational model of the medial superior olive (MSO), we bring analytical estimates of parameters needed to describe auditory coding in the MSO circuit. We arrive at best estimates for neuronal signaling with the use of the just noticeable difference and ideal observer concepts. We describe spike timing jitter and its role in spike train processing. We study the dependence of sound localization precision on the sound frequency. All parameters are accompanied by detailed estimates of their values and variability.
Discussion
Lower and upper bounds for all parameters are discussed. Most of the results are obtained by Monte Carlo simulation of the noisy and random inputs to the model neurons. Where possible, analytical calculations of probabilities and curve fitting are used.
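A minimal Python sketch of such a Monte Carlo ideal-observer estimate; the spike-time jitter, spike count, and the 75%-correct criterion for the just noticeable difference (JND) are assumed values, not the model's fitted parameters.

import numpy as np

rng = np.random.default_rng(2)

def percent_correct(itd_us, jitter_us=100.0, n_spikes=20, n_trials=2000):
    # Ideal observer on jittered spike times: decide "right ear leads" when the
    # mean left-right timing difference is positive. Returns fraction correct.
    left = rng.normal(0.0, jitter_us, (n_trials, n_spikes))
    right = rng.normal(-itd_us, jitter_us, (n_trials, n_spikes))  # right leads by itd
    decision = (left - right).mean(axis=1) > 0.0
    return decision.mean()

# Sweep ITDs and read off the JND as the smallest ITD reaching 75% correct.
itds = np.arange(5, 200, 5)  # microseconds
pc = np.array([percent_correct(i) for i in itds])
jnd = itds[np.argmax(pc >= 0.75)]
print(f"estimated JND: ~{jnd} us for 100 us spike-time jitter")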



Acknowledgements
This project was in part funded by Charles University graduate students research program, acronym SVV, No. 260 519/ 2022-2024, to Petr Marsalek

References
Bures, Z. (2012). Biol. Cybern., 106(2), 111-122.
Bures, Z., & Marsalek, P. (2013). Brain Res., 1536, 16-26.
Sanda, P., & Marsalek, P. (2012). Brain Res., 1434, 257-265.
Marsalek, P., Sanda, P., & Bures, Z. (2020). arXiv preprint. https://arxiv.org/abs/2007.00524


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P186: Spatiotemporal dynamics of FitzHugh-Nagumo based reservoir computing networks for classification tasks
Monday July 7, 2025 16:20 - 18:20 CEST
P186 Spatiotemporal dynamics of FitzHugh-Nagumo based reservoir computing networks for classification tasks

Oleg V. Maslennikov*1, Dmitry S. Shchapin1, Vladimir I. Nekorkin1

1Department of Nonlinear Dynamics, Gaponov-Grekhov Institute of Applied Physics of the RAS, Nizhny Novgorod, Russia


*Email: olegmaov@gmail.com

Introduction

The paradigm of computation through dynamics is highly influential within the computational neuroscience community, as it elucidates how interacting neural elements give rise to specific sensory, motor, and cognitive functions [1-3]. This framework's findings are also pivotal for advancements in artificial intelligence and are of particular interest from a nonlinear dynamics perspective [4]. This paradigm is primarily based on recurrent neural networks (RNNs), which, unlike feed-forward networks, do not simply map inputs to outputs but instead rely on their intrinsic dynamic state.

Methods
One influential approach for designing and training RNNs is reservoir computing (RC), which was proposed over two decades ago [5]. RC modifies only the output weights while keeping the recurrent weights fixed. RNNs are not only models for engineering applications but also fundamental tools for understanding basic cognitive functions that emerge from brain dynamics. From a dynamical systems perspective, their performance is closely related to the underlying dynamic regime. An interesting approach relies on models traditional to the computational neuroscience community, such as spiking dynamical neurons.
Results
In this study, we investigate networks composed of coupled FitzHugh-Nagumo (FHN) neurons and examine their capabilities for classification tasks. The neurons within these networks are interconnected via fixed electrical synapses, and the output weights are trained within the reservoir computing framework. We utilize two-feature synthetic datasets for binary classification as inputs to our RNNs, where the output units read out neural activity to indicate the class. We employ several encoding schemes, including time-to-first-spike and rate-based coding, to generate spiking patterns from static two-dimensional inputs, and analyze how neural dynamics influence the performance of classification tasks. We show that the nonlinear processing capabilities of FHN neurons enable effective handling of complex signals, such as the discrimination of linearly inseparable classes.
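A minimal Python sketch of an FHN reservoir with fixed random electrical coupling and a ridge-regression readout. The network size, coupling strength, synthetic task, and the static-current input encoding (used here instead of the spike-time and rate encodings above) are all assumptions; whether the reservoir separates the classes depends on the operating point.

import numpy as np

rng = np.random.default_rng(3)

N, g = 50, 0.1                        # reservoir size, coupling strength (assumed)
W = (rng.random((N, N)) < 0.1) * 1.0  # fixed random electrical coupling graph
np.fill_diagonal(W, 0.0)
w_in = rng.normal(0.0, 1.0, (N, 2))   # input weights for the two features

def run_reservoir(x, T=500, dt=0.05, eps=0.08, a=0.7, b=0.8):
    v = rng.normal(0, 0.1, N)
    w = np.zeros(N)
    acc = np.zeros(N)
    I = w_in @ x                      # static two-feature input as constant current
    for _ in range(T):
        coup = g * (W @ v - W.sum(1) * v)   # diffusive (electrical) coupling
        dv = v - v**3 / 3 - w + 0.5 + I + coup
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        acc += v
    return acc / T                    # time-averaged activity as reservoir state

# Synthetic two-feature, linearly inseparable (XOR-like) binary task.
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
S = np.array([run_reservoir(x) for x in X])

# Ridge-regression readout: only the output weights are trained.
lam = 1e-3
w_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ (2 * y - 1))
acc = ((S @ w_out > 0) == y.astype(bool)).mean()
print(f"training accuracy: {acc:.2f}")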
Discussion
The integration of FHN neurons into reservoir computing frameworks offers a powerful approach for tackling complex computational tasks. The model's inherent nonlinear dynamics, coupled with its ability to operate near criticality, enhance the performance and robustness of RC systems. Our results highlight the efficiency of FHN-based reservoirs in achieving high classification accuracy while maintaining a manageable computational load. As research progresses, the application of these biologically inspired models is expected to expand across various fields, including robotics, neurophysiology, and artificial intelligence.





Acknowledgements
This work was supported by the Russian Science Foundation, grant No 23-72-10088.
References
1. Vyas, S., Golub, M. D., Sussillo, D., & Shenoy, K. V. (2020). Computation through neural population dynamics. Annual Review of Neuroscience, 43(1), 249-275.
2. Barak, O. (2017). Current Opinion in Neurobiology, 46, 1-6.
3. Sussillo, D. (2014). Current Opinion in Neurobiology, 25, 156-163.
4. Ramezanian-Panahi, M., Abrevaya, G., Gagnon-Audet, J. C., Voleti, V., Rish, I., & Dumas, G. (2022). Frontiers in Artificial Intelligence, 5, 807406.
5. Lukoševičius, M., & Jaeger, H. (2009). Computer Science Review, 3(3), 127-149.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P187: Climbing fiber impact on human and mice Purkinje cell spines
Monday July 7, 2025 16:20 - 18:20 CEST
P187 Climbing fiber impact on human and mice Purkinje cell spines

Stefano Masoli*1, Egidio D'Angelo1,2

1 Department of Brain and Behavioral Science, University of Pavia, Pavia, Italy
2Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: stefano.masoli@unipv.it

Introduction

Purkinje cells (PC) are one of the most complex neurons of the nervous system and can integrate multiple inputs through their dendritic tree, dotted with tens of thousands of dendritic spines. Two excitatory pathways make synapses with PC spines: one is transmitted by granule cells (GrC) through their ascending axons (aa) and parallel fibers (pf), and the second one by climbing fibers (cf) originating from the inferior olive nucleus. The impact of pf activity on PCs was studied with a multi-compartmental model [1], which was later improved with human and mouse morphologies and dendritic spines [2]. The impact of cfs on PCs is still highly debated, which prompted this study using the latest PC models with the most up-to-date experimental information.
Methods


Mouse and human PC models with active spines [2] were expanded with five ionic channel types, with locations based on immunohistochemical studies. Dendritic spines were also improved based on the latest experimental data [3]. AMPA and NMDA receptors were tuned to generate fast paired-pulse depression. The cf synapses were distributed on spines in the territory between the pfs and the aspiny trunks. The synaptic impact was tested with cfs alone at various frequencies and together with pfs. Because of the massive number of sections involved, the simulations were performed with 48 cores on an AMD Threadripper 7980X. The simulation environment was NEURON 8.2.4 [4] with Python 3.10.16.
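A minimal NEURON/Python sketch of the burst protocol on a spine-like compartment (requires NEURON's Python module); the morphology, synaptic kinetics, and weight are placeholders, not the published model's parameters.

# Burst of 3 spikes at 6 ms intervals (180 Hz) onto a passive spine-like section.
from neuron import h
h.load_file("stdrun.hoc")

spine = h.Section(name="spine")
spine.L = spine.diam = 1.0          # um, illustrative spine-head dimensions
spine.insert("pas")

syn = h.Exp2Syn(spine(0.5))         # two-exponential synapse standing in for AMPA
syn.tau1, syn.tau2, syn.e = 0.2, 2.0, 0.0

burst = h.NetStim()
burst.number, burst.interval, burst.start = 3, 6.0, 10.0  # 3 spikes, 180 Hz
nc = h.NetCon(burst, syn)
nc.weight[0] = 0.001                # uS, assumed synaptic weight

v = h.Vector()
v.record(spine(0.5)._ref_v)
h.finitialize(-65)
h.continuerun(50)
print(f"peak depolarization: {max(v):.2f} mV")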
Results
The models reproduced intrinsic and synaptic properties similar to those shown in previous PC models [1,2,5]. The stimulations were performed with bursts composed of 3 spikes at 6 ms intervals (180 Hz). The number of spines required to generate a complex spike was estimated to be 600 in mice and 1500 in humans. With these numbers, the mouse model showed the typical complex spike shape. The human model, instead, could not generate such a response because of its three distinct trees. A distributed approach, with a single cf for each tree [6], showed results similar to the mouse. The synchronous activation of pfs and cfs showed localized calcium increases in the spines near the stimulation sites.
Discussion
Validated multi-compartmental models built in Python/NEURON allow the exploration of behaviours that are not yet accessible to experimental techniques. The complex spike recorded in the mouse model matched multiple published papers; human recordings of this response are not yet viable. The model showed that the calcium in each separate trunk required a single cf to generate correct responses. This was proposed by a recent paper [6], and with a single cf terminal for each main trunk, the simulations showed results in line with the mouse model. Activation of multiple cfs at the same time, on the same human morphology, in connection with the burst-pause behavior, can generate an extensive parameter space.



Acknowledgements
This project/research received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Framework Partnership Agreement No. 650003 (HBP FPA).


References

1. https://www.doi.org/10.3389/fncel.2017.00278
2. https://www.doi.org/10.1038/s42003-023-05689-y
3. https://www.doi.org/10.1101/2024.09.09.612113
4. https://www.doi.org/10.3389/neuro.11.001.2009
5. https://www.doi.org/10.3389/fncel.2015.00047
6. https://www.doi.org/10.1126/science.adi1024


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P188: Neural coding of subthreshold sinusoidal inputs into symbolic temporal spike patterns
Monday July 7, 2025 16:20 - 18:20 CEST
P188 Neural coding of subthreshold sinusoidal inputs into symbolic temporal spike patterns


Maria Masoliver1, Cristina Masoller*1
1Departament de Física, Universitat Politècnica de Catalunya, Terrassa, Spain
*Email: cristina.masoller@upc.edu


Introduction

Neuromorphic photonics is a new paradigm for optical computing that can revolutionize the fields of signal processing and artificial intelligence. To develop photonic neurons able to process information as sensory neurons do, we need to identify excitable lasers able to emit pulses of light (optical spikes) that are similar to neuronal spikes, and to implement in these lasers the neural coding mechanisms used by neural systems to process information, in particular the mechanisms used to process weak external inputs in noisy environments.
Methods
We use the stochastic FitzHugh-Nagumo model to simulate spike sequences fired in response to weak (subthreshold) sinusoidal signals. We also use this model to simulate the activity of a population of neurons, when they all perceive the same subthreshold sinusoidal input. We use a symbolic time series analysis method, known as ordinal analysis [1], to analyze the sequences of inter-spike intervals.
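A minimal Python sketch of the Bandt-Pompe ordinal analysis applied to inter-spike intervals, with embedding dimension 3 and a synthetic ISI sequence standing in for the simulated spike trains.

import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)

def ordinal_probabilities(x, d=3):
    # Bandt-Pompe ordinal-pattern probabilities for embedding dimension d.
    patterns = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        patterns[tuple(np.argsort(x[i:i + d]))] += 1
    total = len(x) - d + 1
    return {p: c / total for p, c in patterns.items()}

# Hypothetical inter-spike intervals (seconds): noisy, weakly periodic sequence.
isi = 0.05 + 0.01 * np.sin(0.5 * np.arange(2000)) + 0.005 * rng.normal(size=2000)
for pattern, prob in ordinal_probabilities(isi).items():
    print(pattern, f"{prob:.3f}")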

Results

In the analysis of the spikes of single neurons, we found that the probabilities of the symbols (ordinal patterns) encode information about the signal, because they depend on both the amplitude and the frequency of the signal.
In the analysis of the spikes generated by a population of neurons, we also found that the ordinal probabilities encode information about the amplitude and the period of the signal perceived by the neurons. We found that neuronal coupling benefits signal encoding, because groups of neurons are able to encode a small-amplitude signal that cannot be encoded when it is perceived by just one or two neurons [2]. Interestingly, we found that for a population of neurons, just a few random links between them can significantly improve signal encoding.
Discussion

We have found that the probabilities of spike patterns in spike sequences may encode information about a weak (subthreshold) input perceived by the neurons.
An open question is whether this coding mechanism can be implemented in excitable lasers that emit pulses of light (optical spikes) whose statistical properties are similar to those of neuronal spikes. Using ordinal analysis and machine learning, we have found that the sequences of optical spikes emitted by a laser diode in response to low- or high-frequency signals are located in different regions of a 3D feature space, suggesting that information about the frequency of the input signal can be recovered from the analysis of the emitted optical spikes [3].





Figure 1. Left: Optical spikes emitted by an excitable laser (nanosecond time scale); right: neuronal spikes simulated with the FitzHugh Nagumo model (millisecond time scale).
Acknowledgements
Ministerio de Ciencia, Innovación y Universidades (No. PID2021-123994NB-C21), Institució Catalana de Recerca i Estudis Avançats (ICREA Academia), Agencia de Gestió d’Ajuts Universitaris i de Recerca (AGAUR, No. 2021 SGR 00606).
References
[1] Bandt, C., & Pompe, B. (2002). Permutation entropy: a natural complexity measure for time series. Phys. Rev. Lett., 88, 174102. https://doi.org/10.1103/PhysRevLett.88.174102
[2] Masoliver, M., & Masoller, C. (2020). Neuronal coupling benefits the encoding of weak periodic signals in symbolic spike patterns. Commun. Nonlinear Sci. Numer. Simulat., 88, 105023. https://doi.org/10.1016/j.cnsns.2019.105023
[3] Boaretto, B. R. R., Macau, E. E. N., & Masoller, C. (2024). Characterizing the spike timing of a chaotic laser by using ordinal analysis and machine learning. Chaos, 34, 043108. https://doi.org/10.1063/5.0193967


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P189: Elementary Dynamics of Neural Microcircuits
Monday July 7, 2025 16:20 - 18:20 CEST
P189 Elementary Dynamics of Neural Microcircuits

Stefano Masserini*1,2,3, Richard Kempter1,2,3

1 Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
2Bernstein Center for Computational Neuroscience, Berlin, Germany
3Charité-Universitätsmedizin Berlin, Einstein Center for Neurosciences, Berlin, Germany

*Email: stefanomasse@gmail.com

Introduction

Cell type diversity is a major direction in which systems neuroscience has expanded in the last decade, as networks of excitatory (E) and inhibitory (I) neurons have been enriched with specific neuronal populations, each with its own distinct role in the network dynamics. These advances have mostly been driven by new experimental techniques, often inspiring circuit-specific modeling, even when stark similarities across cortical areas would have allowed describing the dynamics of these microcircuits with a more general mathematical language. Steps toward a general description have been taken by using linear approximations to understand how connectivity shapes responses to perturbations from within or outside the network [1,2].

Methods
In this work, we expand on these findings by studying microcircuit dynamics in the simplest nonlinear model, the threshold-linear network (TLN), and generalize insights originally obtained for all-inhibitory TLNs [3]. This model greatly extends the dynamical repertoire of purely linear networks, by allowing for oscillations and multistability. On the other hand, it retains the simplicity of linear models, since the conditions for each nonlinear regime can be computed in closed form and intuitively interpreted in terms of input and connectivity requirements. With this tool, we not only map previously unrelated systems neuroscience hypotheses to a common reference space, but also gain new insights into specific circuits across the brain.
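A minimal Python sketch of TLN dynamics for a two-population E-I motif; the weights and input are illustrative, not the circuits analyzed in this work.

import numpy as np

def simulate_tln(W, b, x0, T=2000, dt=0.01):
    # Threshold-linear network: dx/dt = -x + [Wx + b]_+ (Euler integration).
    x = np.array(x0, dtype=float)
    traj = np.empty((T, len(x)))
    for t in range(T):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
        traj[t] = x
    return traj

# Illustrative E-I pair (assumed weights): E excites I, I inhibits E.
W = np.array([[0.0, -2.0],   # E <- I inhibition
              [3.0,  0.0]])  # I <- E excitation
b = np.array([1.0, 0.0])     # external drive to E
traj = simulate_tln(W, b, x0=[0.1, 0.1])
print("final rates (E, I):", np.round(traj[-1], 3))  # fixed point at (1/7, 3/7)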
Results
Namely, we compare balancing strategies in inhibition-stabilized E-I networks and discuss different types of bistability in hippocampal E-I-I networks. We then examine the conditions for gamma oscillations in the canonical circuit (Fig. 1A), providing a mechanistic explanation for the opposing effects of PV and SOM interneurons [4]. In E-E-I circuits, we show that connectivity determines three fundamentally different types of assembly interactions, while in E-E-I-I circuits we find that balanced clustering prevents coordinated inputs to one E-I unit from exerting lateral inhibition (Fig. 1B), while opponent clustering can induce competition even between strongly coupled E assemblies, resulting in different bistable configurations (Fig. 1C).
Discussion
While TLNs have so far not been regarded as a standard rate model for neural populations, these applications show that they can provide interpretable conditions even for the emergence of complex dynamical landscapes. These conditions should be taken into account by future modeling work on neural microcircuits, at least as a benchmark to determine whether additional complexity is necessary to explain their dynamics of interest. The simple structure of this model is also amenable to the addition of variables representing synaptic plasticity or slow adaptive currents. TLNs can also be directly compared to spiking networks, for example because they are the first-order mean-field limit for networks of Poisson neurons [5].




Figure 1. (A) Canonical circuit. (Aii-iii) Oscillation coherence. (Aiv) Effects of impairing SOM or PV (matching shading). (B-C) EEII network. (Bii-iii) Firing modulation wrt bottom left point. (Biv) Modulation example (shaded area). (Cii) Dynamical landscape, smaller regions are EII or EEI bistability. (Ciii) Lateral inhibition by either inputs to E1 or I1. (Civ) EI bistability. Input to I1 induces switch.
Acknowledgements
The authors thank Gaspar Cano, Carina Curto, Atilla Kelemen, John Rinzel, Archili Sakevarashvili, and Tilo Schwalger for insightful discussions about this study. Funding source: German Research Foundation, project 327654276 (SFB 1315).
References
[1] https://doi.org/10.1101/2020.10.13.336727
[2] https://doi.org/10.1073/pnas.231104012
[3] https://doi.org/10.48550/arXiv.1804.00794
[4] https://doi.org/10.1038/nn.4562
[5] https://doi.org/10.48550/arXiv.2412.16111
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P190: Simulating healthy and diseased behaviour using spiking neurons
Monday July 7, 2025 16:20 - 18:20 CEST
P190 Simulating healthy and diseased behaviour using spiking neurons

Mavritsaki, E.1,3, Klein, J.1, Porwal, N.1, Allen, H.A.2, Bowman, H.3, Amanatidou, V.4, Cook, A.1, Clibbens, J.1 and Lintern, M.1

1College of Psychology, Birmingham City University, Birmingham, UK
2School of Psychology, University of Nottingham, Nottingham, UK
3School of Psychology, University of Birmingham, Birmingham, UK
4Worcestershire Health and Care Trust, UK

*Email: eirini.mavritsaki@bcu.ac.uk

Introduction

Spiking neural networks have proven highly effective in simulating both healthy and diseased neural behaviour. They offer researchers the opportunity to simultaneously study behaviour and understand its relationship with the underlying biological properties of the system. The approach is particularly valuable as these networks accurately mimic real neuronal communication, providing a more biologically accurate model compared to traditional methods, allowing researchers to analyse time-dependent patterns and providing deeper insights into neural dynamics and cognitive processes. Consequently, spiking neural networks have become an invaluable tool for advancing brain studies and neurological research. In this work, we present two studies utilizing spiking neural networks extending our previous work using the spiking Search over Time and Space (sSoTS) model.
Methods
The sSoTS model is a spiking neural model incorporating a fast excitatory AMPA recurrent current, a slow excitatory NMDA current, an inhibitory GABA current, and a slow Ca2+-activated K+ current (IAHP). We built upon our previous research in visual search (Mavritsaki et al., 2011; Mavritsaki & Humphreys, 2016) to simulate behavioural findings from our lab on attention in adults, children, and children who score high on the Conners 3AI index for ADHD. We also built upon our previous Alzheimer's work (Mavritsaki et al., 2019) to simulate the N400 and P600 components in the semantic category judgment task (Olichney et al., 2000), which has been used to track ERP changes in patients progressing through MCI to mild AD. See Figure 1.
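A minimal Python sketch of the adaptation produced by a slow Ca2+-activated K+ (AHP) current in a leaky integrate-and-fire unit; all parameter values are assumptions, not the sSoTS model's.

import numpy as np

rng = np.random.default_rng(5)

dt, T = 0.1, 1000.0                  # ms
tau_m, v_rest, v_th, v_reset = 20.0, -70.0, -50.0, -60.0
tau_ca, g_ahp, e_k, d_ca = 200.0, 0.02, -90.0, 0.2

v, ca = v_rest, 0.0
spikes = []
for step in range(int(T / dt)):
    i_ahp = g_ahp * ca * (e_k - v)   # AHP current grows with accumulated [Ca2+]
    i_in = 25.0 + rng.normal(0, 2)   # constant drive plus noise (mV-scaled)
    v += dt * ((v_rest - v) / tau_m + (i_in + i_ahp) / tau_m)
    ca -= dt * ca / tau_ca           # calcium decays slowly between spikes
    if v >= v_th:
        spikes.append(step * dt)
        v = v_reset
        ca += d_ca                   # calcium influx at each spike
isis = np.diff(spikes)
# Spike-frequency adaptation shows up as a lengthening inter-spike interval.
print(f"{len(spikes)} spikes; first ISI {isis[0]:.1f} ms, last ISI {isis[-1]:.1f} ms")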

Results
Results from our visual search paradigm demonstrate that reducing coupling between neurons in the model successfully simulates the differences between adults and children. Furthermore, our findings suggest that temporal binding between feature items may be a key mechanism underlying differences observed between healthy children and those scoring high on the Conners 3AI test, as reducing this parameter in the model reproduced the observed differences. In our Alzheimer's work, we simulated the biomarkers found with the N400 and P600 ERP components by modelling the semantic category judgment task and modifying parameters related to pathological ionic, neurotransmitter, and atrophy modulations.

Discussion
Results from both studies demonstrate the importance of using spiking neural networks in computational modelling, as they provide valuable insights into brain functions, link different methodologies, and help understand changes that occur in diseased brains. Our Alzheimer's work shows that the disease's pathology can be measured through N400 and P600 congruency effects, thus validating ERPs as biomarkers for AD. Our visual search and ADHD work identifies the crucial role of binding in visual search and provides valuable insights into the ADHD condition that can support updates to the diagnostic criteria for ADHD.





Figure 1. The top part of the figure illustrates the key neuronal properties of the spiking neural network model. The bottom left panel shows the network connectivity implemented to simulate the semantic category judgment task in our Alzheimer's disease study, while the bottom right panel depicts the neural network configuration used to simulate visual search task performance in our ADHD behavioural study.
Acknowledgements
The computations described in this paper were performed using the University of Birmingham's BEAR Cloud service, which provides flexible resources for intensive computational work to the University's research community. See http://www.birmingham.ac.uk/bear for more details.
References
Mavritsaki, E., Bowman, H., & Su, L. (2019). Springer International Publishing. https://doi.org/10.1007/978-3-030-18830-6_11
Mavritsaki, E., Heinke, D., Allen, H., Deco, G., & Humphreys, G. W. (2011). Bridging the gap between physiology and behavior. Psychological Review, 118(1), 3-41. https://doi.org/10.1037/A0021868
Mavritsaki, E., & Humphreys, G. (2016). Journal of Cognitive Neuroscience, 28(10). https://doi.org/10.1162/jocn_a_00984

Olichney, J. M., Van Petten, C., Paller, K. A., Salmon, D. P., Iragui, V. J., & Kutas, M. (2000). Brain, 123(9), 1948-1963. https://doi.org/10.1093/brain/123.9.1948
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P191: "Brain Fluidity as a Biomarker for Alzheimer's Disease: Linking Network Dynamics to Clinical Disability Prediction"
Monday July 7, 2025 16:20 - 18:20 CEST
P191 "Brain Fluidity as a Biomarker for Alzheimer's Disease: Linking Network Dynamics to Clinical Disability Prediction"

Camille Mazzara*1,2,3, Gian Marco Duma4, Giuditta Gambino5, Giuseppe Giglia5, Michele Migliore2, Pierpaolo Sorrentino3,6,7
1. Department of Promoting Health, Maternal-Infant Excellence and Internal and Specialized Medicine (PROMISE) G. D’Alessandro, University of Palermo, Palermo, Italy.
2. Institute of Biophysics, National Research Council, Palermo, Italy.
3. Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France.
4. IRCCS E. Medea Scientific Institute, Conegliano, Treviso, Italy.
5. Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy.
6. Institute of Applied Sciences and Intelligent Systems, National Research Council, Pozzuoli, Italy.
7. University of Sassari, Department of Biomedical Sciences, Viale San Pietro, 07100, Sassari, Italy.
*email: camille.mazzara@ibf.cnr.it

Introduction

Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline and large-scale network dysfunction [1]. While amyloid-beta (Aβ) and tau pathology are well documented [2,3], the network-level mechanisms linking neuronal degeneration to cognitive impairment remain poorly understood. Traditional functional connectivity (FC) analyses provide static representations of brain networks, failing to capture their intrinsic dynamics [4]. We propose brain fluidity, a metric that quantifies network flexibility, as a potential biomarker reflecting AD-related disruptions in brain dynamics.


Methods
After preprocessing and source reconstruction, we analyzed resting-state EEG data from 28 AD patients and 29 healthy controls. Brain fluidity was quantified by measuring the variability of functional connectivity over time, reflecting how interregional synchronization evolves. We assessed its relationship with established AD biomarkers, including cerebrospinal fluid (CSF) levels of Aβ42, phosphorylated tau (p-tau), and total tau (t-tau). Additionally, we examined associations between brain fluidity and cognitive performance (Mini-Mental State Examination (MMSE)). Statistical analyses included between-group comparisons and regression models to determine the predictive value of fluidity in tracking disease severity.
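A minimal Python sketch of one common fluidity estimator, the variance of the off-diagonal entries of the window-by-window FC similarity (FCD) matrix; the window length, step, and exact convention are assumptions, as the abstract does not fix the estimator.

import numpy as np

rng = np.random.default_rng(6)

# Hypothetical source-reconstructed band-limited signals (regions x samples),
# standing in for one subject's EEG data.
n_regions, n_samples = 20, 6000
data = rng.normal(size=(n_regions, n_samples))

def fluidity(data, win=200, step=50):
    # Sliding-window FC, then the FCD matrix (correlation between the upper
    # triangles of the windowed FC matrices); fluidity is taken here as the
    # variance of the FCD off-diagonal entries.
    iu = np.triu_indices(data.shape[0], k=1)
    fcs = []
    for start in range(0, data.shape[1] - win, step):
        fc = np.corrcoef(data[:, start:start + win])
        fcs.append(fc[iu])
    fcd = np.corrcoef(np.array(fcs))          # window-by-window similarity
    off = fcd[np.triu_indices(len(fcs), k=1)]
    return np.var(off)

print(f"fluidity: {fluidity(data):.4f}")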
Results
Fluidity analysis across frequency bands (theta, alpha, beta, gamma) revealed significant differences in AD patients (Fig. 1a). In the theta band (4–8 Hz), fluidity was higher in AD than in controls, while in the beta band (14–30 Hz) it was lower. Correlation analyses showed no significant associations between theta fluidity and clinical measures. However, beta fluidity negatively correlated with tTau and pTau (Fig. 1c), suggesting a link to neurodegeneration. Notably, no significant associations were found between fluidity and Aβ levels. Using a multilinear regression model, we also found that adding fluidity calculated in the beta band significantly improved the predictive power for clinical disability.
Discussion
These results could imply that changes in the brain's ability to flexibly switch between different dynamic states are associated with neurodegenerative processes, specifically tau-related damage. Reduced brain fluidity in the beta band may reflect underlying neurodegenerative processes, providing insight into the functional consequences of neuronal loss. Given its sensitivity to AD-related changes, brain fluidity may serve as a promising biomarker for tracking disease progression and evaluating treatment efficacy in clinical settings.





Figure 1. Fig.1 a) Fluidity for each frequency band in AD and control groups. b) dFC matrices averaged across AD (left) and control (right), computed in theta (top) and beta (bottom). c) Correlation between beta-band fluidity and tTau (left), pTau (center), Aβ (right), with significant links to tTau (p = 0.03) and pTau (p = 0.01), but not Aβ42.
Acknowledgements

References
1. https://doi.org/10.1016/j.lfs.2020.117996
2. https://doi.org/10.1590/S1980-57642009DN30300003
3. https://doi.org/10.7554/eLife.98920.1
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P192: Identifying Cell-Type-Specific Alterations Underlying Schizophrenia-Related EEG Deficits Using a Multiscale Model of Auditory Thalamocortical Circuits
Monday July 7, 2025 16:20 - 18:20 CEST
P192 Identifying Cell-Type-Specific Alterations Underlying Schizophrenia-Related EEG Deficits Using a Multiscale Model of Auditory Thalamocortical Circuits

Scott McElroy*1,2, James Chen1,2, Nikita Novikov1,2,3, Pablo Fernández-López4, Carmen Paz Suárez-Araújo4, Christoph Metzner5, Daniel Javitt3, Sam Neymotin3, Salvador Dura-Bernal1,2,3
1Global Center for AI, Society and Mental Health, SUNY Downstate Health Sciences University, Brooklyn, United States of America
2Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, United States of America
3Center for Biomedical Imaging & Neuromodulation, Nathan Kline Institute, Orangeburg, United States of America
4Instituto Universitario de Cibernética, Empresa y Sociedad, Universidad de Las Palmas de Gran Canaria, Gran Canaria, España
5Technische Universität Berlin, Berlin, Germany


*Email: scott.mcelroy@downstate.edu
Introduction
Schizophrenia is associated with cognitive deficits, including disruptions in sensory processing. Electroencephalography (EEG) studies have identified abnormalities in event-related potentials and cortical oscillations, particularly within the auditory system. Among the most well-established EEG biomarkers are the reduced 40 Hz Auditory Steady-State Response (ASSR) and impaired mismatch negativity (MMN). Understanding the neural mechanisms underlying these EEG deficits is critical for linking molecular and circuit-level alterations to cognitive dysfunctions in schizophrenia.

Methods
We extended our computational model of auditory thalamocortical circuits to investigate the circuit-level mechanisms underlying schizophrenia-related EEG abnormalities [1]. The model simulates a cortical column with over 12,000 neurons and 30 million synapses, incorporating experimentally derived neuron densities, laminar organization, morphology, biophysics, and connectivity across multiple scales. Auditory inputs to the thalamus were modeled using a phenomenological cochlear representation, allowing for the reproduction of realistic physiological responses. Additionally, a more systematic approach to providing background network activity was implemented using Ornstein-Uhlenbeck (OU) processes to model time-varying, statistically independent somatic conductance injections.

Results & Discussion
Our refinements enhance the physiological fidelity of EEG simulations, enabling improved replication of schizophrenia-related biomarkers. The integration of OU-modeled background activity ensures smoother, correlated variations in network input, leading to more biologically realistic fluctuations in neuronal dynamics. The OU process's mean and standard deviation are expressed as input conductance percentages for each cell type, linking them to intrinsic cellular properties. Additionally, we are developing an adaptive algorithm to dynamically calibrate population-specific OU parameters, ensuring model flexibility as it evolves. By incorporating experimentally observed molecular and genetic alterations, our model provides deeper insights into the neural basis of auditory processing deficits in schizophrenia and strengthens the link between cellular dysfunctions and EEG biomarkers.
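A minimal Python sketch of the OU conductance injection described above, using the exact discrete-time OU update; the time constant and the conductance fractions are illustrative values.

import numpy as np

rng = np.random.default_rng(7)

def ou_conductance(g_mean, g_std, tau=5.0, dt=0.1, T=1000.0):
    # Exact discrete-time OU update:
    # g(t+dt) = mu + (g(t) - mu) * exp(-dt/tau) + sigma * sqrt(1 - exp(-2 dt/tau)) * N(0,1)
    n = int(T / dt)
    g = np.empty(n)
    g[0] = g_mean
    coeff = np.exp(-dt / tau)
    noise_sd = g_std * np.sqrt(1.0 - coeff**2)
    for i in range(1, n):
        g[i] = g_mean + (g[i - 1] - g_mean) * coeff + noise_sd * rng.normal()
    return np.maximum(g, 0.0)        # conductances cannot be negative

# Mean/SD expressed as fractions of a cell type's input conductance (assumed values).
g_in = 10.0                          # nS, illustrative input conductance
trace = ou_conductance(g_mean=0.2 * g_in, g_std=0.05 * g_in)
print(f"mean {trace.mean():.2f} nS, std {trace.std():.2f} nS")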






Acknowledgements
This work is supported by NIBIB U24EB028998
References
1. https://doi.org/10.1016/j.celrep.2023.113378
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P193: Brief Neurofeedback Training Increases Midline Alpha Activity in Default Mode Network
Monday July 7, 2025 16:20 - 18:20 CEST
P193 Brief Neurofeedback Training Increases Midline Alpha Activity in Default Mode Network

Matthew McGowan1, Alison Crilly1, Rongxiang Tang2,Yi-Yuan Tang*1

1College of Health Solutions, Arizona State University, Tempe, United States
2Department of Psychological & Brain Sciences, Texas A&M University, College Station, United States


*Email: yiyuan@asu.edu

Introduction

EEG neurofeedback trains individuals to voluntarily modulate brainwave activity, promoting cognitive, emotional, and behavioral improvements by modulating large-scale brain networks and inducing neural plasticity [1]. While traditional neurofeedback protocols often require 20–40 sessions over several weeks or months, this study investigated whether a brief neurofeedback intervention (10 sessions over 2 weeks) could achieve similar neural regulation, particularly within the Default Mode Network (DMN).
Methods
To maximize the effects of neurofeedback, we selected a protocol designed to reward frontal midline Theta (4–8 Hz) to enhance executive function and emotional balance, and central sensorimotor rhythm (SMR, 12–15 Hz) to promote focus and calmness, while inhibiting posterior midline Beta (16–35 Hz) to reduce stress and improve sensory clarity. This protocol aims to enhance self-regulation, resilience, and overall brain efficiency, thereby facilitating neurofeedback learning and benefits.
Twenty participants with mild alcohol, tobacco, and/or cannabis use were recruited, and 19 provided usable data. Participants were instructed to complete each neurofeedback session with minimal effort to achieve the training goals. The NASA Task Load Index (NASA-TLX), a subjective workload assessment tool, was administered to 12 participants (11 with usable data) before and after 10 neurofeedback sessions. EEG recordings were taken before (T1) and after (T2) the training. The data were analyzed using Quantitative Electroencephalographic (QEEG) analysis, and paired t-tests were conducted to evaluate changes in brainwave patterns and neurofeedback workload (effort and mental demand).
Results
Quantitative EEG analysis revealed significant increases in frontal and posterior midline Alpha relative power (p = 0.011 and p = 0.013, respectively), alongside a significant decrease in the Theta/Alpha ratio (p = 0.047) and a significant increase in the Alpha/Beta ratio (p = 0.035). However, after neurofeedback, no significant changes in Theta and SMR power were detected, although a marginally significant reduction in Beta absolute power was found (p = 0.074). Subjective workload assessments (NASA-TLX) indicated significant reductions in effort (p = 0.001) and mental demand (p = 0.0008).
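A minimal Python sketch of the band-power and ratio computations behind such QEEG measures, assuming a single cleaned midline EEG channel and Welch's method, with band edges as in the protocol above.

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(8)

# Hypothetical midline EEG channel (e.g., Pz), 2 minutes at 250 Hz.
fs = 250
eeg = rng.normal(size=fs * 120)

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(f, psd, lo, hi):
    # Integrate the power spectral density over a frequency band.
    sel = (f >= lo) & (f < hi)
    return np.trapz(psd[sel], f[sel])

theta = band_power(f, psd, 4, 8)
alpha = band_power(f, psd, 8, 12)
beta = band_power(f, psd, 16, 35)
total = band_power(f, psd, 1, 45)

print(f"relative alpha: {alpha / total:.3f}")
print(f"theta/alpha: {theta / alpha:.3f}, alpha/beta: {alpha / beta:.3f}")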
Discussion
These findings suggest that brief neurofeedback training can enhance midline Alpha activity and modulate key neural frequency ratios, potentially improving DMN functional connectivity and promoting relaxation, self-reflection, and emotional regulation [2,3]. While preliminary, these results highlight the neuroplastic potential of short-term neurofeedback training, with implications for addressing DMN dysregulation in conditions such as substance use disorders, anxiety, and depression. Further research with larger samples is needed to understand the mechanisms and broader implications of these findings.




Acknowledgements
This work is supported by the ONR N000142412270 and NIH R33 AT010138.
References
1. Bowman, A. D., et al. (2017). Relationship between alpha rhythm and the default mode network: An EEG-fMRI study. J Clin Neurophysiol, 34(6), 527-533. https://doi.org/10.1097/WNP.0000000000000411
2. Tang, Y. Y., & Posner, M. I. (2009). Attention training and attention state training. Trends Cogn Sci, 13(5), 222-227. https://doi.org/10.1016/j.tics.2009.01.009
3. Tang, Y. Y., Tang, R., Posner, M. I., & Gross, J. J. (2022). Effortless training of attention and self-control: mechanisms and applications. Trends Cogn Sci, 26(7), 567-577. https://doi.org/10.1016/j.tics.2022.04.006
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P194: Strategies for Neurofeedback Success: Exploring the Relationship Between Alpha Power and Mental Effort
Monday July 7, 2025 16:20 - 18:20 CEST
P194 Strategies for Neurofeedback Success: Exploring the Relationship Between Alpha Power and Mental Effort
Matthew McGowan1, Alison Crilly1, Rongxiang Tang2, Yi-Yuan Tang*1
1College of Health Solutions, Arizona State University, Tempe, United States2 Department of Psychological & Brain Sciences, Texas A&M University, College Station, United States
*Email: yiyuan@asu.edu
Introduction

EEG neurofeedback is a non-invasive neuromodulation technique that enables individuals to regulate brain activity through real-time feedback, promoting cognitive enhancement, emotional regulation, and adaptive brain plasticity. However, it remains unknown which regulation strategies lead to successful neurofeedback. Based on previous research, we hypothesize that effortless strategies (less mental demand and effort) produce neurofeedback success indexed by increased alpha activity in the default mode network [1,2].


Methods
To maximize the effects of neurofeedback, we selected a protocol designed to reward frontal midline Theta (4–8 Hz) to enhance executive function and emotional balance, and central sensorimotor rhythm (SMR, 12–15 Hz) to promote focus and calmness, while inhibiting posterior midline Beta (16–35 Hz) to reduce stress and improve sensory clarity. This protocol was implemented with two eyes-closed sessions (soft music) and one eyes-open session (nature scene). It aims to enhance self-regulation, resilience, and overall brain efficiency, facilitating neurofeedback learning and benefits. This study examined the effects of 10 consecutive neurofeedback sessions reinforcing midline Theta and SMR while inhibiting high Beta in 12 participants (11 with usable data). Behavioral assessments included the NASA Task Load Index (NASA-TLX) and the Rating Scale for Mental Effort (RSME) to evaluate perceived mental workload, alongside post-session interviews documenting self-regulation strategies.
Results
RSME results showed significant decreases in mental effort for all three protocols (p = 0.051, p = 0.015, and p = 0.011, respectively; 10 usable datasets). We also detected significant reductions in mental demand and effort on the NASA-TLX (p = 0.0008 and p = 0.001, respectively). A negative correlation between posterior parietal alpha power and effort (r = -0.643, p = 0.0327) was found, suggesting that higher alpha activity was associated with reduced cognitive workload. Correlation analysis indicated that participants with greater increases in posterior alpha power exhibited smaller reductions in perceived external demand (r = 0.650, p = 0.030), suggesting that neurofeedback training altered brain activity and reduced effort despite the persistence of task-related demand. Additionally, significant increases in frontal and posterior midline alpha power (p = 0.011, p = 0.013) suggested enhanced default mode network activity.
Discussion
These findings suggest that neurofeedback training promotes neural efficiency and cognitive ease, reinforcing the effectiveness of an effortless strategy for learning self-regulation of brain activity. By facilitating effortless engagement, neurofeedback may optimize neural adaptation, enhancing brain plasticity, cognitive efficiency, and self-regulation.



Acknowledgements
This work is supported by the ONR N000142412270 and NIH R33 AT010138.
References
1. Tang, Y. Y., & Posner, M. I. (2009). Attention training and attention state training. Trends Cogn Sci, 13(5), 222-227. https://doi.org/10.1016/j.tics.2009.01.009
2. Tang, Y. Y., Tang, R., Posner, M. I., & Gross, J. J. (2022). Effortless training of attention and self-control: mechanisms and applications. Trends Cogn Sci, 26(7), 567-577. https://doi.org/10.1016/j.tics.2022.04.006
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P195: Electrical coupling in thalamocortical networks cumulatively reduces cortical correlation to sensory inputs
Monday July 7, 2025 16:20 - 18:20 CEST
P195 Electrical coupling in thalamocortical networks cumulatively reduces cortical correlation to sensory inputs

Austin J. Mendoza1, Julie S. Haas*1

1Department of Biological Sciences, Lehigh University, Bethlehem PA


*Email: julie.haas@lehigh.edu

Introduction

Thalamocortical (TC) cells relay sensory information to the cortex, as well as driving their own feedback inhibition through their excitation of the thalamic reticular nucleus (TRN). The inhibitory cells of the TRN are extensively coupled through electrical synapses. Although electrical synapses are most often noted for their roles in synchronizing rhythmic forms of neuronal activity, they are also positioned to modulate responses to transient information flow across and throughout the brain, though this effect is seldom explored. Here we sought to understand how electrical synapses embedded within a network of TRN neurons regulate the processing of ongoing sensory inputs during relay from thalamus to cortex.
Methods
We utilized Hodgkin-Huxley point models to construct a network of 9 TC and 9 TRN cells, with one cortical output neuron summing the TC activity. Pairs of TC and TRN cells were reciprocally coupled by chemical synapses. The TRN cells were each electrically coupled to two neighboring cells, forming a ring topology. Each TC cell received an exponential current input in sequence, with intervals between inputs varying from 10 to 50 ms across simulations. This architecture and sequence of inputs allowed us to assess the functional radius of an electrical synapse. We compared the cumulative effects of each additional TRN electrical synapse on modulating the responses of the TRN and TC cells and the cortical output.
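A minimal Python sketch of a ring of electrically coupled cells, with passive leaky units standing in for the Hodgkin-Huxley point models, to illustrate how a transient input spreads to ring neighbors; the coupling and input parameters are assumed.

import numpy as np

N, dt, T = 9, 0.1, 200.0             # 9 TRN-like cells, ms time step and duration
g_gap = 0.05                         # electrical coupling strength (assumed)
tau_m, v_rest = 20.0, -65.0

# Ring adjacency: each cell is coupled to its two neighbors.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1.0

v = np.full(N, v_rest)
steps = int(T / dt)
trace = np.empty((steps, N))
for s in range(steps):
    t = s * dt
    i_in = np.zeros(N)
    i_in[0] = 20.0 * np.exp(-(t - 20.0) / 10.0) if t >= 20.0 else 0.0  # exponential input
    i_gap = g_gap * (A @ v - A.sum(1) * v)   # sum over neighbors of g*(v_j - v_i)
    v += dt * ((v_rest - v) / tau_m + (i_in + i_gap) / tau_m)
    trace[s] = v
print("peak depolarization per cell (mV above rest):",
      np.round(trace.max(0) - v_rest, 2))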
Results
Increasing coupling strength between TRN cells modulated TRN responses by decreasing spike latency and increasing duration of TRN spike trains. Effects were strongest for smaller intervals between inputs, and cumulative with additional synapses. In TC cells, we also observed changes in latency and duration of responses and decorrelation of the responses from the inputs. These effects were strongest for larger intervals between inputs and also increased with coupling strength. Coupling within TRN modulated cortical integration of TC inputs by increasing spike rate but reducing spike correlation to the input sequence that was presented to the TC layer. These effects were robust to additive noise.
Discussion
Here we show that TRN electrical synapses exert powerful influence on thalamocortical relay, unexpectedly reducing cortical output correlation to inputs presented to thalamus. We noted that effects of electrical synapses were cumulative. Coupling between pairs alone did not predict the effects seen in a network context, as coupling coefficient measured across multiple neurons drops to unmeasurable levels. These results show that multi-synaptic influences of electrically coupled cells should be included in more complex and realistic network topologies.




Acknowledgements

References

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P196: Kernel-based LFP estimation in detailed large-scale spiking network model of visual cortex
Monday July 7, 2025 16:20 - 18:20 CEST
P196 Kernel-based LFP estimation in detailed large-scale spiking network model of visual cortex



Nicolò Meneghetti1,2,*, Atle E. Rimehaug3, Gaute T. Einevoll4,5, Alberto Mazzoni1,2, Torbjørn V. Ness4


1The Biorobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
2Department of Excellence for Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
3Department of Informatics, University of Oslo, Oslo, Norway
4Department of Physics, Norwegian University of Life Sciences, Ås, Norway
5Department of Physics, University of Oslo, Oslo, Norway


*Email: nicolo.meneghetti@santannapisa.it


Introduction

Large-scale neuronal networks are fundamental tools in computational neuroscience. A key challenge in this domain is simulating measurable signals like local field potentials (LFPs), which bridge the gap between in silico model predictions and experimental data. Simulating LFPs in large-scale models, however, requires biologically detailed multicompartmental (MC) neuron models, which impose significant computational demands. To address this, multiple simplified approaches have been developed. In our work [1] we extended a kernel-based method to enable accurate LFP estimation in a state-of-the-art MC model of the mouse primary visual cortex (V1) from the Allen Institute [2], [3] while significantly reducing computational costs.

Methods
This V1 model features extensive biological detail, with over 50,000 MC neurons across six cortical layers [2], as well as experimentally recorded afferent inputs from both thalamic and lateromedial visual areas [3].
Instead of direct MC simulations, our method estimates the LFP by convolving population firing rates with precomputed spatiotemporal kernels (Fig. 1A), which represent the average postsynaptic LFP response to a presynaptic spike (see, e.g., Fig. 1B). This drastically reduced the computational cost while maintaining estimation accuracy.
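A minimal Python sketch of the kernel convolution step; the kernel shapes and firing rates are synthetic placeholders for the precomputed kernels and simulated population rates.

import numpy as np

rng = np.random.default_rng(9)

dt, T = 0.001, 10.0                       # s
n = int(T / dt)
t_k = np.arange(0, 0.05, dt)              # 50 ms kernel support

# Hypothetical kernels at one recording channel: average LFP response to a
# single presynaptic spike from each population (alpha-function shapes assumed).
kernels = {
    "thalamic":     -1.0 * (t_k / 0.005) * np.exp(1 - t_k / 0.005),
    "lateromedial": -0.4 * (t_k / 0.010) * np.exp(1 - t_k / 0.010),
    "local_PV":      0.1 * (t_k / 0.003) * np.exp(1 - t_k / 0.003),
}

# Hypothetical population firing rates (spikes/s) over time.
rates = {name: 5.0 + rng.normal(0, 1, n).cumsum() * 0.01 for name in kernels}

# LFP = sum over populations of (firing rate * kernel) convolutions,
# which also yields each population's separate contribution.
contrib = {name: np.convolve(rates[name], kernels[name])[:n] * dt
           for name in kernels}
lfp = sum(contrib.values())
for name, c in contrib.items():
    print(f"{name}: contribution variance {c.var():.4f}")
print(f"total LFP variance {lfp.var():.4f}")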
Results
The kernel method accurately estimated LFPs in both superficial (Fig. 1C) and deep layers (Fig. 1D). By treating LFPs as the sum of convolutions of neuronal firing rates and LFP kernels, the method also enabled disentangling the contributions of different neuronal populations to the overall LFP. We found that V1 LFPs are primarily driven by external inputs, with thalamic afferents dominating in layer 4 (Fig. 1F) and lateromedial feedback influencing L2/3 (Fig. 1E). In contrast, local synaptic activity contributed minimally, challenging the conventional view that PV neurons are primary LFP drivers [4]. In fact, we showed that the apparent influence of PV neurons on the LFP reflects their correlation with external inputs rather than a direct contribution.
Discussion
Our findings establish the kernel-based method as a robust and efficient tool for LFP estimation in large-scale network models. By significantly reducing computational costs, this approach makes detailed LFP simulations more practical while also providing insights into cortical LFP generation. Our results highlight the predominant role of external synaptic inputs, while challenging the conventional view that local network activity, including inhibitory interneurons, is a primary LFP driver. This methodology provides a useful framework for studying sensory processing and network dynamics in large-scale models, helping to clarify the contributions of different neuronal populations to cortical LFPs.




Figure 1. (A) Schematic of the kernel-based LFP estimation. (B) Set of kernels for computing L2/3 LFPs for different presynaptic families. (C) L2/3 LFPs computed with both MC simulations (red) and kernel convolution (black). (D) Same as C, for layer 4. (E) Cross-R² matrix between the total L2/3 LFPs and the LFP generated by the synaptic activity of each population in the model. (F) Same as C, for layer 4.
Acknowledgements
This work was supported by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MAD-2022-12376927 (“The etiopathological basis of gait derangement in Parkinson’s disease: decoding locomotor network dynamics”).
References
[1]https://doi.org/10.1101/2024.11.29.626029
[2]https://doi.org/10.1016/j.neuron.2020.01.040
[3]https://doi.org/10.7554/eLife.87169
[4]https://doi.org/10.1038/srep40211
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P197: Resilience of local microcircuitry firing dynamics to selective connectivity degeneration
Monday July 7, 2025 16:20 - 18:20 CEST
P197 Resilience of local microcircuitry firing dynamics to selective connectivity degeneration

Simachew Mengiste*1, Ad Aertsen2, Demian Battaglia1, Arvind Kumar3

1Functional System Dynamics / LNCA UMR 7364, University of Strasbourg, France
2BCCN / University of Freiburg, Germany
3KTH Royal Institute of Technology, Stockholm, Sweden

*Email: mengiste@unistra.fr


Introduction
Connectivity within local cortical microcircuits shapes spiking dynamics, influencing firing rate, synchrony, and regularity (and thus information bandwidth). Often modeled as random and sparse (Erdös-Rényi, ER) or with small-world or scale-free properties, connectivity derived from detailed connectomic reconstructions (Egger et al., 2014) displays dense cell clusters, diverging from mere randomness.
Neurodegenerative diseases (e.g., Alzheimer's) induce neuronal and synaptic loss, disrupting dynamics. We systematically examine how pruning affects microcircuits with different topologies, revealing that resilience strongly depends on connectivity, with the "real connectome" being particularly robust.


Methods
We studied three random network topologies—Erdös-Rényi (ER), small-world (SW), and scale-free (SF)—plus a fourth based on real connectome (RC) reconstructions. Neurons were modeled as leaky integrate-and-fire units, with excitatory and inhibitory inputs shaping membrane potential dynamics.

Network degeneration was simulated via progressive pruning of synapses and neurons, using random or targeted sequences based on node degree or centrality. We analyzed firing rate, correlations, and spiking variability (coefficient of variation), alongside net synaptic currents received on average. Structural changes were assessed via graph metrics. We then systematically probed how firing dynamics evolved in the four ensembles along neurodegeneration.
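A minimal Python sketch of progressive synaptic pruning together with a simple effective-synaptic-weight proxy; the exact ESW definition used in the study is not reproduced here, so the measure below is an assumption.

import numpy as np

rng = np.random.default_rng(10)

# Sparse random (ER-like) E-I weight matrix: 80% excitatory, 20% inhibitory columns.
N, p, g = 500, 0.1, 4.0
n_exc = int(0.8 * N)
W = (rng.random((N, N)) < p).astype(float)
W[:, n_exc:] *= -g                    # inhibitory synapses carry negative weight

def effective_synaptic_weight(W):
    # Mean net synaptic input weight per neuron (one simple ESW proxy).
    return W.sum() / W.shape[0]

# Progressive random pruning of synapses.
for frac in (0.0, 0.25, 0.5, 0.75):
    Wp = W.copy()
    idx = np.flatnonzero(Wp)
    kill = rng.choice(idx, size=int(frac * len(idx)), replace=False)
    Wp.flat[kill] = 0.0
    print(f"pruned {frac:.0%}: ESW = {effective_synaptic_weight(Wp):.2f}")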

Results
Using different network topologies and neurodegenerative strategies, we found that activity states were largely independent of topology across the different ensembles. Degeneration induced similar firing rate and synchrony variations across neurodegenerative schemes. We hypothesized that E-I balance changes, rather than topology, drove these dynamics. The effective synaptic weight (ESW) best predicted network activity, explaining firing rate, variability, and synchrony—except pairwise correlation, which depended on shared presynaptic neighbors and connection density. The real connectome (RC) followed similar ESW dependencies but exhibited broader stability ranges for all different firing parameters.

Discussion
While most neurodegeneration models focus on long-range connectivity changes, local microcircuits are also affected, altering synchrony and information processing. We find that local circuit dynamics are indeed disrupted, but less dependent on precise connectivity than expected. Instead, the effective synaptic weight (ESW) emerges as a stronger predictor of network behavior, making it a key measure for assessing function in both healthy and diseased states. The anomalous stability of firing parameters in networks with realistic connectivity suggests that microcircuit properties may have evolved to enhance functional resilience.




Acknowledgements
ANR PEPR Santé Numérique "BHT - Brain Health Trajectories"
References
Egger, R., Dercksen, V. J., Udvary, D., Hege, H.-C., & Oberlaender, M. (2014). Generation of dense statistical connectomes from sparse morphological data. Frontiers in Neuroanatomy, 8, 129. https://doi.org/10.3389/fnana.2014.00129


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P198: Binary Brains: Excitable Dynamics Simplify Neural Connectomes
Monday July 7, 2025 16:20 - 18:20 CEST
P198 Binary Brains: Excitable Dynamics Simplify Neural Connectomes

Arnaud Messé¹, Marc-Thorsten Hütt², Claus C. Hilgetag¹,³*
¹ Institute of Computational Neuroscience, Hamburg Center of Neuroscience, University Medical Center Eppendorf, Hamburg, Germany
² Computational Systems Biology, School of Science, Constructor University, Bremen, Germany
³ Department of Health Sciences, Boston University, Boston, MA, USA
*Email: c.hilgetag@uke.de


Introduction
Neural connectomes, representing the structural backbone of brain dynamics, are traditionally analyzed as weighted networks. However, the computational cost and methodological challenges of weighted representations hinder their widespread use. Here, we demonstrate that excitable dynamics—a common mechanism in neural and artificial networks—enable a thresholding approach that renders weighted and binary networks functionally equivalent. This finding simplifies network analyses and supports efficient artificial neural network (ANN) design by drastically reducing memory and computational demands.
Methods
We examined excitable network dynamics using a cellular automaton-based excitable model (SER model) and the FitzHugh-Nagumo model. By mapping the local excitation threshold onto global network weights, we identified a threshold at which binarized networks produce activity patterns statistically indistinguishable from those in weighted networks. Simulations were performed on synthetic networks, empirical structural brain connectivity data (MRI-derived), and artificial neural networks trained on the MNIST dataset. Computational efficiency was assessed in terms of memory usage and execution time.
Results & Discussion
Our findings [1] show that, under appropriate thresholding, binarized networks accurately reproduce coactivation patterns and functional connectivity observed in weighted brain networks. This effect holds across diverse network topologies and weight distributions, particularly for log-normal weight distributions found in empirical data. Computationally, binarized networks require significantly less memory and reduce processing times by orders of magnitude. These findings not only simplify empirical network analyses in neuroscience but also suggest a general principle for optimizing computational models in various domains, including machine learning, complex systems, and bio-inspired AI. Particularly in ANNs, thresholding maintains classification accuracy while drastically lowering the number of parameters, making binary networks a promising approach for efficient AI design.



Acknowledgements
The research was supported by the Deutsche Forschungsgemeinschaft (DFG) - SFB 936 - 178316478 - A1 & Z3, SPP2041 - 313856816 - HI1286/7-1, TRR 169 - 261402652 - A2, and the EU Horizon 2020 Framework Programme (HBP SGA2 & SGA3).
References
[1]https://doi.org/10.1101/2024.06.23.600265

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P199: Efficient slope detection with regular spiking and bursting point neuron models
Monday July 7, 2025 16:20 - 18:20 CEST
P199 Efficient slope detection with regular spiking and bursting point neuron models

Rebecca Miko*1, Marcus M. Scheunemann2,3, Volker Steuber1, Michael Schmuker1

1Biocomputation Research group, University of Hertfordshire, Hatfield, UK
2Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK
3Autonomy Department, Dexory, London, United Kingdom

*Email: rebeccamiko@outlook.com

Introduction

In real-world environments, odour stimuli exhibit a complex temporal structure due to turbulent gas dispersion, resulting in intermittent and sparse signals. These turbulence-induced fluctuations can be rapid yet contain valuable information crucial for locating odour sources. The ability to exploit this information is essential both for biological agents in foraging and mate-seeking behaviours and for robotic gas sensing in environmental and industrial monitoring. However, omnipresent turbulence destroys concentration gradients. Research suggests that the temporal dynamics of odour signals encode key information about the olfactory scene [1, 2].

Methods
Using the Izhikevich model [3], we developed neurons that spike at the rising edges (Fig. 1a) of naturalistic input signals across varying frequencies. We then compared these neurons to a two-compartmental model [4], which predominantly fires bursts at positive slopes in naturalistic inputs. By analysing the spiking behaviour of both models, we assessed whether bursting mechanisms are necessary for detecting odour signal dynamics.
Results
Our findings indicate that a regular spiking neuron can effectively encode the slopes of input signals through discrete spike events, and that these detectors do not require a bursting mechanism (Fig. 1b). Whereas the two-compartmental model [4] predominantly fires bursts in response to rising signal slopes, the Izhikevich model generates single spikes at these transitions while maintaining computational efficiency. This demonstrates that a simple spiking neuron can capture key temporal features of odour signals without complex bursting dynamics.
Discussion
These results suggest that detecting odour signal slopes does not require burst firing. Instead, regular spiking neurons can efficiently encode temporal features of turbulent odour signals. Given the computational efficiency of the Izhikevich point neuron model, our findings offer potential applications in robotic gas navigation, where rapid and accurate data processing is crucial. By leveraging simple neural mechanisms, future research can explore bio-inspired gas-sensing systems for environmental and industrial monitoring.




Figure 1. Top trace: Gaussian white noise input in nA (5 Hz; µ = 0.006; σ = 0.015). Bottom trace: membrane potential response in mV. Top panel: neuron has parameters {a: 0.01, b: 0.2, c: -35, d: 5.0}. Asterisks mark burst onsets (grey dotted lines added for clarity). Bursts are defined by ISI ≤ 10 ms. No single spikes were produced. Bottom panel: neuron has parameters {a: 0.01, b: 0.2, c: -50, d: 8.0}. Asterisks mark spikes.
Acknowledgements
Funding received from the NSF/MRC NeuroNex Odor2Action programme 274 (NSF #2014217, MRC #MR/T046759/1).
References
[1] Schmuker, M., Bahr, V., & Huerta, R. (2016). Exploiting plume structure to decode gas source distance using metal-oxide gas sensors. Sensors and Actuators B: Chemical, 235, 636–646
[2] Ackels, T., Erskine, A., Dasgupta, D., et al. (2021). Fast odour dynamics are encoded in the olfactory system and guide behaviour. Nature, 593(7859), 558–563
[3] Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569–1572

[4] Kepecs, A., Wang, X. J., & Lisman, J. (2002). Bursting neurons signal input slope. Journal of Neuroscience, 22(20), 9053–9062
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P200: Modeling the response of cortical cell populations to transcranial magnetic stimulation
Monday July 7, 2025 16:20 - 18:20 CEST
P200 Modeling the response of cortical cell populations to transcranial magnetic stimulation

Aaron Miller1, Konstantin Weise1,2, Thomas R. Knösche*1,3


1Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany



2Leipzig University of Applied Sciences, Germany

3Technical University Ilmenau, Germany


*email: knoesche@cbs.mpg.de
Introduction

The response of cortical neurons to TMS depends on the locally induced electric field magnitude and direction as well as on the physiological, biophysical, and microstructural properties of the involved cells. Here, we provide a modeling framework that integrates standard neural population modeling with numerically estimated TMS-induced electric fields, mediated by detailed information on cell morphology and physiology. We exemplify this framework for the stimulation of primary motor cortex (M1), giving rise to observable electromyographic recordings in muscles (motor evoked potentials – MEPs) as well as fast activity volleys in the EEG (D- and I-waves).


Methods
The model comprises pairs of pre-/postsynaptic neural populations and their generation of short-latency (<10 ms) responses upon TMS. We focus on the generation of I-waves by activation of neurons that project to layer 5 (L5) corticospinal neurons. We use realistic compartment models to simulate spatiotemporal spiking dynamics on the axonal arbors of presynaptic neurons. This output is coupled into L5 cells according to the morphologies of presynaptic axonal and postsynaptic dendritic trees. The resulting current entering L5 somata defines an average current input to a neural mass model. We explore the sensitivity of the responses to model parameters using a generalized polynomial chaos (gPC) approach.

Results
Fig. 1A-C show the resulting modeling pipeline. The output activity of L5 neurons due to stimulation of upstream L2/3 neurons is presented in Fig. 1D. We observe a strong directional dependency at low and medium intensity, decreasing at higher intensities, which agrees with experimental and modeling results (Souza et al., 2022; Weise et al., 2023). A gPC surrogate of the activity function using 4000 model evaluations with random parameter distributions resulted in a normalized root mean square deviation of 1.9% tested against 1000 independent verification runs.
The average Sobol indices revealed the most influential parameters and combinations thereof, i.e. E/I balance (42%), stimulation intensity (13%), and a combination of both (14%).




Discussion
The model provides the basis for modeling TMS-evoked activity using parsimonious neural mass models (NMMs) with high biological detail. Previous coupling models were based on coarse approximations and ignored the complex mechanisms by which TMS activates neuronal populations. The model pipeline can also be adapted to other brain stimulation methods such as tDCS. The calculated surrogate models will be provided for download in order to allow efficient calculation of the input currents to L5 PCs.






Figure 1. A: Parameters of the TMS induced electric field; B and C: Illustration of the model pipeline - induced e-field acts on terminals of presynaptic axons. Spreading of activity in axonal arbors is captured by the axonal delay kernel. The postsynaptic synapto-dendritic delay kernel accounts for extra position-dependent delay and yields current entering soma; D: Resulting input current to L5 PC over time.
Acknowledgements
The publication was supported by BMBF grant 01GQ2201 (KW, TRK).
References
K. Weise, T. Worbs, B. Kalloch, V.H. Souza, A.T. Jaquier, W. Van Geit, A. Thielscher, T.R. Knösche: Directional Sensitivity of Cortical Neurons Towards TMS Induced Electric Fields. Imaging Neuroscience 1: 1–22 (2023)


V.H. Souza, J.O. Nieminen, S. Tugin, L.M. Koponen, O. Baffa, R.J. Ilmoniemi: TMS with fast and accurate electronic control: Measuring the orientation sensitivity of corticomotor pathways. Brain Stimulation 15(2), 306–315 (2022)
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P201: HoloNeV: Holographic visualization tool for neural network activity
Monday July 7, 2025 16:20 - 18:20 CEST
P201 HoloNeV: Holographic visualization tool for neural network activity

Safeer A. Mirani1, Pirah Menon1, Rosanna Migliore2, Michele Migliore2, Beniamina Mercante2, Paolo Enrico1, Sergio MG Solinas1*


1Department of Biomedical Sciences, University of Sassari, Sassari, Sardinia, Italy
2Institute of Biophysics, National Research Council, Palermo, Italy



*Email: smgsolinas@uniss.it

Introduction
Recent neuroscience initiatives have generated extensive data on neuroanatomy and brain function, enabling the development of detailed models of brain areas. While tools like NetPyne [1] and PyNN [2] facilitate network design on NEST [3] or NEURON [4], visualization tools are lagging behind. Emerging 3D holographic devices are bridging the digital and physical worlds by instantiating interactive virtual objects in the real world, i.e. Mixed Reality. Here, we introduce HoloNeV, a high-performance tool for visualizing and interacting with 3D neural network models. We validate HoloNeV on a neuronal dataset derived from the CA1 region of the mouse hippocampus [5], encompassing the somata of pyramidal cells and interneurons divided into four neuronal layers.


Methods
We developed HoloNeV in Unity 3D (Version 2022), a game development platform designed for dynamic, high-resolution rendering of complex animations, to run on the Microsoft HoloLens 2 headset using the Mixed Reality Toolkit for MR integration. We leverage GPU instancing to enhance visualization performance, enabling the rapid simultaneous rendering of numerous neuron representations, up to 300,000 somata. We designed custom low-level GPU code (a shader) to control data management, which supports GPU-accelerated rendering with stereo capabilities, transforming neuron positions and activations into an immersive visual experience. Tests were run on a workstation with an Intel Xeon w9-3495X CPU, 128 GB RAM, and an Nvidia RTX A5500 GPU.


Results
Following successful hardware integration and software development, the system was tested using the hippocampus dataset visualized through the Microsoft HoloLens 2. The immersive 3D model allows exploration of neuronal organization in the CA1 region. The visualization includes the 3D distribution of neurons, density patterns, and layer-wise organization, and is able to replay neuronal activity from stored spike trains. While standard state-of-the-art Unity tools achieved 10 FPS, HoloNeV performance testing showed a mean frame rate of 97.8 FPS, ensuring a comfortable user experience in mixed reality.


Discussion

We introduce HoloNeV, a mixed-reality tool for visualizing neural network activity using a holographic headset. This system allows researchers to interact with neural networks in 3D while staying aware of their physical surroundings. Key innovations include stereo rendering optimization and fine hand-tracking for direct manipulation. Researchers can customize visualization parameters such as neuron size and density in real time. Although currently limited to representing somata without axons and dendrites, future developments will address full neuronal morphology, the configuration of neuronal parameters, and real-time data streaming from HPC facilities, paving the way for new insights in neuroscience research.




Acknowledgements
Project IR00011 EBRAINS-Italy - Mission 4, “Istruzione e Ricerca” - Component 2, “Dalla ricerca all impresa” - Line of investment 3.1 of PNRR, Action 3.1.1 NextGeneration EU (CUP B51E22000150006) awarded to P. E. and S. S., the “FeNeL” project, PNRR M4.C2.1.1 – PRIN 2022 – No. 2022JE5SK2 – CUP G53D23000380006 awarded to S. S., Project “Numeracy in Aging (NiA) CUP J53D23017580001 awarded to P. E.
References
1. Dura-Bernal, S. et al. (2018) https://doi.org/10.1101/461137
2. Davison, A. P. (2008). https://doi.org/10.3389/neuro.11.011.2008
3. Gewaltig, M.-O., & Diesmann, M. (2007). https://doi.org/10.4249/scholarpedia.1430
4. Hines, M. (2009). https://doi.org/10.3389/neuro.11.001.2009

5. Gandolfi, Daniela, et al. (2022). https://doi.org/10.1038/s41598-022-18024-y
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P202: Implementation of an SNN-based LLM
Monday July 7, 2025 16:20 - 18:20 CEST
P202 Implementation of an SNN-based LLM

Tomohiro Mitsuhashi*1, Rin Kuriyama1, Tadashi Yamazaki1

1Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan

*Email: m2431154@gl.cc.uec.ac.jp

Introduction

Large language models (LLMs) are indispensable in everyday life and business, yet their training and inference demand an enormous amount of electricity. A major contributor to this consumption is the extensive memory access in artificial neural network (ANN) models. A potential solution is to use neuromorphic hardware, which emulates the dynamics of spiking neural networks (SNNs) [1]. SpikeGPT has been proposed as an SNN-based LLM [2]. However, not all components in SpikeGPT are implemented by spiking neurons. In this study, we aimed to implement a fully spike-based LLM building on the existing SpikeGPT.

Methods
SpikeGPT consists of two blocks: Spiking RWKV and Spiking RFFN (Fig. 1A). These blocks consist of a component that performs analog computation and another component that converts the results into spike sequences by an SNN. We replaced the former component with an SNN using the method proposed by Stanojevic et al. [3], which uses spike timing information (Time-to-First Spike, TFS) (Fig. 1B), where an analog value is represented by the time at which a neuron emits its first spike. Eventually, we developed SNN-based RWKV and SNN-based RFFN (Fig. 1A). Moreover, nonlinear processes, including the calculation of an exponential function, were approximated using multi-layer SNNs, enabling the entire processing to be implemented solely with SNNs.
Results
We completed the implementation of an SNN-based LLM, ensuring that both the RWKV and RFFN blocks are SNNs. Our SNN-based LLM should have generated the same sentences as the original SpikeGPT, but it generated completely broken sentences. We performed a quantitative comparison between analog computation values and their approximations represented by spike timing, and found discrepancies between them; namely, the nonlinear processes implemented by SNNs did not work well. We then reverted the SNN-based nonlinear processes to their original analog versions and were able to obtain readable sentences, although the sentences were still different (Fig. 1C). Notably, we confirmed that each neuron emitted at most one spike during text generation (Fig. 1D).


Discussion
We implemented an SNN-based LLM that generates sentences. Nonetheless, our SNN-based nonlinear processes need to be improved for better approximation. One possible way is to make the temporal resolution of the SNNs much smaller for finer precision of the analog values represented by TFS. Meanwhile, since each neuron emits at most one spike per propagation, combining our model with neuromorphic hardware could lead to significant energy savings. These advances are expected to address challenges associated with energy-efficient LLMs.



Figure 1. Overview of our SNN-based LLM and sample results. (A) The architecture of SpikeGPT (left) and our model (right). (B) Schematic of the TFS approach, where the temporal difference between the time parameter and the spike time encodes an analog value. (C) A sample sentence generated by our model. (D) Raster plots for the SNN-based RWKV and SNN-based RFFN during token generation.
Acknowledgements
This study was supported by MEXT/JSPS KAKENHI Grant Numbers JP22H05161, JP22H00460.
References
1. Davies, M., et al. (2021). Advancing neuromorphic computing with Loihi: A survey of results and outlook. Proceedings of the IEEE, 109(5), 911–934. https://doi.org/10.1109/JPROC.2021.3067593
2. Zhu, R.-J., et al. (2024). SpikeGPT: Generative pre-trained language model with spiking neural networks. arXiv preprint. https://arxiv.org/abs/2302.13939
3. Stanojevic, A., et al. (2023). An exact mapping from ReLU networks to spiking neural networks. Neural Networks, 168, 74–88. https://doi.org/10.1016/j.neunet.2023.09.011
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P203: Reciprocity and Hierarchical Organization in the Resting-State Brain: Implications for Efficient Connectivity
Monday July 7, 2025 16:20 - 18:20 CEST
P203 Reciprocity and Hierarchical Organization in the Resting-State Brain: Implications for Efficient Connectivity

Guillermo Montaña-Valverde*1,2, Paula García-Royo2, Wolfram Hinzen1,3, Gustavo Deco2,3

¹ Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, 08018, Spain
² Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, 08018, Spain
³ Institució Catalana de Recerca i Estudis Avançats, ICREA, Barcelona, 08010, Spain

*Email: guillermo.montana@upf.edu


Introduction

While the brain is traditionally considered to have a strong hierarchical organization, our findings demonstrate that its structure is more flattened, facilitating more efficient information flow across the network [1]. Using a large resting-state and task-based fMRI dataset, we show that reciprocity – the tendency of a network to have bidirectional connections – strongly correlates with hierarchical organization. This suggests that the brain's densely interconnected architecture flattens hierarchy, facilitating efficient information flow through shorter average path lengths and enhanced small-worldness. These results align with the idea that reciprocity enhances information flow via feedback connectivity [2]. This novel framework lays the foundation for our understanding of whole-brain functional dynamics [3].

Methods
We analyzed open-source fMRI data from the Human Connectome Project (HCP), comprising resting-state and 7 task-based datasets from 1000 subjects. Generative effective connectivity (GEC) – an extension of classic effective connectivity [4] – was estimated from whole-brain modeling for each subject in the DK80 parcellation [5], providing a directed weighted network [6,7]. Hierarchy was then determined by computing measures of coherence and trophic levels on the GEC (Fig. 1a). Reciprocity, defined as the fraction of total connection strength that is bidirectionally shared between regions, captures the balance between feedforward and feedback interactions [8] (Fig. 1b).
Results
We found that the brain in resting-state exhibits a high degree of reciprocity (0.93 ± 0.02, Fig. 1c), which shows a strong negative correlation with hierarchical coherence (corr = -0.97, p < 0.001, Fig. 1d). Conversely, by artificially modulating for more asymmetric interactions, the hierarchy becomes more rigid (Fig. 1e). In addition, a more flattened hierarchy was associated with a shorter average path length (corr = 0.97, p < 0.001, Fig. 1f), higher average clustering coefficient (corr = -0.95, p < 0.001, Fig. 1g), and increased small-worldness (corr = -0.99, p < 0.001, Fig. 1h). Furthermore, decreased hierarchical coherence was observed during task performance (Fig. 1i).
Discussion
Overall, our results demonstrate that reciprocity plays a crucial role in shaping the brain's hierarchical organization. The brain's high degree of reciprocal connections facilitates information flow and integration, potentially optimizing cognitive processing both at rest and during task performance. Conversely, a stronger hierarchy reduces flexibility and adaptability, degrading brain connectivity. For this reason, future research with this methodology should explore neuropsychiatric disorders, where changes in the hierarchical organization of the brain may underlie altered brain processing, and ultimately examine whether targeted interventions that modulate reciprocity can restore optimal hierarchical organization and improve cognitive function.



Figure 1. A. The hierarchy was quantified measuring directedness based on trophic levels. B. Simplified representation of reciprocity. C. Reciprocity in the HCP resting-state dataset. D. Coherence and Reciprocity relation in HCP resting-state. E. Hierarchical representations for different reciprocities. F, G and H. Graph measures correlates with coherence. I. Coherence in 7 tasks compared to resting-state.
Acknowledgements
This study is part of the project I+D+i Generación de Conocimiento PRE2020-095700, funded by MCIN/AEI/10.13039/501100011033.
References

[1] DOI: 10.1038/s41583-023-00756-z
[2] DOI: 10.1016/j.neuroimage.2020.117479
[3] DOI: 10.1038/s44220-024-00298-y
[4] DOI: 10.1016/s1053-8119(03)00202-7
[5] DOI: 10.1038/s41562-020-01003-6
[6] DOI: 10.1016/j.neuron.2014.08.034
[7] DOI: 10.1016/j.celrep.2020.108128
[8] DOI: 10.1038/srep02729




Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P204: Brain-like networks emerge from distance dependence and preferential attachment
Monday July 7, 2025 16:20 - 18:20 CEST
P204 Brain-like networks emerge from distance dependence and preferential attachment
Aitor Morales-Gregorio*1, Karolína Korvasová1

1Faculty of Mathematics and Physics, Charles University, Prague, Czechia

*Email: aitor.morales-gregorio@matfyz.cuni.cz
Introduction
Neurons in the brain are not randomly connected to each other. Neuronal networks have low density, high local clustering, short path lengths, heavy-tailed weight and degree distributions, and distance-dependent connection probability. These properties enable efficient information processing. However, standard network-generating algorithms cannot produce networks that show all of these brain-like properties. Here, we show that distance-dependent connection probability in combination with preferential attachment can generate brain-like networks that match the properties of the neuronal networks of six animal species: C. elegans [1], Platynereis [2], Drosophila [3,4], mouse [5,6], marmoset [7], and macaque [8].

Methods
Networks are created by iterative growth, mimicking how neurons would naturally grow. Neurons are randomly positioned inside a sphere; the pairwise distances between them are calculated and passed through an exponential kernel [9], creating the distance-dependent connection probability. An empty network is initialized, and in each iteration a new connection drawn from the distance-dependent probability is added. The iteration stops when the target density is reached.
To achieve heavy-tailed distributions we study preferential attachment, i.e. a higher probability of connection for edges with high weight (weight-preferential) or nodes with high degree (degree-preferential).

Results
The neuronal networks of six animals have low density, high local clustering, short global path lengths, and heavy-tailed weight and degree distributions.
We show that distance dependence alone can create small-world networks with high clustering and short path lengths, but fails to produce heavy-tailed weight or degree distributions. Including weight-preferential attachment enables the creation of networks that also have heavy-tailed weight distributions, but not heavy-tailed degree distributions. Finally, we show that degree-preferential attachment together with distance dependence produces brain-like networks that simultaneously have all the mentioned properties, and can match the experimentally measured networks of six different animal species.

Discussion
Our algorithm can match the properties of the neuronal networks of six different animals, suggesting these could be general principles of neural network development. It is well-known that neurons at large distances are less likely to be connected, in part because these connections are metabolically more expensive to establish and maintain than short-range ones. The large neuropil branching of some neurons increases the probability of connections with them, which we capture via the degree-preferential mechanism.
In conclusion, distance dependence and preferential attachment are biologically realistic mechanisms that can produce networks closely matching both invertebrate and vertebrate brains.




Acknowledgements
This work received funding from the Programme Johannes Amos Comenius (OP JAK) under the project 'MSCA Fellowships CZ - UK3' (reg. n. CZ.02.01.01/00/22\_010/0008220); and from Charles University grant PRIMUS/24/MED/007
References
[1] Varshney et al (2011) PLoS CB 7:e1001066
[2] Randel et al (2014) eLife 3:e02730
[3] Takemura et al (2013) Nature 500:175-181
[4] Scheffer et al (2020) eLife 9:e57443
[5] MICrONs Consortium et al (2021) bioRxiv 2021.07.28.454025
[6] Gămănuţ et al (2018) Neuron 97(3):698-715
[7] Majka et al (2020) Nature Communications 11:1133
[8] Markov et al (2014) Cerebral Cortex 24(1):17-36
[9] Ercsey-Ravasz et al (2013) Neuron 80(1):184-197
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P205: High-frequency oscillations in primate visual areas: Critical insights into neural population dynamics or mere spike artifacts?
Monday July 7, 2025 16:20 - 18:20 CEST
P205 High-frequency oscillations in primate visual areas: Critical insights into neural population dynamics or mere spike artifacts?
Katarína Studeničová*1, Aitor Morales-Gregorio1, Karolína Korvasová1

1Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic

*Email: katarina.studenicova@matfyz.cuni.cz


Introduction
Short bursts of high-gamma frequency oscillations (80-150 Hz) resembling hippocampal ripples have to date been observed in several cortical areas [1-4]. However, their function and link to memory processes are unclear. The goal of our work is to describe the relationship between these high-gamma bursts, referred to as cortical ripples, and neuronal spiking activity at the same cortical location under different levels of drowsiness.
Methods
We analyze a resting-state dataset from 4 macaque monkeys ([5,6], and additional data provided by the authors), each with 16 Utah arrays implanted in visual areas V1, V2, V4, and IT. The monkeys sat in a dark room, and the vigilance of the recorded animals varied, ranging from fully alert to drowsy and light sleep. Raw traces were downsampled and filtered to the ripple band (80-150 Hz), and spikes were sorted. Short high-amplitude oscillatory bursts in the ripple band, further referred to as cortical ripples, were detected by standard double-thresholding methods and additionally confirmed by spectral analysis and surrogate methods.

Results
During alert eyes-open states without strong visual input, the network dynamics are unorganized in both space and time. However, with increasing drowsiness, the network falls into a global upstate-downstate regime. Upstates are strongly visible mainly in V1 and V2, and less organized in V4 and IT, possibly reflecting a different organizational structure of higher cortical areas. In all brain states, cortical ripples are accompanied by spiking activity. In general, spikes are locked to the phase of the ripple band. Most are locked to the trough; however, we also found cells preferring the peaks of the oscillatory signal. We detail these findings further by describing a variety of spiking preferences with respect to the ripple band.

Discussion
To the best of our knowledge, we are the first to uncover the global organization of high-frequency oscillatory activity in the macaque visual areas during the resting state, spanning large horizontal distances with intracortical recording precision. We prove the existence of cortical ripples in all the areas covered (previous literature only addressed V1 and V4) and describe the relationship between spikes and cortical ripples with respect to various brain states. We detail our findings with an area-wise description, highlighting crucial differences. This work aims to bridge gaps between various recording techniques by providing a detailed view of the network states underlying high-frequency oscillatory bursts.





Acknowledgements
This work received funding from the Charles University grant PRIMUS/24/MED/007; and the Programme Johannes Amos Comenius (OP JAK) under the project 'MSCA Fellowships CZ - UK3' (reg. n. CZ.02.01.01/00/22\_010/0008220).
References
[1] https://doi.org/10.1523/JNEUROSCI.0742-22.2022
[2] https://doi.org/10.7554/eLife.68401
[3] https://doi.org/10.1093/brain/awae159
[4] https://doi.org/10.1073/pnas.2210698120
[5] https://doi.org/10.1038/s41597-022-01180-1
[6] https://doi.org/10.1016/j.neuron.2024.12.003
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P206: Synaptic Topography Influences Dendritic Integration in Drosophila Looming Responsive Descending Neurons
Monday July 7, 2025 16:20 - 18:20 CEST
P206 Synaptic Topography Influences Dendritic Integration in Drosophila Looming Responsive Descending Neurons

Anthony Moreno-Sanchez*1, Alexander N. Vasserman1, HyoJong Jang2, Bryce W. Hina2, Catherine R. von Reyn1,2, Jessica Ausborn1

1Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States.
2School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States.

*Email: am4946@drexel.edu
Introduction

Synapse organization plays a crucial role in neural computation, affecting dendritic integration and neuronal output [1, 2]. In Drosophila melanogaster, visual projection neurons (VPNs) encode distinct visual features and relay retinotopic information from the lobula and lobula plate to descending neurons (DNs) in the central brain [3]. DNs integrate spatially organized visual information from VPNs to elicit appropriate motor responses [4,5,6]. However, the retinotopic organization of VPN-DN connections and its impact on dendritic integration remain unclear. Using electron microscopy (EM) data, computational modeling, and electrophysiology, we investigated how synaptic topography affects dendritic processing in looming-sensitive DNs.

Methods
We analyzed EM reconstructions of Drosophila VPN-DN circuits from the Full Adult Fly Brain (FAFB) dataset [7], using flywire.ai [8]. We developed multicompartment models of 5 DNs with precise VPN synaptic locations from the FAFB dataset. Using EM VPN morphologies, we estimated the receptive fields of 6 VPN populations [4,9] and analyzed synapse organization on DN dendrites. We experimentally determined the spike initiation zone (SIZ) in our DNs of interest by tagging the endogenous voltage-gated sodium channel para. Passive properties of DNs were determined using whole-cell patch-clamp electrophysiology data, by fitting hyperpolarizing experimental current injections. Simulations were performed in the NEURON simulation environment.
Results
VPN synapses formed spatially constrained clusters on DN dendrites but lacked retinotopic organization within the clusters. We found that DN morphology and passive properties filter excitatory postsynaptic potentials (EPSPs) to achieve synaptic democracy, normalizing the impact of each EPSP at the SIZ. Simulations suggest that VPN synapses follow a near-random distribution, avoiding tight clusters of synapses from individual neurons and thereby avoiding shunting. This synaptic topography, together with synaptic democracy, maintains a linear relationship between synapse number and depolarization at the SIZ, both when activating individual VPNs and when activating a small group of VPNs.
Discussion
DNs integrate retinotopic feature information from multiple VPN types, each targeting distinct dendritic regions. This organizational strategy may enable DNs to selectively process visual features across the fly's visual field for behavior-relevant computations. Our results suggest that DN dendritic architecture and synaptic topography support a quasi-linear integration model, in which synaptic democracy ensures consistent encoding of stimulus location via synapse numbers. These findings offer insights into synaptic organization principles and their role in neural circuit function, highlighting the absence of retinotopic organization as a means to prevent membrane shunting.



Acknowledgements
We thank Arthur Zhao for help with the receptive field mapping, James M. Jeanne for help with the creation of dendrograms, and Thomas A. Ravenscroft for providing us with para-GFSTF tools for SIZ labeling. This study was supported in part by the National Institutes of Health (NINDS R01NS118562 to J.A. and C.R.v.R.), and the National Science Foundation (grant no. IOS-1921065 to C.R.v.R.).
References
1. doi:10.1038/s41583-020-0301-7
2. doi:10.1126/science.1189664
3. doi:10.7554/eLife.21022
4. doi:10.1038/s41586-023-05930-y
5. doi:10.1038/nn.3741
6. doi:10.1016/j.cub.2008.07.094
7. doi:10.1016/j.cell.2018.06.019
8. doi:10.1038/s41592-021-01330-0
9. doi:10.7554/eLife.57685

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P207: From evolution to creation of spiking neural networks using graph-based rules
Monday July 7, 2025 16:20 - 18:20 CEST
P207 From evolution to creation of spiking neural networks using graph-based rules

Yaqoob Muhammad1, Emil Dmitruk1, Volker Steuber1, Shabnam Kadir*1

1Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom


*Email: s.kadir2@herts.ac.uk
Introduction

Predicting the dynamics of a neural network from its synaptic weights, and vice versa, is a difficult problem. There has been some success with Hopfield networks [1] and combinatorial threshold-linear networks (CTLNs), a rate model [2], but not, to our knowledge, with spiking neural networks (SNNs). Usually, weights are obtained by a process of training, e.g. using STDP, surrogate gradient descent, evolutionary algorithms, etc.

In contrast, here we look at small spiking neural networks as initiated in [3] and formulate rules for the direct selection of both network topology and synaptic weights. This can reduce or eliminate the need for training or using genetic algorithms to derive weights.
Methods
We illustrate our approach on networks of minimal size consisting of Adaptive Exponential Integrate-and-Fire (AdEx) [6] neurons, where the aim is for the network to recognise an input pattern of length k consisting of distinct letters, following on from the work in [3]. The network must accept only this single pattern out of k^k possible patterns. The network has k interneurons and one output neuron.
In our initial experiments [4], a genetic algorithm was used to evolve both the topology and connection weights of SNNs encoded as linear genomes [5]. In [4], for k = 3, out of 100 independent evolutionary runs of 1000 generations each, 33 runs yielded perfect recognizers for a pattern of three signals.
Results
For k > 6 we used patterned matrices of the form seen in Figure 1. We have a tree with components consisting of leaves composed of several nodes (matrix entries) which are all positive or negative, and a few key components that must take either maximal or minimal weights.
Using our new method we obtained networks that performed perfectly for up to k = 10 (at the time of submission of this abstract). There appear to be no obstructions to the approach working for arbitrarily large k.
With randomly chosen weights and k = 6, evolution using a genetic algorithm took 500 generations before a perfect recogniser was found. In contrast, our approach using both handcrafted topologies and weights required no generations at all, or far fewer.
Discussion
Our results are still very much conjectures based on observation, but they indicate that for SNNs there may be graph-based rules relating synaptic weights to function. The weights exhibit a relationship that is highly deterministic. Unlike in previous approaches, we do not require any restrictions on the form of the connectivity matrix, e.g. we do not need it to be symmetric as is required for stable fixed points of Hopfield networks, and we allow both excitatory and inhibitory connections, as well as autapses.
This is a first step towards developing a theory of modularity for SNNs, i.e. enabling the gluing of such networks whilst preserving properties, analogous to what was achieved for a variety of attractor types for CTLNs [2].



Figure 1. A) Sample connectivity matrix pattern for k = 10. The weights in the connectivity matrix have been ordered, with negative and positive weights being given by negative and positive integers respectively. B) Weights distribution (indexed by ordering from -25 to 30) in 10 matrices recognising the same pattern of length 10. C) Network activity for the sequence ABCDEFGHIJ - 10 interneurons and 1 output neuron.
Acknowledgements
This research has received no external funding.
References
[1] https://doi.org/10.1073/pnas.79.8.2554
[2] https://doi.org/10.1137/22M1541666
[3] https://doi.org/10.1101/2023.11.16.567361
[4] https://doi.org/10.1162/isal_a_00121
[5] https://doi.org/10.1007/978-3-319-06944-9_10
[6] https://doi.org/10.1152/jn.00686.2005
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P208: Integration of a Purkinje cell model including morphological details with a bidirectional synaptic plasticity model
Monday July 7, 2025 16:20 - 18:20 CEST
P208 Integration of a Purkinje cell model including morphological details with a bidirectional synaptic plasticity model

Takeki Mukaida*1, Kaaya Akira1, Tadashi Yamazaki1

1Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan


*Email: takeki.mukaida@gmail.com
Introduction

Most neurons have large dendrites that span space, on which various active ion channels are expressed. On the dendrites, synapses undergo plasticity depending on the postsynaptic membrane potential and calcium ion concentration. However, how the potential and concentration, which can spread across the dendrites, affect this plasticity remains unresolved. To address this question, we integrated a multi-compartment Purkinje cell model that includes morphological details [2] with a biologically accurate plasticity model [1]. Then, we performed a numerical simulation to examine the relationship between the spatial location of synapses and the direction of their plastic change.

Methods
We used a multi-compartment Purkinje cell (PC) model, which comprises 1600 compartments classified into four types (soma, main dendrite, smooth dendrite, and spiny dendrite) [2]. To this model, we added compartments that represent spines. On each spine, we implemented a bidirectional synaptic plasticity model, composed of 18 differential equations, based on calcium ion concentration [1]. Sole activation of parallel fibers (PFs) increases the concentration slightly, resulting in long-term potentiation (LTP), whereas paired activation with a climbing fiber (CF) increases it greatly, resulting in long-term depression (LTD).
Results
The maximum amplitude of excitatory postsynaptic currents (EPSCs) in each spine was investigated when PF and CF stimulation were applied. Spine compartments were attached to all 9 compartments of the main dendrites and to 165 randomly selected compartments of the smooth and spiny dendrites. PF stimulation was applied to all spines at 8 pulses of 150 Hz per second, whereas CF stimulation was applied only to the main dendrites at one pulse per second. After 300 seconds of stimulation, the maximum amplitude of EPSCs in each spine was measured. We observed that the maximum amplitude was lower than the initial value in spines close to the main dendrite but exceeded the initial value in spines far from the main dendrite (Fig. 1).
Discussion
The present result suggests that the direction of plasticity depends on the spatial location of synapses on the dendrites. The spatial distribution of spines that underwent either LTD or LTP implies the formation of clusters of spines sharing the same direction of plastic change. This may contribute to enhancing the learning capability of a single neuron by harnessing the spatial distinctness of the spines distributed across the dendrites. Therefore, we will investigate whether neurons can use spatial shapes to realize complex learning such as pattern recognition and separation, while also incorporating experimental results to further enhance the learning capability.




Figure 1. The maximum amplitude of EPSCs in each spine after 300 seconds of stimulation.
Acknowledgements
This study was supported by MEXT KAKENHI Grant Number JP22H05161.
References
[1] Pinto, T. M., Schilstra, M. J., Roque, A. C., & Steuber, V. (2020). Binding of filamentous actin to CaMKII as potential regulation mechanism of bidirectional synaptic plasticity by β CaMKII in cerebellar Purkinje cells. Scientific Reports, 10(1), 9019. https://doi.org/10.1038/s41598-020-65870-9
[2] De Schutter, E., & Bower, J. M. (1994). An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. Journal of Neurophysiology, 71(1), 375–400. https://doi.org/10.1152/jn.1994.71.1.375


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P209: Action Potential Propagation in Branching Axons
Monday July 7, 2025 16:20 - 18:20 CEST
P209 Action Potential Propagation in Branching Axons

Erin Munro Krull*1, Lucas Swanson1, Laura Zitlow1



1Ripon College, Ripon, WI, US
*Email: munrokrulle@ripon.edu



Introduction. Action potentials (APs) typically start near the soma and travel down the axon. Axons are not simple cables, and their morphology depends on the type of cell and on whether there is axonal sprouting, generally due to trauma [2]. Moreover, AP propagation down the entire axon is not always guaranteed and is known to depend on morphology [1, 3]. Predicting AP propagation in axons is a long-standing problem, and current theory can only predict propagation if axons are symmetric [4, 5].



Methods. We use NEURON simulations to model AP propagation from an axon collateral to the end of the main axon. We vary the distance of the stimulated collateral to the initial segment (IS), as well as the lengths and positions of possible extra collaterals off the main axon. For each simulation, we find the threshold sodium conductance for AP propagation (gNaT).


Results. We show that the gNaT for axons with complex morphologies may be estimated linearly from the gNaT for simpler axons. For example, if we add an extra collateral, then the gNaT from the stimulated collateral goes up by a fixed amount, dgNaT. If we then estimate the effect of two extra collaterals by adding dgNaT for each collateral individually, the relative error is less than 0.7% (Fig. 1).


Discussion. This implies that we may predict whether an AP will propagate through a branching axon by simply adding the gNaT needed to propagate through a given path. Predictions of AP propagation using gNaT may give insight into the sodium conductance of an experimental cell, as well as into which cells may more easily propagate APs based simply on morphology. This work also gives insight into linearly decomposing results for a nonlinear PDE via a parameter.



Figure 1. Left) Calculated gNaT where the model has 2 extra branches with varying distance around the stimulated collateral’s location at 2 lambda from the IS. Right) Difference between calculated gNaT and estimated gNaT using data with no branches and 1 extra branch.
Acknowledgements
This research was supported by the NSF MSPRF, Beloit College Sanger Scholars, Beloit College Summer Scholars, and the Ripon College SOAR program.

References


https://doi.org/10.1523/JNEUROSCI.0891-17.2017
https://doi.org/10.1113/jphysiol.2002.037812
https://doi.org/10.1038/s41598-017-09184-3
https://doi.org/10.1017/CBO9780511623271


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P210: Biochemically detailed modelling of cortical synaptic plasticity: the effects of timing of neuromodulatory inputs on LTP/LTD
Monday July 7, 2025 16:20 - 18:20 CEST
P210 Biochemically detailed modelling of cortical synaptic plasticity: the effects of timing of neuromodulatory inputs on LTP/LTD

Tuomo Mäki-Marttunen*1, Verónica Mäki-Marttunen2

1Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
2NORMENT, Division of Mental Health and Addiction, Oslo University Hospital, Institute of Clinical Medicine, University of Oslo, Norway

*Email: tuomo.maki-marttunen@tuni.fi
Introduction

Synaptic plasticity is a time-sensitive phenomenon. The timing of action potentials in pre- and postsynaptic neurons is well known to influence plasticity outcomes through Hebbian spike-timing-dependent plasticity mechanisms. It is also known that the neuromodulatory state of the neuron strongly affects plasticity [1]. However, it is not fully understood how the timing of neuromodulatory inputs to the cells affects plasticity outcomes [2]. Neuromodulatory activity is important for learning and memory consolidation [3], and thus, understanding how neuromodulatory inputs interact with neuronal activity will allow us to gain a deeper view of how brain plasticity is regulated at a higher level [4-6].
Methods
Here, we use a multi-pathway model of synaptic plasticity in the cortex [7-8] to study the interaction between the timing of neuromodulatory and Ca2+ inputs to the postsynaptic spine in shaping synaptic plasticity. We investigate how different forms of plasticity are affected by the exact timing of neuromodulatory inputs from the locus coeruleus, which is the main source of norepinephrine (NE) in the mammalian brain, relative to high-frequency Ca2+ inputs.
Results
We show that when Ca2+ inputs are followed by NE inputs, strong LTP can be observed, whereas LTD occurs when the NE inputs precede the Ca2+ inputs. This effect is caused by a difference in the amount of cAMP produced and PKA activated between the two stimulation protocols: the Ca2+ -> NE protocol induces strong PKA activation and GluR1 exocytosis, while the NE -> Ca2+ protocol yields much smaller PKA activation.
Discussion
Animal studies suggest that the timing of fast activation of neuromodulatory centers is important [9] and may play a role in the oscillatory processes that underlie memory consolidation during sleep [10]. In addition, recent studies suggest that neuromodulatory activity at slower time scales during sleep presents a timed relation with oscillatory events underlying memory consolidation [11-12]. Our results suggest that a timely activation of locus coeruleus within a wave of brain activity can be crucial for the plasticity outcome, which can have important implications for our understanding of learning and memory consolidation.



Acknowledgements
Funding: Academy of Finland (330776, 358049). The authors also wish to acknowledge CSC Finland (project 2003397) for computational resources.
References

[1] https://doi.org/10.1016/j.neuron.2007.08.013
[2] https://doi.org/10.1038/s41583-020-0360-9
[3] https://doi.org/10.1016/j.neuron.2023.03.005
[4] https://doi.org/10.3389/fnsyn.2016.00038
[5] https://doi.org/10.3389/fncom.2018.00049
[6] https://doi.org/10.3389/fncir.2018.00053
[7] https://doi.org/10.7554/eLife.55714
[8] https://doi.org/10.1073/pnas.231251112
[9] https://doi.org/10.1016/j.conb.2015.07.004
[10] https://doi.org/10.1093/cercor/bhr121
[11] https://doi.org/10.1038/s41593-022-01102-9
[12] https://doi.org/10.1016/j.cub.2021.09.041
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P211: Modeling Burst Variability in Beta-Gamma Oscillations Through Layer-Specific Inhibition
Monday July 7, 2025 16:20 - 18:20 CEST
P211 Modeling Burst Variability in Beta-Gamma Oscillations Through Layer-Specific Inhibition

Manoj Kumar Nandi*1,2, Farzin Tahvili1,2, Clément Gossi-Denjean1,2, Emmanuel Procyk1,2, Charlie Wilson1,2, Matteo di Volo1,2


1Université Claude Bernard Lyon 1, Lyon, Rhône-Alpes, France
2INSERM U1208 Institut Cellule Souche et Cerveau, Bron, France

*Email: manoj.phy09@gmail.com, manoj-kumar.nandi@univ-lyon1.fr

Introduction: Cognitive functions rely on collective neuronal oscillations, captured by EEG/LFP. Beta (13-30 Hz) and gamma (30-100 Hz) oscillations are linked to cognition [1]. These oscillations occur in short bursts with variable frequencies, challenging trial averages and simplified models. In Ref. [2] we showed that spiking and neural mass models reproduce gamma bursts but not their variability. In a recent study using the adaptive exponential integrate-and-fire (AdEx) model with different percentages of somatostatin (SOM) and parvalbumin (PV) interneurons, we showed that SOM/PV density affects oscillation frequencies [3]. Experimental studies show that PV/SOM variability exists across layers [4]. Using AdEx, we model layer-specific inhibitory variability to explain burst frequency/power variability.

Methods: We analyze experimental LFP data using time-frequency spectrogram analysis to identify bursts, defined as oscillatory events lasting at least two cycles with power exceeding six times the median at that frequency. We extract burst features, including peak frequency, peak power, mean power, duration, frequency span, time of peak power, and burst size. Machine learning methods are applied to assess how these features relate to cognitive processes. We use a computational spiking network model based on the adaptive exponential integrate-and-fire (AdEx) model, incorporating layer-specific variability in inhibitory populations. This allows us to simulate the burst dynamics observed in experimental data and explore how different inhibitory neuron densities influence oscillatory behavior.
Results: From the experimental signal, we first calculate the averaged beta and gamma power in the lateral prefrontal cortex (LPFC) across layers. As shown in Ref. [5], we also observed a crossover of powers across layers, where beta power dominates in deep layers and gamma power dominates in superficial layers. The extracted burst power follows this trend, validating the burst extraction process. Using our model, we replicate this behavior, demonstrating the role of varying inhibitory neuron densities in different cortical layers.
Discussion: Our model can exhibit burst dynamics across the beta and gamma bands as observed in experimental data. Introducing distinct inhibitory populations (SOM, PV) predicts a cortical hierarchy in which increased SOM/PV densities lower oscillation frequencies. Layer-wise modeling reveals burst-like features resembling the experimental data. These findings highlight the importance of inhibitory diversity in shaping oscillatory dynamics and suggest that layer-specific variability plays a key role in modulating neural activity across frequency bands.





Acknowledgements
This work is supported by the French Ministry of Higher Education (Ministére de l’Enseignement Supérieur) and the project LABEX CORTEX (ANR-11-LABX-0042) of Université Claude Bernard Lyon 1 operated by the ANR.
References
[1] https://doi.org/10.1016/j.neuron.2016.02.028
[2] https://doi.org/10.3389/fncom.2024.1422159
[3] https://doi.org/10.1101/2025.02.23.639719
[4] https://doi.org/10.1038/nn.3446
[5] https://doi.org/10.1038/s41593-023-01554-7


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P212: Delayed feedback and the precision of a neural oscillator in weakly electric fish
Monday July 7, 2025 16:20 - 18:20 CEST
P212 Delayed feedback and the precision of a neural oscillator in weakly electric fish


Parisa Nazemi*1,2, John Lewis1,2

¹ Department of Biology, University of Ottawa, Ottawa, Canada
² Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada

*Email: pnaze017@uottawa.ca


Introduction


Precision and reliability of neural oscillations are critical for many brain functions. Among all known biological oscillators, the electric organ discharge (EOD) in wave-type electric fish is the most precise, with sub-microsecond variations in cycle periods and a coefficient of variation CV ~ 10⁻⁴ [1]. The timing of the EOD is set by a medullary pacemaker network comprising 150 neurons with weak electrical coupling. How this pacemaker network achieves such high precision is not clear. One hypothesis is that pacemaker activity is regularized by electrical feedback from the EOD itself.
Methods
To investigate this, we use a computational model of a pacemaker neuron [2] with a delayed auto-feedback current stimulus. The stimulus waveform was chosen to mimic the electric field effects of the EOD. We also use a simple pulse stimulus for comparison.
Results
Our results show that feedback either increases or decreases the CV of the period, depending on the phase of the delay: some delays led to low CV (regular oscillations) and others resulted in high CV (variable oscillations), corresponding to distinct regions of the phase response curve (PRC), with a clear relationship between CV and PRC slope (Fig. 1). Specifically, the phases associated with the lowest CV (φ_L) and highest CV (φ_H) lie near the points where the PRC crosses 1 with positive and negative slopes, respectively. We also tested other neural models [3, 4] with different PRCs and found that, as long as the PRC was type II, the results were similar.
Discussion

These findings provide insight into how time-delayed feedback influences the regularity and sensitivity of neural oscillations. A positive PRC slope confers greater stability and promotes regularity under repeated fixed-delay stimulation. This mechanism could explain how the pacemaker network in weakly electric fish maintains its exceptional regularity. More broadly, our findings suggest that feedback-driven stabilization may be a general principle for ensuring precise timing in biological oscillators.



Figure 1. Relationship between the coefficient of variation (CV) of periods and the phase response curve (PRC). A: normalized CV of periods. B: PRC. φ_L and φ_H mark the intersections of the PRC with the baseline at 1, where the slopes are positive and negative, respectively.
Acknowledgements
Supported by an NSERC Discovery Grant to Dr. John Lewis
References

1. https://doi.org/10.1073/pnas.95.8.4684
2. https://doi.org/10.1038/s41598-020-73566-3
3. https://doi.org/10.1523/JNEUROSCI.2715-06.2007
4. https://doi.org/10.7551/mitpress/2526.001.0001


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P213: logLIRA: a novel algorithm for intracortical microstimulation artifact suppression
Monday July 7, 2025 16:20 - 18:20 CEST
P213 logLIRA: a novel algorithm for intracortical microstimulation artifact suppression

Francesco Negri*1, David J. Guggenmos2, Federico Barban1,3

1Department of Informatics, Bioengineering, Robotics, System Engineering (DIBRIS), University of Genova, Genova, Italy

2Department of Rehabilitation Medicine and the Landon Center on Aging, University of Kansas Medical Center, Kansas City, KS, United States

3IRCCS Ospedale Policlinico San Martino, Genova, Italy


*Email: francesco.negri@edu.unige.it


Introduction
Intracortical microstimulation is a key tool to study neuropathologies and ultimately develop novel therapies [1, 2]. The analysis of short-latency evoked activity is essential to understand cortical reorganization driven by targeted electrical pulses [1, 3]. However, large voltage fluctuations known as stimulation artifacts hinder the recording and analysis of neural responses [4-6]. Existing rejection methods struggle with highly spatially and temporally variable stimulus artifacts or rely on restrictive assumptions (e.g., absence of signal saturation) [5-8]. We propose a novel algorithm using piecewise linear interpolation of logarithmically distributed points, alongside a framework to generate a semisynthetic dataset for benchmarking.

Methods
Our method, logLIRA, begins with a 1 ms blanking interval, dynamically extended to the end of signal saturation when present. Interpolation points are then sampled logarithmically, ensuring denser sampling where the signal changes rapidly. Piecewise linear interpolation estimates the artifact, which is then subtracted. Any remaining secondary artifacts are mitigated by clustering the first 2 ms of the recovered signals across trials, then averaging and subtracting highly time-locked components. Finally, trial discontinuities are adjusted, and the same spike detection is applied to both ground-truth and cleaned data for comparison.
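A minimal sketch of the core interpolation step, assuming a single-channel trace and illustrative parameter names (the actual logLIRA implementation, including saturation handling and secondary-artifact clustering, is more involved):

```python
import numpy as np

def suppress_artifact(trial, fs, stim_idx, blank_ms=1.0, n_points=20):
    """Log-spaced piecewise-linear artifact estimation (illustrative sketch).
    trial: 1-D voltage trace with one stimulation artifact at stim_idx."""
    blank = int(blank_ms * 1e-3 * fs)        # fixed 1 ms blanking interval
    tail = trial[stim_idx + blank:].astype(float)
    n = len(tail)
    # Logarithmically spaced anchors: dense just after the stimulus, where
    # the artifact changes fastest, sparse along its slow tail.
    anchors = np.unique(np.concatenate(
        ([0], np.logspace(0, np.log10(n - 1), n_points).astype(int))))
    # Piecewise linear interpolation through the anchors approximates the
    # artifact; subtracting it leaves the neural signal.
    artifact = np.interp(np.arange(n), anchors, tail[anchors])
    cleaned = trial.astype(float)
    cleaned[stim_idx:stim_idx + blank] = 0.0   # blanked samples zeroed
    cleaned[stim_idx + blank:] = tail - artifact
    return cleaned
```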



Results
We evaluated logLIRA against three stimulus artifact rejection algorithms (dynamic averaging [5], global polynomial fitting [10], and SALPA [4]) using a semisynthetic dataset as ground truth. Root-mean-square error and cross-correlation at zero lag were calculated for varying mean firing and artifact rates. SALPA and logLIRA outperformed their competitors, excelling in both metrics (Fig. 1A). Notably, logLIRA significantly reduced the blanking interval duration (Fig. 1B), enabling better recovery of short-latency evoked responses while controlling secondary artifacts and thus false positives. Though not fully apparent in the semisynthetic dataset, which lacks a direct stimulus-spike correlation, this advantage is clear in real data (Fig. 1C).



Discussion
With this work, we introduced a reliable and effective method for the rejection of stimulus artifacts, highlighting the importance of handling secondary artifacts that emerge from a reduced blanking interval or from poor suppression due to numerous factors, including signal saturation. Trustworthy recovery of short-latency evoked activity is poised to greatly benefit neuroscientific research: logLIRA could improve the estimation of mesoscale effective connectivity by means of the SEEC method [10], aiding the understanding of stimulation-driven functional reorganization of the cortex [1, 3], and eventually enhancing the effectiveness of neuroprosthetic systems aimed at treating neuropathologies, improving the quality of life of millions of patients [1-3, 11].






Figure 1. Performance comparison of stimulus artifact rejection algorithms on both semisynthetic and real data. A. Cross-correlation at zero lag for different values of mean artifact rate. B. Blanking intervals distribution for logLIRA and SALPA in the benchmark dataset. C. Example of recovered short-latency evoked activity from a real signal. The red vertical bars depict the 1 ms blanking interval.
Acknowledgements
Work supported by #NextGenerationEU (NGEU) and funded by the Italian Ministry of University and Research (MUR), National Recovery and Resilience Plan (PNRR), project MNESYS (PE0000006) - (DN. 1553 11.10.2022).
References
1.https://doi.org/10.1073/pnas.1316885110
2.https://doi.org/10.3389/fnins.2024.1363128
3.https://doi.org/10.1038/nature05226
4.https://doi.org/10.1016/S0165-0270(02)00149-8
5.https://doi.org/10.1016/j.jneumeth.2010.06.005
6.https://doi.org/10.1016/j.jneumeth.2024.110169
7.https://doi.org/10.1371/journal.pcbi.1005842
8.https://doi.org/10.1088/1741-2552/aaa365
9.https://doi.org/10.1088/1741-2552/ab7a4f
10.https://doi.org/10.1016/j.jneumeth.2022.109767

11.https://doi.org/10.3390/brainsci12111578
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P214: A Canonical Microcircuit of Predictive Coding Under Efficient Coding Principles
Monday July 7, 2025 16:20 - 18:20 CEST
P214 A Canonical Microcircuit of Predictive Coding Under Efficient Coding Principles

Elnaz Nemati*1, Catherine E. Davey1, Hamish Meffin1,2, Anthony N. Burkitt1,2

1Department of Biomedical Engineering, The University of Melbourne, Victoria, Australia.
2Graeme Clark Institute, The University of Melbourne, Victoria, Australia.

*Email: fnemati@student.unimelb.edu.au

Introduction

Predictive coding describes how the brain integrates sensory inputs with expectations by minimizing prediction errors [1]. Studies show increased neural activity in cortical L2/3 during sensory mismatches [2], offering insights into disorders like autism [3] and perceptual phenomena such as visual illusions [4]. Canonical microcircuit models [5, 6] advance understanding but often overlook spiking dynamics, detailed inhibitory mechanisms, and adherence to Dale's law. They also neglect the distinct roles of cortical layers, especially L4 and L5/6 [7]. The Deneve framework [8] provides another perspective, modeling neurons as decoders in which a spike is triggered when the membrane potential, representing a reconstruction error, exceeds a threshold.

Methods
This study extends Deneve's predictive coding framework [8] by assigning Gabor receptive fields to layer 4 neurons, creating a V1-like, biologically inspired feature extractor. It introduces two-compartment neurons in layer 2/3 for prediction-error signaling within a balanced E/I network. Our hierarchical model mirrors canonical circuits, using spiking neurons and simplified inhibitory populations, Parvalbumin (PV) and Somatostatin (SOM), inspired by Hertäg and Clopath [7]. Layer 5/6 contains similar neurons, generating predictions balanced by these populations (Fig. 1a,b). Employing spiking neurons with leaky integrate-and-fire dynamics, the model processes whitened images in ON/OFF channels, matching experimentally observed LGN responses.
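For illustration, a layer-4 receptive field of the kind described can be generated as a Gabor patch; the function and parameter values below are our illustrative choices, not those of the study:

```python
import numpy as np

def gabor_rf(size=16, theta=0.0, phase=0.0, sf=0.15, sigma=None):
    """Gabor receptive field for a model L4 neuron (illustrative values).
    theta: preferred orientation (rad); phase: spatial phase (rad);
    sf: spatial frequency (cycles/pixel)."""
    sigma = sigma or size / 6
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # axis along orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * sf * x_rot + phase)

# A bank of such filters, tiled over orientations and phases, yields the
# orientation- and phase-selective feedforward weights probed in Fig. 1d,e.
```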
Results
The model yields L4 neurons displaying E/I balance (Fig. 1c) and orientation and phase selectivity (Fig. 1d,e), demonstrating biologically realistic V1 feature extraction. Layer 2/3 neurons robustly signal prediction errors across matched (FF=FB), mismatched (FF≠FB), feedforward-only (FF>FB), and feedback-only (FB>FF) conditions. Neuronal responses matched experimental evidence: matched inputs minimized activity, while mismatched inputs elicited strong prediction-error signaling (Fig. 1f). Critically, layer 5/6 neurons effectively integrated prediction errors from layer 2/3, significantly reducing sensory reconstruction errors and validating their predictive coding function.
Discussion
The model proposes that predictive coding effectively describes cortical function through specific feedback interactions within canonical cortical circuits. It highlights the essential roles of distinct neuronal compartments and inhibitory interneuron populations, specifically PV and SOM neurons, in modulating the E/I balance. The close alignment of theoretical predictions with experimental observations supports the model's validity and enhances our understanding of cortical dynamics. Additionally, the model provides a robust foundation for future research in perceptual neuroscience, the development of neuromorphic systems, and the exploration of clinical interventions for disorders involving disrupted predictive coding mechanisms.




Figure 1. (a) Predictive coding microcircuit representation. (b) Detailed circuitry within each layer and connectivity. (c) Excitatory, inhibitory, and net currents, showing balanced currents. (d) Orientation Bias Index and (e) Phase Bias Index of layer 4 excitatory populations. (f) Spike responses in layer 2/3 to various feedforward and feedback inputs.
Acknowledgements
-
References
1.https://doi.org/10.1038/4580
2.https://doi.org/10.1016/j.neuron.2020.09.024
3.https://doi.org/10.1152/jn.00543.2015
4.https://doi.org/10.1016/j.neunet.2021.08.024
5.https://doi.org/10.1016/j.neuron.2012.10.038
6.https://doi.org/10.1016/j.neuron.2018.10.003
7.https://doi.org/10.1073/pnas.2115699119

8.https://doi.org/10.1038/nn.4243
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P215: Superior Temporal Interference Stimulation Targeting using Surrogate-based Multi-Goal Optimization
Monday July 7, 2025 16:20 - 18:20 CEST
P215 Superior Temporal Interference Stimulation Targeting using Surrogate-based Multi-Goal Optimization

Esra Neufeld*1, Cedric Bujard1, Melanie Steiner1, Fariba Karimi1,2, Niels Kuster1,2
1IT’IS Foundation, Zürich, Switzerland

2Swiss Federal Institute of Technology (ETH Zurich), Zürich, Switzerland

*Email: neufeld@itis.swiss

Introduction

Temporal Interference (TI) stimulation, an innovative form of transcranial electrical stimulation [1], uses multiple kHz currents with frequency offsets in the brain's physiological range to steerably and selectively stimulate deep targets. However, the complex and heterogeneous head environment, along with substantial inter-subject variability, makes it challenging to identify suitable stimulation parameters. An easy-to-use TI Planning (TIP) application was published [2] to facilitate study design and stimulation personalization. However, due to computational limitations, brute-force exploration of the full parameter space was not feasible, requiring users to impose pre-constraints. This often leads to suboptimal settings and makes the tool less accessible for beginners.

Methods
TIP generates detailed head models from T1-weighted MRI data [3], co-registers the ICBM152 atlas [4] for target region identification, assigns DTI-based anisotropic conductivity maps [5], places electrodes according to the 10-10 system, and performs EM simulations to establish a full E-field basis. Surrogate-based optimization (SBO) [6] combines an iteratively refined Gaussian-process (GP) surrogate with a multi-objective genetic algorithm (MOGA) [7] to identify the front of Pareto-optimal conditions (electrode locations and currents) with regard to the goals of 1) maximizing stimulation strength and 2) selectivity, while 3) avoiding collateral stimulation.
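The Pareto-front concept at the heart of the optimization can be sketched compactly; the scoring convention below (higher is better on every objective) is an illustrative assumption:

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated configurations.
    scores: (n, 3) array of per-configuration objective values, e.g.
    (stimulation strength, selectivity, -collateral stimulation)."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # i dominates j if it is >= on all objectives and > on at least one
        dominated = (np.all(scores <= scores[i], axis=1)
                     & np.any(scores < scores[i], axis=1))
        keep &= ~dominated
    return np.flatnonzero(keep)

# In SBO, the GP surrogate supplies cheap scores for candidate electrode
# configurations, and the MOGA iteratively refines this non-dominated set.
```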


Results
Based on the identified Pareto front, users can interactively weight the three conflicting goals and compare configurations with comparable performances based on quantified quality metrics and visualized distributions. The iterative SBO approach dramatically reduces the number of full evaluations required to predict the performance metrics (<100 instead of millions), enabling comprehensive exploration of high-dimensional parameter spaces (5n − 1 dimensions, with n ≥ 2 the number of channels).
Discussion
A fully automatic, online accessible tool for personalized TI stimulation planning has been established that leverages AI and image-based simulations. By introducing hybridized, iterative surrogate modeling and MOGA, systematic, comprehensive, and computationally tractable optimization in high-dimensional parameter spaces is achieved and interactive weighting of conflicting objectives becomes possible. The comprehensive search reduces the level of required user expertise, removes arbitrariness, and ensures identification of optimal conditions. The method readily generalizes to non-classic forms of multi-channel TI.





Acknowledgements
---
References
[1]https://doi.org/10.1016/j.cell.2017.05.024
[2] https://tip.itis.swiss
[3]https://doi.org/10.1088/1741-2552/adb88f
[4]https://doi.org/10.1098/rstb.2001.0915
[5]https://doi.org/10.1073/pnas.171473898
[6]https://doi.org/10.1007/978-3-642-20859-1_3
[7] https://doi.org/10.1007/978-981-19-8851-6_31-1
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P216: Modeling spreading depolarization in control and ischemic neocortical microcircuits using immunostaining to identify capillaries
Monday July 7, 2025 16:20 - 18:20 CEST
P216 Modeling spreading depolarization in control and ischemic neocortical microcircuits using immunostaining to identify capillaries

Adam JH Newton*1, Craig Kelley2, Siyan Guo3, Joy Wang3, Sydney Zink4, Marcello DiStasio4,5, Robert A McDougal3,5,6,7, William W Lytton1,8,9,10

1Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, New York

2Department of Biomedical Engineering, Columbia University, New York, NY

3Department of Biostatistics, Yale University, New Haven, CT, United States

4Department of Pathology, Yale School of Medicine, New Haven, CT, United States

5Wu Tsai Institute, Yale University, New Haven, CT, United States

6Department of Biomedical Informatics and Data Science, Yale University, New Haven, CT, United States

7Program in Computational Biology and Biomedical Informatics, Yale University, New Haven, CT, United States

8Department of Neurology, SUNY Downstate Health Sciences University, Brooklyn, New York

9Department of Neurology, Kings County Hospital Center, Brooklyn, New York

10The Robert F. Furchgott Center for Neural and Behavioral Science, Brooklyn, New York
*Email: adam.newton@neurosim.downstate.edu








Introduction
Neural information processing is energy-intensive, with a particularly high cost for restoring ion homeostasis following action potentials. This high energy demand leaves the system vulnerable to failures of homeostasis, such as spreading depolarization (SD). SDs are waves of prolonged depolarization, preceded by a brief period of hyperexcitability, that propagate through grey matter at 1-9 mm/min [1]. Multiple neurological disorders can lead to SD, including migraine aura, epilepsy, traumatic brain injury, and ischemic stroke.



Methods
We modeled point neurons with Hodgkin-Huxley-style channels augmented with homeostatic mechanisms, including the Na+/K+-ATPase, NKCC1, KCC2, and dynamic volume changes. Astrocytic buffering was modeled as a field of oxygen-dependent and oxygen-independent clearance of extracellular K+. Connectivity was based on prior models, and the weights and distribution of external drive were scaled to account for differences between conductance-based integrate-and-fire models [2,3]. A 2.0 x 2.3 cm cross-section of the human cortical plate in V1, immunostained for CD34, was used to determine the locations of 918 capillaries (mean capillary density: 199.6/cm2; mean±SD capillary cross-sectional area: 16.7±11.9 μm2).
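As one concrete example of such homeostatic machinery, spreading-depolarization models often use a sigmoidal Na+/K+-ATPase pump rate of the following form (e.g., Wei et al., 2014); whether this exact parameterization is used here is our assumption:

```latex
% Common Na+/K+-ATPase pump-rate form in SD models; rho is the maximal
% pump strength, typically scaled locally by oxygen availability.
\[
  I_{\mathrm{pump}} \;=\; \frac{\rho}
    {\bigl(1 + \exp\bigl(\tfrac{25 - [\mathrm{Na}^+]_i}{3}\bigr)\bigr)
     \bigl(1 + \exp\bigl(5.5 - [\mathrm{K}^+]_o\bigr)\bigr)}
\]
```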


Results
We used NEURON/RxD/NetPyNE to simulate 13,000 neurons representing ~1 mm³ of mouse cortex (layers 2-6), monitoring the concentrations of Na+, K+, Cl-, and oxygen, both intra- and extracellularly [4-7]. Spreading depolarization could be reliably triggered in each layer by elevating extracellular K+, with differences in propagation speed between layers.


Discussion
We use this model to explore the hypothesis that vascular heterogeneity leads to areas where neurons and astrocytes are well supplied with oxygen and can better maintain normal activity following an insult (increased extracellular K+ or reduced perfusion). We also examine the mechanisms that could give rise to the greater susceptibility and propagation speeds observed in the superficial layers compared with the deep cortical layers [8].




Acknowledgements
This research was funded by the National Institute of Mental Health, National Institutes of Health, grant number R01 MH086638, with HPC time from an NIH S10 award, 1S10OD032417-01, and the Yale Center for Research Computing McCleary cluster.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
1.https://doi.org/10.1007/s12028-021-01429-4
2.https://doi.org/10.1093/cercor/bhs358
3.https://doi.org/10.1162/neco_a_01400
4.https://doi.org/10.3389/fninf.2022.884046
5.https://doi.org/10.3389/fninf.2018.00041
6.https://doi.org/10.7554/eLife.44494
7.https://doi.org/10.1523/ENEURO.0082-22.2022
8.https://doi.org/10.1177/0271678X16659496
Speakers
Robert McDougal

Assistant Professor, Yale University, USA
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P217: The Generalized Activating Function: Accelerating Axonal Dynamics Modeling for Spinal Cord Stimulation Optimization
Monday July 7, 2025 16:20 - 18:20 CEST
P217 The Generalized Activating Function: Accelerating Axonal Dynamics Modeling for Spinal Cord Stimulation Optimization

Javier García Ordóñez*1,2, Taylor Newton1, Abdallah Alashqar3,4, Andreas Rowald3,4, Esra Neufeld1, & Niels Kuster1,5

1 IT’IS Foundation, Zürich, Switzerland
2 Zürich MedTech AG, Zürich, Switzerland
3 Department of Medical Informatics, Biometry and Epidemiology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
4 Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
5 Swiss Federal Institute of Technology (ETH Zurich), Zürich, Switzerland

*Email: ordonez@itis.swiss

Introduction

The classical Activating Function (AF) provides a fast, linear estimator of membrane polarization as a predictor of stimulation by extracellular electric potential exposure [1]. While computationally efficient, the classical AF fails to account for membrane leakage currents, diffusive interactions between adjacent axonal segments, complex fiber models (multi-cable, with periaxonal and paranodal compartments), and the influence of the stimulation waveform, limiting its accuracy and usefulness in complex neurostimulation scenarios.

Methods
The Generalized Activating Function (GAF) is a biophysics-based predictor that overcomes these limitations while preserving computational efficiency. The GAF extends the classical framework by convolving the extracellular potential with a Green's function kernel to account for the dynamics of membrane polarization, including axial currents and membrane leakage. A fast Fourier transform is used for the convolutions, producing spike predictions more than 1000× faster than conventional compartmental modeling. The GAF's formulation accurately predicts dynamic responses in complex fiber models, such as the McIntyre-Richardson-Grill myelinated fiber model [2].
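A toy illustration of the idea, assuming a simple single-cable fiber, an exponential membrane Green's function, and illustrative parameter names (the published GAF handles multi-compartment fiber models and more):

```python
import numpy as np

def gaf_estimate(v_e, dt, dx, tau_m=1e-3, lam=0.5e-3):
    """Toy GAF-style polarization estimate for a straight fiber.
    v_e: (time, position) extracellular potential samples along the fiber."""
    # Classical activating function: second spatial difference of v_e
    # (np.roll imposes periodic boundaries, used here only for brevity)
    af = (np.roll(v_e, -1, axis=1) - 2 * v_e + np.roll(v_e, 1, axis=1)) / dx**2
    # Temporal Green's function of a leaky membrane: exponential decay
    n = v_e.shape[0]
    t = np.arange(n) * dt
    g = np.exp(-t / tau_m) * dt / tau_m
    # FFT-based convolution in time folds membrane dynamics into the AF,
    # turning the instantaneous estimator into a waveform-aware one
    AF = np.fft.rfft(af, n=2 * n, axis=0)
    G = np.fft.rfft(g, n=2 * n)
    v_m = np.fft.irfft(AF * G[:, None], n=2 * n, axis=0)[:n]
    return lam**2 * v_m   # lam: membrane space constant (illustrative)
```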
Results
We first verified the GAF by reproducing benchmark experimental and computational data (e.g., strength-duration curves and diameter-dependent rheobase values for different fiber types). Next, we applied the GAF to a clinically validated, realistic model of spinal cord stimulation (SCS) [3]. The GAF's spike predictions matched those of full electrophysiological simulations, with compute times reduced from hours to seconds. Finally, we leveraged the GAF's speed and efficiency to explore the design of superior stimulation waveforms and electrode configurations that enhance the selectivity and energy efficiency of SCS. GAF-guided pulse shape optimization discovered charge-balanced waveforms that doubled recruitment efficacy or reduced power consumption five-fold relative to commonly applied stimulation waveforms.
Discussion
These results demonstrate that the GAF dramatically accelerates neurostimulation modeling without significant loss of accuracy, thereby facilitating large-scale explorations of stimulation parameters and the identification of personalized neuromodulation strategies. By bridging the gap between computational modeling and clinical practice, the GAF paves the way for optimized, patient-specific neurostimulation therapies.





Acknowledgements
No acknowledgements.
References
[1] https://doi.org/10.1109/TBME.1986.325670
[2] https://doi.org/10.1152/jn.00353.2001
[3] https://doi.org/10.1038/s41591-021-01663-5




Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P218: Using AI Technologies To Boost the Simulation and Personalization of Brain Network Models
Monday July 7, 2025 16:20 - 18:20 CEST
P218 Using AI Technologies To Boost the Simulation and Personalization of Brain Network Models

Alessandro Fasse1, Chiara Billi*1, Taylor Newton1, Esra Neufeld1

1 Foundation for Research on Information Technologies in Society (IT’IS), Zurich, Switzerland

*Email: billi@zmt.swiss

Introduction

Neural mass models (NMMs) approximate the collective dynamics of neuronal populations through aggregated state variables, facilitating large-scale brain network simulations and the calculation of in silico electrophysiological signals. Despite their utility in exploring emergent network phenomena across brain states, scaling these models to whole-brain simulations imposes steep computational costs. To address this, we leveraged the inherent parallelism of NMM computations and their AI-like computational structure to develop a GPU-accelerated framework based on computation graphs.

Methods
Our framework simulates networks of Jansen-Rit (JR)-type NMMs [1] in both region- and surface-based configurations, while supporting weight matrix-based inter-region connectivity, local coupling definition, and stochastic integration. All JR model state variables are consolidated into a single PyTorch [2] tensor, enabling efficient batch processing across multiple GPUs with mixed-precision (16-, 32-, and 64-bit) support. Structuring the entire computation as a graph enables access to automatic differentiation, allowing gradient tracking throughout the simulation and gradient-based optimization of model parameters.
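To illustrate the tensorized formulation, a minimal Jansen-Rit update for a batch of nodes might look as follows (standard JR constants; the function and variable names are ours, not the framework's API):

```python
import torch

# Standard Jansen-Rit constants (Jansen & Rit, 1995)
A, B, a, b = 3.25, 22.0, 100.0, 50.0
C1, C2, C3, C4 = 135.0, 108.0, 33.75, 33.75
e0, v0, r = 2.5, 6.0, 0.56

def S(v):
    """Population sigmoid converting membrane potential to firing rate."""
    return 2 * e0 / (1 + torch.exp(r * (v0 - v)))

def jr_step(y, dy, p, dt):
    """One Euler step for a batch of JR nodes, with all state in two tensors.
    y, dy: (n_nodes, 3) PSPs and their derivatives; p: (n_nodes,) input."""
    y0, y1, y2 = y.unbind(dim=1)
    d2 = torch.stack([
        A * a * S(y1 - y2) - 2 * a * dy[:, 0] - a**2 * y0,
        A * a * (p + C2 * S(C1 * y0)) - 2 * a * dy[:, 1] - a**2 * y1,
        B * b * C4 * S(C3 * y0) - 2 * b * dy[:, 2] - b**2 * y2,
    ], dim=1)
    return y + dt * dy, dy + dt * d2

# Because every operation is a differentiable tensor op, gradients with
# respect to parameters (e.g., coupling weights) come for free via autograd.
```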
Results
We compared our framework’s performance to The Virtual Brain (TVB) [3], a widely used library for whole-brain NMM simulations. For modeling 10 seconds of stochastic activity in a 20k-node network, our method completed the task in 20 seconds, compared to 18 minutes with TVB - a ~55-fold speedup. The implementation was verified, e.g., by reproducing results from [4] on sharp transitions in network synchronization (measured by the Kuramoto parameter) as a function of the global coupling coefficient.
Discussion
Our findings highlight the feasibility of large-scale, high-fidelity neural mass simulations with runtimes suitable for online or iterative workflows. By leveraging computation graphs for parallel processing and automatic differentiation, the framework opens avenues for gradient-based parameter fitting and real-time state estimation. The replication of established emergent phenomena supports the model’s validity and suggests broader applications including pathological and adaptive networks. Future work will extend these capabilities to other NMM model types and explore integration with multi-scale brain models.



Acknowledgements
No acknowledgements.
References
1. https://doi.org/10.1016/j.neuroimage.2023.119938
2. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., & Lerer, A. (2017). Automatic differentiation in PyTorch.
3. https://doi.org/10.3389/fninf.2013.00010
4. https://doi.org/10.1371/journal.pone.0292910


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P219: Training the Drosophila Connectome as an Autoencoder - Reproduction of Direction Selectivity in T4/T5 Cells -
Monday July 7, 2025 16:20 - 18:20 CEST
P219 Training the Drosophila Connectome as an Autoencoder - Reproduction of Direction Selectivity in T4/T5 Cells -

Naoya Nishiura*1, Keisuke Toyoda1, Masataka Watanabe1

1The University of Tokyo, Tokyo, Japan

*Email: naoya-nishiura@g.ecc.u-tokyo.ac.jp

Introduction

Connectome-based modeling provides a powerful framework to investigate the inner workings of the biological brain, but its supervised training often relies on external labels unavailable in real environments [1]. In the Drosophila visual system, prior work employed a connectome-constrained network with optical flow as the teaching signal [2], whereas neural circuits in Drosophila do not have access to such labels. To address this limitation, we adopted an autoencoder-like strategy: a network that reconstructs R1-8 neural responses. We confirmed the development of direction selectivity in T4 and T5 cells, leveraging only the brain's innate connectivity [3].

Methods
We adopted the untrained circuitry of the deep mechanistic network (DMN) of the Drosophila optic lobe [2], removing the artificial network that receives optical flow as the teaching signal. Instead, we introduced a set of "phantom" R1-8 neurons that only receive feedback from the optic lobe. During training, the model's sensory input was compared to the outputs of these phantom neurons via an L2 reconstruction loss. We preserved the native connectome structure, including the hexagonal columnar organization. Standard gradient-based optimization was used to update neuronal and synaptic parameters.
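In schematic form, the training objective reduces to an autoencoder loss between the sensory input and the feedback-driven phantom photoreceptors (function and variable names are illustrative; the DMN internals follow [2]):

```python
import torch

def reconstruction_loss(dmn, frames):
    """L2 autoencoder objective (sketch). frames: (time, n_columns, 8)
    photoreceptor R1-8 activity presented to the network."""
    # The dmn callable is assumed to return the activity of the phantom
    # R1-8 units, which receive only feedback from the optic-lobe circuitry.
    phantom = dmn(frames)
    return torch.mean((phantom - frames) ** 2)

# Gradient descent on this loss tunes neuronal and synaptic parameters
# while the connectome's wiring pattern itself stays fixed.
```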
Results
After training, the DMN produced retinal-like activity patterns in its intermediate layers, effectively mapping spatial shadows across the hexagonal retinotopic array [4]. Notably, T4 neurons acquired direction-selective responses comparable to those observed in supervised settings, though the preferred directions were not identical to biological measurements [2]. These results demonstrate that training a connectome-based autoencoder architecture yields motion-selective T4 and T5 neurons, reproducing the function of the Drosophila optic lobe.
Discussion
Our findings show that biologically plausible, connectome-constrained networks can self-organize fundamental visual computations through an autoencoder framework, without explicit teaching signals [2]. By exploiting Drosophila's neural connectivity and reconstructing phantom R1-8 neurons, the model reveals how intrinsic circuit architecture can lead to the acquisition of direction-selective cells. Our results illustrate the potential of training connectome networks under a biologically plausible architecture, namely autoencoders, which may lead to near-ground-truth neural dynamics.




Acknowledgements
This work has been supported by the Mohammed bin Salman Center for Future Science and Technology for Saudi-Japan Vision 2030 at The University of Tokyo (MbSC2030) and JSPS KAKENHI Grant Number 23K25257.
References
[1]https://doi.org/10.3389/fncom.2016.00094
[2]https://doi.org/10.1038/s41586-024-07939-3
[3]https://doi.org/10.7554/eLife.40025
[4]https://doi.org/10.1073/pnas.1509820112

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti


16:20 CEST

P221: Psilocybin Accelerates EEG Microstate Transitions and Elevates Approximate Entropy
Monday July 7, 2025 16:20 - 18:20 CEST
P221 Psilocybin Accelerates EEG Microstate Transitions and Elevates Approximate Entropy

Filip Novický1*, Adeel Razi2,3,4, Fleur Zeldenrust1


1Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, Nijmegen, 6525 AJ, Netherlands
2Turner Institute for Brain and Mental Health, Monash University, Clayton, 3168, Victoria, Australia
3Wellcome Centre for Human Neuroimaging, University College London, London, WC1N 3AR, UK
4CIFAR Azrieli Global Scholars Program, CIFAR, Toronto, Canada


*Email: filip.novicky@donders.ru.nl
Introduction

While psilocybin’s therapeutic potential is well-documented, its effects on brain function remain incompletely understood. The relaxed beliefs under psychedelics (REBUS) theory proposes that psychedelics weaken the brain's rigid thought patterns by reducing the influence of existing mental frameworks and hence increasing neural entropy [1]. This study investigated how psilocybin affects the brain’s spatiotemporal dynamics at the millisecond scale using EEG. In addition, this study examined whether its effects are modulated by mindfulness training and different cognitive states.


Methods
We analyzed EEG data from 63 participants (33 with mindfulness training, 30 without) during four conditions: video watching, resting state, meditation, and music listening, both before and after consumption of psilocybin (19 mg). Using EEG microstate analysis, we examined the temporal characteristics of four canonical brain states [2]. We complemented this with approximate entropy analysis to quantify signal complexity [3]. Statistical comparisons were performed across conditions, groups, and drug states with FDR correction.
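Approximate entropy as introduced in [3] can be computed compactly; the following is a reference-style sketch with conventional parameter choices (m = 2, tolerance r = 0.2 SD), which may differ from the study's exact settings:

```python
import numpy as np

def approximate_entropy(x, m=2, r_frac=0.2):
    """Approximate entropy (Pincus, 1991): regularity of a time series.
    m: embedding dimension; tolerance r = r_frac * SD of the signal."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def phi(m):
        # All overlapping length-m templates of the signal
        emb = np.lib.stride_tricks.sliding_window_view(x, m)
        # Chebyshev distance between every pair of templates (O(N^2) memory;
        # fine for the short EEG segments typically analyzed)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (d <= r).mean(axis=1)   # match fractions, self-matches included
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```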


Results
Psilocybin significantly altered brain dynamics during the eyes-closed conditions, increasing microstate occurrence rates while decreasing their durations. Mindfulness training showed no significant effect on these changes. Approximate entropy analysis revealed increased signal complexity, particularly during the eyes-closed states. While brain activity patterns primarily differed between eyes-open and eyes-closed states, psilocybin notably diminished the typical neural activity differences between passive rest and attentional states (meditation and music).


Discussion
Our findings support the REBUS theory's prediction of increased neural entropy under psychedelics, particularly during eyes-closed states. The combination of increased microstate transition rates and elevated signal complexity suggests that psilocybin creates a more dynamic and less constrained brain state, in agreement with previous studies [4]. The results thus suggest that psychedelics can temporarily alter the brain's typical processing patterns.





Acknowledgements
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 953327. This work benefited from the guidance of Adeel Razi's lab regarding the PsiConnect dataset.
References
1. https://doi.org/10.1124/pr.118.017160
2.https://doi.org/10.1016/j.neuroimage.2017.11.062
3.https://doi.org/10.1073/pnas.88.6.2297
4. https://doi.org/10.1038/srep46421
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P222: Computational framework for analyzing oscillatory patterns in neural circuit models
Monday July 7, 2025 16:20 - 18:20 CEST
P222 Computational framework for analyzing oscillatory patterns in neural circuit models

Nikita Novikov1*, Chelsea Ekwughalu1,2, Samuel Neymotin1,3, Salvador Dura-Bernal1,4

1Center for Biomedical Imaging & Neuromodulation, The Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA
2Department of Physics, Barnard College, New York, NY, USA
3Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA
4Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA

*Email: nikknovikov@gmail.com
Introduction

Neural oscillations coordinate brain activity, with abnormal patterns linked to neurological disorders. Understanding their emergence from biological parameters is crucial for effective intervention. While biophysically detailed models provide mechanistic insight, their complexity makes direct analysis computationally expensive and mathematically intractable. To address this, we present a computational framework for the systematic exploration of key parameters governing oscillatory dynamics in large-scale neural networks.
Methods
Our approach relies on the eigenmode decomposition of frequency-dependent transfer matrices, originally proposed in [1] for LIF neurons. In contrast to [1], we do not derive transfer matrices analytically; instead, we developed a toolbox for their numerical estimation, extending the method to arbitrary models. The estimation proceeds by automatic construction and simulation of surrogate models, in which a single population remains intact, the others are replaced by equivalent spike generators, and a sinusoidal signal is added to the probed input. The toolbox is built on top of the NetPyNE framework [2] and supports high-performance parallel simulations.
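In outline, each transfer coefficient can be estimated by projecting the probed population's response onto the probe frequency, and the resulting matrix eigendecomposed; the function below is our illustrative sketch, not the toolbox's API:

```python
import numpy as np

def transfer_coefficient(rate, probe_freq, fs, amp, t_skip=1.0):
    """Complex transfer coefficient from one surrogate simulation (sketch).
    rate: recorded population rate while a sinusoid of amplitude amp and
    frequency probe_freq (Hz) is added to one input; fs: sampling rate."""
    t = np.arange(len(rate)) / fs
    keep = t >= t_skip                            # discard the transient
    ref = np.exp(-2j * np.pi * probe_freq * t[keep])
    # Projection onto the probe frequency yields gain and phase shift
    return 2 * np.mean(rate[keep] * ref) / amp

# Collecting coefficients for all population pairs gives a transfer matrix
# T(f); eigenmodes with eigenvalues close to 1 dominate the network's
# oscillatory activity (cf. Bos et al., 2016):
# eigvals, eigvecs = np.linalg.eig(T_f)
```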
Results
We validated our approach on a simplified model of cortical layers 2/3 and 4, demonstrating that it accurately decomposes network activity into oscillatory modes and predicts the amplitudes and phases of the oscillations (Fig. 1A-C). Using the computed transfer matrices, we estimated the effects of synaptic weight perturbations by modifying the relevant transfer coefficients and analyzing the resulting eigenmodes, without needing full-model simulations. These predictions closely matched direct simulations of the perturbed model (Fig. 1D, E), confirming that our method reliably identifies key connections that shape oscillatory activity.
Discussion
We propose a framework for systematically exploring the relationship between biological parameters and emergent oscillations. Our tool estimates inter-population transfer coefficients through multiple independent simulations of simple surrogate models, a process well-suited for efficient parallelization. Once computed, these coefficients provide insight into the full model’s oscillatory modes and their sensitivity to parameter perturbations. Our results validate the approach, demonstrating its potential for analyzing neural circuits and informing future neurostimulation and pharmacological interventions.




Figure 1. A – power spectral densities (PSDs). B – eigenmode amplitudes. C – complex relations between L2e and other populations at 60 Hz; arrows – projections of the 1st mode onto populations; black – distribution of simulated instantaneous relations. D, E – effects of L2e->L2i weight perturbation on the 1st mode amplitude (D) and the L2i PSD (E).
Acknowledgements
The work is supported by the grants: R01 MH134118-01, RF1NS133972-01, R01DC012947-06A1, R01DC019979, ARL Cooperative Agreement W911NF-22-2-0139, P50 MH109429
References
1. Bos, H., Diesmann, M., & Helias, M. (2016). Identifying Anatomical Origins of Coexisting Oscillations in the Cortical Microcircuit. PLOS Computational Biology, 12(10), e1005132. https://doi.org/10.1371/journal.pcbi.1005132
2. Dura-Bernal, S., Suter, B. A., Gleeson, P., Cantarelli, M., Quintana, A., Rodriguez, F., Kedziora, D. J., Chadderdon, G. L., Kerr, C. C., Neymotin, S. A., McDougal, R. A., Hines, M., Shepherd, G. M., & Lytton, W. W. (2019). NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife, 8, e44494. https://doi.org/10.7554/eLife.44494
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P223: Auto-adjoint method for Spiking Neural Networks
Monday July 7, 2025 16:20 - 18:20 CEST
P223 Auto-adjoint method for Spiking Neural Networks

Thomas Nowotny*1, James C. Knight1

1School of Engineering and Informatics, University of Sussex, Brighton, UK

*Email: t.nowotny@sussex.ac.uk


Introduction
It is important for the success of neuromorphic computing and computational neuroscience to be able to efficiently train spiking neural networks (SNNs). In 2021, Wunderlich and Pehle published the Eventprop algorithm [1], which is based on the adjoint method for hybrid continuous-discrete systems [2]. Eventprop casts the backward pass, which calculates the gradient of a loss function over an SNN, into a hybrid continuous-discrete system of the same nature as the forward dynamics of the SNN. Therefore, Eventprop can be implemented efficiently on both existing SNN software simulators [3] and digital neuromorphic hardware [4].


Methods

Here, we present new work that takes Eventprop to the next level. The original Eventprop algorithm [1] was derived explicitly for the case of leaky integrate-and-fire (LIF) neurons and exponential synapses. The adjoint method for hybrid systems is much more general [2], and [5] already presents a more general set of equations. Here, we choose a level of generality that allows us to derive the general adjoint equations in a form explicit enough that the sympy symbolic math Python package can automatically generate code to simulate them.

Results
We assume that the neurons of the SNN being trained have internal dynamics described by ordinary differential equations and that their spiking condition and reset behaviour are described by functions of the neurons' variables. Finally, we assume that the action caused by an incoming spike entails adding to a neuron variable. Under these general assumptions, we derived a backward pass for adjoint variables, as in the original Eventprop, and implemented it in our mlGeNN spike-based machine learning framework [6] using sympy. We observe that, for leaky integrate-and-fire neurons and exponential synapses, the new framework has the same performance on popular benchmarks as the previous version of standard Eventprop.
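The flavor of this automation can be conveyed with a small sympy sketch: given symbolic forward dynamics, the between-spike adjoint ODEs follow mechanically from the Jacobian (LIF with exponential synapse shown; this is our illustration, not the mlGeNN code):

```python
import sympy as sp

# Forward state and adjoint symbols for a LIF neuron with exponential synapse
V, I = sp.symbols("V I")
lam_V, lam_I = sp.symbols("lambda_V lambda_I")
tau_m, tau_s = sp.symbols("tau_m tau_s", positive=True)

x = sp.Matrix([V, I])
f = sp.Matrix([(-V + I) / tau_m,   # dV/dt: leaky membrane driven by synapse
               -I / tau_s])        # dI/dt: exponentially decaying current

# Between spikes the adjoint variables obey d(lambda)/dt = -J^T lambda,
# where J is the Jacobian of the forward dynamics (loss terms omitted)
J = f.jacobian(x)
adjoint_rhs = -J.T * sp.Matrix([lam_V, lam_I])
print(sp.simplify(adjoint_rhs))   # expressions from which backward-pass
                                  # simulation code can be generated
```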


Discussion

We have created a new version of mlGeNN that, based on the generalised Eventprop method presented here, allows researchers to rapidly train SNNs with virtually any neuron dynamics using gradient descent with exact gradients. This includes more complex dynamics, such as Hodgkin-Huxley conductance-based models, opening new avenues for injecting function into computational neuroscience models. This new capability is akin to the auto-diff functionality of PyTorch, which has been instrumental in the recent AI revolution.



Acknowledgements
This work was partially funded by the EPSRC, grants EP/V052241/1 and EP/S030964/1.
References
[1] Wunderlich, T. C., & Pehle, C. (2021). Scientific Reports, 11(1), 12829.
[2] Galán, S., Feehery, W. F., & Barton, P. I. (1999). Appl. Num. Math., 31(1), 17-47.
[3] Nowotny, T., Turner, J. P., & Knight, J. C. (2025). Neurom. Comput. Eng., 5(1), 014001.
[4] Béna, G., Wunderlich, T., Akl, M., Vogginger, B., Mayr, C., & Gonzalez, H. A. (2024). arXiv preprint arXiv:2412.15021.
[5] Pehle, C. G. (2021). Adjoint equations of spiking neural networks (Doctoral dissertation).
[6] Turner, J. P., Knight, J. C., Subramanian, A., & Nowotny, T. (2022). Neurom. Comput. Eng., 2(2), 024002.

Speakers
Thomas Nowotny

Professor of Informatics, University of Sussex
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti
 