Saturday, July 5
 

08:00 CEST

Registration
Saturday July 5, 2025 08:00 - 19:00 CEST

09:00 CEST

10:10 CEST

Coffee break
Saturday July 5, 2025 10:10 - 10:40 CEST

10:40 CEST

12:00 CEST

Lunch break
Saturday July 5, 2025 12:00 - 13:00 CEST

13:00 CEST

14:30 CEST

Coffee break
Saturday July 5, 2025 14:30 - 15:00 CEST
Passi Perduti

15:00 CEST

16:00 CEST

Welcome and Keynote #1: Konrad Kording
Saturday July 5, 2025 16:00 - 17:20 CEST
Auditorium - Plenary Room

18:30 CEST

Welcome reception
Saturday July 5, 2025 18:30 - 20:30 CEST
 
Sunday, July 6
 

08:30 CEST

Registration
Sunday July 6, 2025 08:30 - 19:00 CEST

09:00 CEST

Announcements and Keynote #2: Sara Solla
Sunday July 6, 2025 09:00 - 10:10 CEST
Auditorium - Plenary Room

10:10 CEST

Coffee break
Sunday July 6, 2025 10:10 - 10:40 CEST

10:40 CEST

Oral session 1: The building blocks of AI
Sunday July 6, 2025 10:40 - 12:30 CEST
Auditorium - Plenary Room

10:41 CEST

FO1: Hearing Music: A Shared Geometry Governs the Trade-off Between Reliability and Complexity in the Neural Code
Sunday July 6, 2025 10:41 - 11:10 CEST
Hearing Music: A Shared Geometry Governs the Trade-off Between Reliability and Complexity in the Neural Code

Pauline G. Mouawad*1, Shievanie Sabesan1, Alinka E. Greasley2, Nicholas A. Lesica1
1The Ear Institute, University College London, London, UK
2School of Music, University of Leeds, Leeds, UK


*Email: p.mouawad@ucl.ac.uk





Introduction
Music is central to human culture, shaping social bonds and emotional well-being. Its unique ability to connect sensory processing with reward, emotion, and statistical learning makes it an ideal tool for studying auditory perception [1]. Previous studies have explored neural responses to speech and to simple musical sounds [2, 3], but the neural coding of complex music remains unexplored. We addressed this gap by analyzing multi-unit activity (MUA) recorded from the inferior colliculus (IC) of normal-hearing (NH) and hearing-impaired (HI) gerbils in response to a range of music types at multiple sound levels. The music types included individual stems (vocals, drums, bass, and other) as well as mixtures in which the stems were combined.
Methods
Using coherence analysis, we assessed how reliably music is encoded in the IC across repeated presentations of stimuli and the degree to which individual stems are distorted when presented in a mixture. To explore neural activity patterns at the network level, we implemented a manifold analysis using PCA. This identified the signal manifold, the subspace where reliable musical information is embedded. To model neural transformations underlying music encoding, we developed a deep neural network (DNN) capable of generating MUA from sound, providing a framework for interpreting how the IC processes music. Finally, to assess the impact of hearing loss, we conducted a comparative analysis for NH and HI at equal sound and sensation levels.
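A minimal sketch of the two analyses described above, split-half coherence across repeated presentations and a PCA-based signal-manifold estimate, assuming synthetic multi-unit activity and illustrative parameters (sampling rate, window length) in place of the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, n_trials, n_units, n_time = 1000, 20, 64, 5000
drive = 1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * np.arange(n_time) / fs)
mua = rng.poisson(2.0 * drive, size=(n_trials, n_units, n_time)).astype(float)

# Reliability: coherence between the averages of two disjoint halves of trials.
half_a = mua[::2].mean(axis=0)
half_b = mua[1::2].mean(axis=0)
f, cxy = coherence(half_a, half_b, fs=fs, nperseg=512, axis=-1)
reliability = cxy.mean(axis=-1)                # broadband value per unit

# Signal manifold: PCA of the trial-averaged population response; the leading
# components span the subspace carrying the repeatable (signal) variance.
signal = mua.mean(axis=0)                      # units x time
centered = signal - signal.mean(axis=1, keepdims=True)
svals = np.linalg.svd(centered, compute_uv=False)
var_ratio = svals**2 / np.sum(svals**2)
dim_90 = int(np.searchsorted(np.cumsum(var_ratio), 0.90)) + 1

print(f"mean split-half coherence: {reliability.mean():.3f}")
print(f"signal-manifold dimensionality (90% variance): {dim_90}")
```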
Results
We identified strong nonlinear interactions between stems, affecting both the reliability and geometry of neural coding. The reliability of the responses and the dimensionality of the signal manifold varied widely across music types. With increasing musical complexity, the dimensionality of the signal manifold increased while the reliability decreased. The leading modes in the signal manifold were reliable and shared across all music types, but as musical complexity increased, new neural modes emerged that were increasingly unreliable (Figure 1). Our DNN successfully synthesized MUA from music with high fidelity. After hearing loss, neural coding was strongly distorted at equal sound level, but these distortions were largely corrected at equal sensation level.
Discussion

Music processing in the early auditory pathway involves nonlinear interactions that shape the neural representation in complex ways. The signal manifold contains a fixed set of leading modes that are invariant across music types. As music becomes more complex the manifold is not reconfigured; instead, new, less reliable modes are added. These new modes reflect a fundamental trade-off between fidelity and complexity in the neural code. The fact that suitable amplification restores near-normal neural coding suggests that mild-to-moderate hearing loss primarily affects audibility rather than the brainstem’s capacity to process music.
Figure 1. Complexity and Reliability in the Latent Space
Acknowledgements
Funding for this work was provided by the UK Medical Research Council through grant MR/W019787/1.
References
1. Juslin, P. N., & Västfjäll, D. Emotional responses to music: The need to consider underlying mechanisms. https://doi.org/10.1017/S0140525X08005293
2. Rajendran, V. G., et al. Midbrain adaptation may set the stage for the perception of musical beat. Proceedings of the Royal Society B: Biological Sciences, 284(1866), 20171455 (2017). https://doi.org/10.1098/rspb.2017.1455
3. Sabesan, S., et al. Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss. eLife, 12, e85108 (2023). https://doi.org/10.7554/eLife.85108


Auditorium - Plenary Room

11:10 CEST

O1: Balancing Stability and Flexibility: Dynamical Signatures of Learning in In-Vitro Neuronal Networks
Sunday July 6, 2025 11:10 - 11:30 CEST
Balancing Stability and Flexibility: Dynamical Signatures of Learning in In-Vitro Neuronal Networks

Forough Habibollahi*1, Brett J. Kagan1

1Cortical Labs, Melbourne, Australia


*Email: forough@corticallabs.com

Introduction

CL1 is a novel system that bridges biological intelligence and adaptive neuronal traits by integrating in-vitro neuronal networks with in-silico computational elements using micro-electrode arrays (MEAs) [1]. These cultivated neuronal ensembles demonstrate self-organized, biological adaptive intelligence in dynamic gaming environments via closed-loop stimulation and concurrent recordings. While in-vitro neuronal networks have been shown to achieve real-time adaptive learning, the underlying network dynamics enabling this learning remain underexplored.


Methods
We investigated pairwise causal relationships between recorded channels using Granger causality analysis [2], reconstructing a connectivity network from statistically significant causal interactions. The most influential/influenced nodes were identified as those with highest outgoing/incoming connections. To explore dynamic properties, we reconstructed the phase space of the spiking time series from all recorded channels using state-space reconstruction [3]. Optimal embedding dimensions were determined by minimizing false nearest neighbors, while time delays were selected by detecting the first local minimum of mutual information across different delays. Recurrence plots were generated from the reconstructed phase spaces to analyze temporal patterns.
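The state-space reconstruction steps described here follow a standard recipe that can be sketched as below; the toy signal, bin counts, and recurrence threshold are assumptions for illustration, and the embedding dimension would in practice come from the false-nearest-neighbors criterion:

```python
import numpy as np
from scipy.spatial.distance import cdist

def lagged_mutual_info(x, lag, bins=16):
    """Histogram estimate of MI between x(t) and x(t + lag)."""
    joint, _, _ = np.histogram2d(x[:-lag], x[lag:], bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

def first_minimum(values):
    for i in range(1, len(values) - 1):
        if values[i] < values[i - 1] and values[i] < values[i + 1]:
            return i + 1                       # lags start at 1
    return len(values)

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

t = np.arange(0, 60, 0.05)
x = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(len(t))

mi = [lagged_mutual_info(x, lag) for lag in range(1, 80)]
tau = first_minimum(mi)                        # first minimum of lagged MI
emb = delay_embed(x, dim=3, tau=tau)           # dim assumed; FNN in practice

d = cdist(emb, emb)                            # pairwise distances in phase space
recurrence = d < 0.1 * d.max()                 # thresholded recurrence matrix
print(f"tau = {tau} samples, recurrence rate = {recurrence.mean():.3f}")
```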
Results
We analyzed 45-minute spiking recordings at 25 kHz from 23 neuronal cultures, comprising 111 rest sessions and 133 gameplay sessions. Across both rest and gameplay conditions, we observed distinct dynamic patterns between “influential” and “influenced” nodes. Overall, the gameplay sessions exhibited higher recurrence (RR) and determinism (DET) compared to rest (Fig. 1a). However, in both conditions, the “influenced” nodes displayed lower RR and more negative Lyapunov exponents, indicative of more ordered behavior that lies farther from the edge of chaos. In contrast, the most influential nodes showed higher RR, reflecting recurrent and cyclic dynamics, and had small negative Lyapunov exponents, consistent with behavior near the edge of chaos (Fig. 1b).
Discussion
Our findings reveal a functional dichotomy in in-vitro neuronal networks. Influential channels exhibit cyclic behavior near the edge of chaos, marked by high RR and near-zero negative Lyapunov exponents, balancing order and chaos. These “near-chaotic” nodes drive network dynamics, enabling rapid influence and adaptability.

In contrast, influenced channels remain more ordered, with lower recurrence and more negative Lyapunov exponents, suggesting stable responsiveness.
This interplay between near-chaotic drivers and stable receivers enables neuronal cultures to balance robustness with adaptability. By defining how distinct dynamical states interact, our results shed light on coordinated neuronal activity and the role of near-chaotic dynamics in flexible behavior.



Figure 1. a) Comparison of dynamic metrics between rest and gameplay sessions. Bar plots show mean values (±SEM) for recurrence rate (RR), determinism (DET), laminarity (LAM), and Lyapunov exponent across all recorded electrodes. b) Dynamic properties of influential vs. influenced nodes across rest and gameplay conditions.
Acknowledgements
F.H. and B.J.K. are employees of Cortical Labs. B.J.K. is a shareholder of Cortical Labs. B.J.K. holds an interest in patents related to this publication.
References
[1]https://doi.org/10.1016/j.neuron.2022.09.001
[2]https://doi.org/10.2307/1912791
[3]https://doi.org/10.1007/BFb0091924
Auditorium - Plenary Room

11:30 CEST

O2: Representational drift as a correlate of memory consolidation
Sunday July 6, 2025 11:30 - 11:50 CEST
Representational drift as a correlate of memory consolidation

Denis Alevi*+1,2, Felix Lundt+1, Simone Ciceri1, Kristine Heiney1, Henning Sprekeler1,2,3

1Modelling of Cognitive Processes, Technische Universität Berlin, Berlin, Germany
2Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
3Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
+Equal contribution
*Email: denis.alevi@tu-berlin.de

Introduction

Neural representations – and their population geometry – often change over time despite stable behavior, a phenomenon termed representational drift [1-4]. It is debated whether drift is driven by a random process or has a directed component, and whether it serves a computational function [5]. Systems memory consolidation is a promising candidate [6], because it predicts a temporal reorganization of neural memory engrams. However, it remains unclear how classical theories of consolidation relate to the population-level view of drift and how apparently unstructured drift could be driven by a directed consolidation process.
Methods
We present a computational model for engram dynamics under memory consolidation and explore the resulting representational drift. Assuming that engram changes are driven by reactivations, the model displays recurrent neural network (RNN)-like dynamics, but evolves on the long time scales of memory consolidation. This allows us to reinterpret common dynamical phenomena in RNNs in light of memory consolidation and relate them to experimentally observed drift. In simulation, we study how single-cell tuning curves and the geometry of neural representations change over time when not all neurons are observed, and we develop analytical results for the effect of subsampling, based on Green’s functions and random matrix theory.
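A toy linear version of this idea helps make the mechanism concrete: dynamics that rotate the engram inside the null space of a fixed readout redistribute activity across neurons (drift) while recall stays exactly constant. The construction below (sizes, rates, the projector trick) is an illustrative assumption, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
w = rng.standard_normal(n)
w /= np.linalg.norm(w)                     # fixed readout defining "behavior"

S = rng.standard_normal((n, n))
S = (S - S.T) / np.sqrt(n)                 # skew-symmetric generator (pure rotation)
P = np.eye(n) - np.outer(w, w)             # projector onto the null space of w
A = P @ S @ P                              # dynamics leaving w.x invariant

x = rng.standard_normal(n)                 # initial engram pattern
x0 = x.copy()
dt, steps = 0.005, 2000
for _ in range(steps):
    x = x + dt * A @ x                     # Euler step of dx/dt = A x

print(f"recall change: {abs(w @ x - w @ x0):.2e} (stable by construction)")
print(f"pattern correlation with day 0: {np.corrcoef(x0, x)[0, 1]:.2f} (drifted)")
```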
Results
Our model redistributes memory engrams across neural populations while maintaining stable memory recall through null-space dynamics [2]. The model can display power-law forgetting without requiring a diversity of learning rates [7]. Low-rank dynamics induce selective consolidation and semantization. In line with experimental findings on representational drift, individual neurons exhibit diverse tuning changes: stability, gradual drift, and abrupt changes of preferred stimulus. Multi-day decoders [2] reveal invariant subspaces on the full population, but degrade quickly under subsampling. A theoretical analysis shows that the dynamics of subsampled populations can be predominantly driven by the unrecorded population, which generates seemingly noise-driven dynamics.

Discussion
Our phenomenological model of engram dynamics bridges the gap between the area-centered perspective of systems consolidation and the population-level perspective of representational drift. Our results show that despite systematic population dynamics, a recorded subset of the neural population can appear to have unstructured dynamics [2]. Recent evidence for stable geometric structure during representational drift in CA1 [7] is consistent with our model of RNN-like engram dynamics, and we hypothesize that unstable population geometry [3] could also be explained by subsampling. Overall, our model offers a functional interpretation of drift as a means to redistribute engrams for improved memory retention.



Acknowledgements
Kristine Heiney is funded by a Postdoctoral Research Fellowship from the Alexander von Humboldt Foundation.
References
[1]https://doi.org/10.1016/j.cell.2017.07.021
[2]https://doi.org/10.7554/eLife.51121
[3]https://doi.org/10.1038/s41586-021-03628-7
[4]https://doi.org/10.1007/s00422-021-00916-3
[5]https://doi.org/10.1016/j.conb.2022.102609
[6]https://doi.org/10.1371/journal.pcbi.1003146
[7]https://doi.org/10.1101/2025.02.04.636428
Auditorium - Plenary Room

11:50 CEST

O3: Backpropagation through space, time and the brain
Sunday July 6, 2025 11:50 - 12:10 CEST
Backpropagation through space, time and the brain

Paul Haider*1, Benjamin Ellenberger1, Jakob Jordan1, Kevin Max1, Ismael Jaras1, Laura Kriener1, Federico Benitez1, Mihai A. Petrovici1

1Department of Physiology, University of Bern, Bern, Switzerland

*Email: paul.haider@unibe.ch
Introduction

Effective learning in the brain relies on the adaptation of individual synapses based on their relative contribution to solving a task. However, the challenge of spatio-temporal credit assignment in physical neuronal networks remains largely unsolved due to the biologically implausible assumptions of traditional backpropagation algorithms. This study aims to bridge this gap by proposing a novel framework that efficiently performs credit assignment in real-time, without violating spatio-temporal locality constraints, driven by the need for biological systems to learn continuously and interact with dynamic environments.

Methods
We introduce Generalized Latent Equilibrium (GLE), a computational framework for fully local spatio-temporal credit assignment in physical, dynamical networks of neurons. GLE is based on an energy function of neuron-local mismatch errors, from which neuronal dynamics are derived using stationarity and parameter dynamics using gradient descent principles. This framework leverages the morphology of dendritic trees and the ability of neurons to phase-shift their output rates relative to their input (see, e.g., [1]), enabling complex information processing. Additionally, the adjoint method is employed to demonstrate that our learning rules approximate gradient descent on the total integrated cost over time, effectively approximating backpropagation through time (BPTT).
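The phase-shifting ability that GLE exploits can be illustrated numerically: a leaky integrator delays its input, and the prospective readout u + tau*du/dt undoes that delay. This is a sketch under assumed parameters, not the GLE framework itself:

```python
import numpy as np

tau, dt = 0.05, 0.001
t = np.arange(0, 1, dt)
drive = np.sin(2 * np.pi * 5 * t)          # 5 Hz input drive (assumed)

u = np.zeros_like(t)                       # leaky integrator: tau*du/dt = -u + drive
for k in range(1, len(t)):
    u[k] = u[k - 1] + dt / tau * (drive[k - 1] - u[k - 1])

# Prospective readout; in continuous time u + tau*du/dt equals the drive exactly.
prospective = u + tau * np.gradient(u, dt)

def phase_deg(sig, f0=5.0, skip=200):
    """Phase of sig at f0, via projection (skipping the initial transient)."""
    ref = np.exp(-1j * 2 * np.pi * f0 * t[skip:])
    return np.degrees(np.angle(np.sum(sig[skip:] * ref)))

base = phase_deg(drive)
print(f"phase lag of retrospective u: {phase_deg(u) - base:6.1f} deg")
print(f"phase lag of prospective out: {phase_deg(prospective) - base:6.1f} deg")
```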
Results
The resulting neuronal dynamics can be interpreted as a real-time, biologically plausible approximation of backpropagation through space and time, incorporating continuous-time leaky-integrator neuronal dynamics and continuously active, phase-free, local synaptic plasticity. The corresponding equations suggest a direct mapping to cortical microcircuitry, with L2/3 pyramidal error neurons counter-posing L5/6 pyramidal representation neurons in a ladder-like fashion. We demonstrate GLE's effectiveness on both spatial and temporal tasks, such as chaotic time series prediction, MNIST-1D [2], and Google Speech Commands datasets, achieving results competitive with powerful ML architectures like GRUs and TCNs trained with offline BPTT.

Discussion
This framework has significant implications for understanding biological learning processes in neural circuits and designing neuromorphic hardware. GLE is applicable to both spatial and temporal tasks, offering advantages over existing alternatives like BPTT and real-time recurrent learning (RTRL) in terms of efficiency and biological plausibility. The framework's locality and reliance on conventional analog components make it an attractive blueprint for efficient neuromorphic hardware. This study contributes to a deeper understanding of how physical neuronal systems can efficiently learn and process information in real-time, bridging the gap between machine learning and biological neural networks.



Acknowledgements
This work was supported by the European Union, the Volkswagen Foundation, ESKAS, and the Manfred Stärk Foundation. We also acknowledge the Fenix Infrastructure and the Insel Data Science Center for their support.
References
1. Brandt, S., Petrovici, M.A., Senn, W., Wilmes, K.A., & Benitez, F. (2024). Prospective and retrospective coding in cortical neurons. https://arxiv.org/abs/2405.14810
2. Greydanus, S., & Kobak, D. (2020). Scaling Down Deep Learning with MNIST-1D. International Conference on Machine Learning. https://arxiv.org/abs/2011.14439


Auditorium - Plenary Room

12:10 CEST

O4: Competition between memories for reactivation as a mechanism for long-delay credit assignment
Sunday July 6, 2025 12:10 - 12:30 CEST
Competition between memories for reactivation as a mechanism for long-delay credit assignment

Subhadra Mokashe*1, Paul Miller2


1Neuroscience Graduate Program, Brandeis University, Waltham, USA
2Department of Biology, Brandeis University, Waltham, USA


*Email: subhadram@brandeis.edu

Introduction
Animals learn to associate an event with its outcome, as in conditioned taste aversion, when they gain aversion to a conditioned stimulus (CS, recently experienced taste) if sickness is later induced [1]. Overshadowing arises if another intervening taste (interfering stimulus, IS) gains some credit for the causality of the outcome, thereby reducing the aversion to the CS [2]. The known short-term correlational plasticity mechanisms do not wholly explain how networks of neurons achieve long-delay credit assignment. We hypothesize that reactivation of stimuli during sickness causes specific associative learning between those stimuli and the sickness, and the competition between the stimuli for reactivation could explain overshadowing.
Methods
We build a spiking recurrent network model with clustered connectivity for excitatory neurons and unstructured inhibitory feedback. We assume the recurrent strengths are enhanced at the time of stimulus presentation due to Hebbian mechanisms and then decay in time. Given that the IS is introduced after the CS, the IS ensemble has higher recurrent strength than the CS ensemble. When we simulate the network, we see reactivation of both tastes (Fig. 1A). We calculate the fraction of time the network spends reactivating a stimulus as a readout of association with the outcome (sickness). We vary the interstimulus interval by changing the difference in recurrent strengths (Δ) and vary the delay to sickness by varying the recurrent strengths.
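The occupancy readout can be sketched as follows, with synthetic ensemble rates and an assumed reactivation threshold standing in for the full spiking simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.001, 100.0
n = int(T / dt)

# Toy switching process: the IS ensemble, having stronger recurrence,
# wins reactivations more often. 0 = neither, 1 = CS, 2 = IS.
state = np.zeros(n, dtype=int)
for k in range(1, n):
    if rng.random() < dt * 2.0:            # occasional state switches
        state[k] = rng.choice([0, 1, 2], p=[0.3, 0.3, 0.4])
    else:
        state[k] = state[k - 1]
rate_cs = 5 + 25 * (state == 1) + rng.normal(0, 2, n)   # ensemble rates (Hz)
rate_is = 5 + 25 * (state == 2) + rng.normal(0, 2, n)

thresh = 15.0                              # reactivation threshold (Hz, assumed)
in_cs = (rate_cs > thresh) & (rate_cs > rate_is)
in_is = (rate_is > thresh) & (rate_is > rate_cs)
print(f"fraction of time in CS state: {in_cs.mean():.3f}")
print(f"fraction of time in IS state: {in_is.mean():.3f}")
```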


Results
When we look at the time spent in each state as we increase Δ, we see that not only does the time spent in the IS state increase, but the time spent in the CS state decreases (Fig. 1B). Although we only changed the recurrent strengths of the IS ensemble, the time spent in the CS ensemble was affected, indicating competition between the memories for reactivation, which accounts for overshadowing. When the CS-to-IS interval is held constant, paradoxically, more conditioning to the CS results from a later sickness onset than from an earlier one [2]. We can explain this result via greater time spent in the CS state (Fig. 1D) with an appropriate decay profile of the recurrent weights (Fig. 1C), such that the reduced overshadowing outweighs the reduction in conditioning with increased delay.


Discussion
How actions are associated with delayed outcomes is not well understood. We explore the reactivation of memories as a mechanism for long-delay credit assignment in conditioned taste aversion (CTA). We show that competition between memories for reactivation could explain how credit is assigned when there is ambiguity about the cause of an outcome. We use theoretical predictions to constrain our model and are able to explain experimental findings for overshadowing [2]. This study could explain credit assignment not only in CTA and overshadowing but also in other forms of long-delay learning and provide insights into how credit is assigned when there is ambiguity in the cause of an outcome.



Figure 1. A. Reactivation of the stimuli. B. Fraction of time spent by the network in stimuli states as a function of Δ. C. Time spent in the CS state as a function of the recurrent strength Δ, specific decay profile of the recurrent weights (red line). D. Rebound seen in the time spent in the CS state as a function of delay to the sickness onset only in the presence of the IS (red line).
Acknowledgements

We acknowledge Donald Katz and Hannah Germaine for discussions about the work. We thank NIH, NINDS for funding via R01 NS104818.
References

[1] https://doi.org/10.1037/h0029807
[2] https://doi.org/10.3758/s13420-016-0246-x


Auditorium - Plenary Room

12:30 CEST

Lunch break
Sunday July 6, 2025 12:30 - 14:00 CEST

12:30 CEST

Program Committee meeting
Sunday July 6, 2025 12:30 - 14:00 CEST
TBA

14:00 CEST

Oral session 2: Neuromodulation
Sunday July 6, 2025 14:00 - 15:50 CEST
Auditorium - Plenary Room

14:01 CEST

FO2: Global brain dynamics modulates local scale-free neuronal activity
Sunday July 6, 2025 14:01 - 14:30 CEST
Global brain dynamics modulates local scale-free neuronal activity

Giovanni Rabuffo*1,2, Pietro Bozzo1, Marco Pompili1, Damien Depannemeacker1, Bach Nguyen2, Tomoki Fukai2, Pierpaolo Sorrentino1, Leonardo Dalla Porta3

1Institut de Neurosciences des Systèmes (INS), Aix Marseille University, Marseille, France
2Okinawa Institute for Science and Technology (OIST), Okinawa, Japan
3Institute of Biomedical Investigations August Pi i Sunyer (IDIBAPS), Systems Neuroscience, Barcelona, Spain

*Email: giovanni.rabuffo@univ-amu.fr

Introduction

The brain's ability to balance stability and flexibility is thought to emerge from operating near a critical state [1]. In this work we address two major gaps of the “brain criticality hypothesis”:
First, local (between neurons) and global (between brain regions) criticality are often investigated independently, and a unifying framework is lacking.
Second, local neuronal populations do not maintain a strictly critical state but rather fluctuate around it [2]. The mechanisms underlying these fluctuations remain unclear.
To bridge these gaps, we introduce a connectome-based model that allows for a simultaneous assessment of local and global criticality (Fig.1). We demonstrate that long-range structural connectivity shapes global critical dynamics and drives the fluctuations of each brain region around a local critical state.
Methods
Decoupled brain regions are described by a mean-field model [3] which exhibits avalanche-like dynamics under stochastic input (Fig. 1, Blue). Brain regions are connected via the Allen Mouse Connectome [4], and simulations are performed for different values of the global coupling parameter [5]. Simulated data consist of fast LFP and slow BOLD signals (Fig. 1, Red). The model results are validated against empirical datasets (Fig. 1, Gray), including a mouse fMRI dataset [6] and LFP recordings from the Allen Neuropixels dataset [7]. To quantify the fluctuations around criticality, we identified neuronal avalanches as deviations of the local LFP signals below a fixed threshold (Fig. 1, Blue) and measured their sizes (area under the curve) and durations (time to return within threshold). The magnitude of the fluctuations around criticality is assessed by analyzing the variance of the range of avalanche sizes across 2-s-long epochs.
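The avalanche extraction lends itself to a compact sketch: epochs where the signal dips below a fixed threshold, with size as the area between signal and threshold and duration as the time to return. The toy signal and threshold below are stand-ins, not the study's analysis parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000.0
lfp = rng.standard_normal(60_000)        # 60 s of toy "LFP"
lfp[0] = lfp[-1] = 0.0                   # ensure epochs are well delimited
thr = -2.0                               # fixed threshold (assumed)

below = lfp < thr
edges = np.flatnonzero(np.diff(below.astype(int)))
starts, ends = edges[::2] + 1, edges[1::2] + 1        # epoch boundaries
sizes = np.array([(thr - lfp[s:e]).sum() / fs for s, e in zip(starts, ends)])
durations = (ends - starts) / fs

print(f"{len(sizes)} avalanches detected")
print(f"size range within epoch: {sizes.max() / sizes.min():.1f}x, "
      f"max duration: {durations.max() * 1e3:.0f} ms")
```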
Results
For low global coupling, individual brain regions maintain local criticality (Fig. 1, Blue) but remain globally desynchronized. Increasing coupling induces spontaneous long-range synchronization, paralleled by local fluctuations around criticality (Fig. 1, Red). Notably, the working point where the simulations match the experiments corresponds to the regime with the largest range of avalanche sizes and durations (Fig. 1, Grey). Strongly connected regions exhibit greater fluctuations around criticality, a testable prediction of the model. To verify this, we examined Allen Mouse Brain Atlas ROIs with LFP data and found a significant correlation between empirical critical fluctuations and regional structural connectivity properties (Fig. 1, Green).
Discussion
Our results, comparing brain simulations and empirical datasets across scales, support the brain criticality hypothesis and suggest that criticality is not a static regime for a local neuronal population, but it is dynamically up- and down- regulated by large-scale interactions.



Figure 1. (Blue) Local neural mass model displays critical-like avalanche dynamics. (Red) Coupling brain regions via the empirical Allen structural connectivity we simulate fast LFP and slow BOLD global dynamics. (Gray) Simulated LFP displays global critical activity and simulated BOLD data matches fMRI experiments. (Green) The fluctuations around criticality correlate with structural in-strength.
Acknowledgements
We thank the Institut de Neurosciences des Systèmes (INS), Marseille, France, and the Okinawa Institute for Science and Technology, Japan for their generous support and sponsorship of this research. Their contributions have been instrumental in advancing our understanding of brain criticality and its implications.

References
[1] O’Byrne, J., & Jerbi, K. (2022) https://doi.org/10.1016/j.tins.2022.08.007
[2] Fontenele, A. J., et al. (2019) https://doi.org/10.1103/physrevlett.122.208101
[3] Buendía, V., et al., (2021) https://doi.org/10.1103/physrevresearch.3.023224
[4] Oh SW, et al. (2014) https://doi.org/10.1038/nature13186
[5] Melozzi F, et al. (2017) https://doi.org/10.1523/eneuro.0111-17.2017
[6] Grandjean, J., et al. (2023). https://doi.org/10.1038/s41593-023-01286-8
[7] https://allensdk.readthedocs.io/en/latest/visual_coding_neuropixels.html
Auditorium - Plenary Room

14:30 CEST

O5: Acetylcholine Waves and Dopamine Release in the Striatum: A Reaction-Diffusion Mechanism
Sunday July 6, 2025 14:30 - 14:50 CEST
Acetylcholine Waves and Dopamine Release in the Striatum: A Reaction-Diffusion Mechanism

Lior Matityahu¹, Naomi Gilin¹, Gideon A. Sarpong², Yara Atamna¹, Lior Tiroshi¹, Nicolas X. Tritsch³, Jeffery R. Wickens², Joshua A. Goldberg¹*
¹Department of Medical Neurobiology, Institute of Medical Research Israel-Canada, The Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
²Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
³Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
*Email: joshua.goldberg2@mail.huji.ac.il

Introduction

Striatal dopamine (DA) encodes reward and exhibits traveling waves across the mediolateral axis during behavior. However, the mechanism generating these patterns remains unknown. Cholinergic interneurons (CINs) modulate DA release through nicotinic acetylcholine receptors (nAChRs) on DA terminals. We hypothesized that reciprocal interactions between CINs and DA axons might underlie wave generation. Here, we investigated whether acetylcholine (ACh) exhibits wave-like activity, whether nAChRs extend DA release spatial scale, and whether a reaction-diffusion framework can explain these waves' emergence from local interactions.

Methods
We imaged ACh sensors (GRAB-ACh3.0, iAChSnFR) in the dorsal striatum of head-fixed mice through cranial windows and GRIN lenses. To test whether nAChRs extend DA release, we expressed GRAB-DA2m in striatal DA axons and measured electrically-evoked DA release at increasing distances with and without the nAChR antagonist mecamylamine. We combined patch-clamp recordings of individual CINs with two-photon imaging of GRAB-DA2m to test if single CINs trigger DA release. We developed and analyzed activator-inhibitor reaction-diffusion models of CIN-DA interactions, exploring how parameters influence wave behavior.
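A generic activator-inhibitor medium of the kind invoked here (FitzHugh-Nagumo form, not the authors' specific CIN-DA equations) already produces traveling waves from a local stimulus; the sketch below uses assumed parameters purely for illustration:

```python
import numpy as np

nx, dx, dt = 400, 0.05, 0.001
Da, eps, beta = 1.0, 0.02, 0.8           # diffusion, time-scale ratio, coupling
a = np.full(nx, -1.2)                    # activator field at its resting state
h = np.full(nx, -0.625)                  # inhibitor at rest
a[nx // 2 - 5: nx // 2 + 5] = 1.0        # brief local stimulus in the middle

def laplacian(u):
    # periodic boundaries for simplicity
    return (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2

widths = []
for step in range(9000):
    da = a - a**3 / 3 - h + Da * laplacian(a)    # FitzHugh-Nagumo kinetics
    dh = eps * (a + 0.7 - beta * h)
    a, h = a + dt * da, h + dt * dh
    if step % 3000 == 2999:
        active = np.flatnonzero(a > 0.0)
        widths.append((active.max() - active.min()) * dx if active.size else 0.0)

# The excited region spreads outward at a constant speed: a traveling wave.
print("extent of excited tissue over time:", [f"{w:.1f}" for w in widths])
```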

Results
We observed ACh waves propagating primarily lateral-to-medial at velocities of ±10 mm/s. Mecamylamine reduced DA release spatial scale by approximately 50% (from ~532 µm to ~264 µm). Action potentials in individual CINs induced local DA release. We will present novel in vivo data showing that chemogenetic silencing of CINs reduces the spatial scale of ongoing DA release events in awake mice, directly confirming CINs' role in extending DA release. Our modeling demonstrated that CIN-DA interactions form an activator-inhibitor system generating traveling waves. Phase-nullcline-flow analysis (Fig. 1) revealed that wave properties depend on system parameters, explaining directional biases in behavioral contexts.

Discussion
Our findings provide evidence for striatal ACh waves and establish that local CIN-DA fiber interactions drive endogenous traveling waves. The new in vivo data showing CINs extend DA release validates our model's core assumption. The reaction-diffusion framework explains how waves emerge from local axo-axonal interactions without external pacemakers. Our model predicts: strongly coupled DA-ACh waves, nAChR blockade compromising wave propagation, and interneuron activity influencing wave direction. This mechanism contributes to spatiotemporal coding in the striatum, with implications for reward processing, learning, and movement coordination.




Figure 1. Figure 1. Phase-nullcline-flow analysis of the activator-inhibitor model. (a) Nullclines and flow field showing fixed points. (b) The direction of wave propagation depends on the area between nullclines. β values control the coupling strength between CINs and DA axons, determining whether CIN waves advance (β=1.0) or recede (β=1.8).
Acknowledgements
This work was funded by a Research Grant from the Human Frontier Science Program (RGP0062/2019), an ERC Consolidator Grant (646886), and grants from the National Institutes of Health (DP2NS105553 and R01MH130658) and Dana and Whitehall Foundations.

References
[1] Hamid, A. A., et al. (2021). Wave-like dopamine dynamics as a mechanism for spatiotemporal credit assignment. Cell, 184, 2733-2749.
[2] Threlfell, S., et al. (2012). Striatal dopamine release is triggered by synchronized activity in cholinergic interneurons. Neuron, 75, 58-64.
[3] Matityahu, L., et al. (2023). Acetylcholine waves and dopamine release in the striatum. Nature Communications, 14, 6852.
[4] Liu, C., et al. (2022). An action potential initiation mechanism in distal axons for dopamine release control. Science, 375, 1378-1385.

Auditorium - Plenary Room

14:50 CEST

O6: Mathematical insights into the spatial heterogeneity of extracellular serotonin induced by the geometry and dynamics of serotonergic fibers
Sunday July 6, 2025 14:50 - 15:10 CEST
Mathematical insights into the spatial heterogeneity of extracellular serotonin induced by the geometry and dynamics of serotonergic fibers

Merlin Pelz*1, Skirmantas Janusonis2, Gregory Handy1,3

1School of Mathematics, University of Minnesota, Minneapolis, USA
2Department of Psychological and Brain Sciences, University of California, Santa Barbara, USA

*Email: mpelz@umn.edu
Introduction

All vertebrate brains, from fish to humans, contain dense meshworks of axons (fibers) that release serotonin, a key signaling molecule. The role of this massive system is poorly understood, with no analogs in current AI architectures, but it appears to support neuroplasticity. Its effects on neural networks are exerted through serotonin receptors whose activation depends on serotonin molecules in the local extracellular space. Recent studies have revealed a lack of fundamental understanding of the spatiotemporal characteristics of extracellular serotonin [1]. In particular, its concentration may vary greatly within microscopic volumes and over short time frames. Such sustained heterogeneity may be a key feature of the plastic brain.
Methods
To investigate how the geometry of the spatial arrangement of release/reuptake sites (i.e., fiber varicosities [2,3]; Fig. 1(a), (b)) and the timing of release shape serotonin concentrations in microscopic brain volumes, we extend previous work [4] and consider a 2D compartmental-reaction diffusion system that is analytically tractable. Each varicosity is modeled as a small disk where the kinetics of serotonin release and uptake (adapted from [5]) are implemented. The disks interact with the surrounding diffusive space through an infinitely permeable boundary (Fig. 1(c), (d)). This system can be rigorously reduced to an integro-ordinary-differential system that can be numerically solved efficiently.
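For intuition, a brute-force grid version of such a system can be sketched: serotonin diffusing in 2D while small varicosity sites release it and remove it with saturable kinetics. The authors' treatment is analytical; this explicit simulation, with assumed parameters and random site positions, only illustrates the persistent near/far heterogeneity:

```python
import numpy as np

n, dx, dt = 100, 1.0, 0.1                # grid spacing and time step (assumed units)
D, vmax, km, q = 0.5, 0.5, 0.5, 0.1      # diffusion, uptake Vmax, Km, release rate
c = np.zeros((n, n))                     # extracellular serotonin concentration

rng = np.random.default_rng(5)
sites = rng.integers(5, n - 5, size=(30, 2))   # random varicosity positions
rows, cols = sites[:, 0], sites[:, 1]

for _ in range(20000):
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    c = c + dt * D * lap                 # diffusion (periodic boundaries)
    # Release minus Michaelis-Menten uptake, both localized at varicosities:
    c[rows, cols] += dt * (q - vmax * c[rows, cols] / (km + c[rows, cols]))
    np.clip(c, 0.0, None, out=c)

near = c[rows, cols].mean()
print(f"near varicosities: {near:.3f}, far-field median: {np.median(c):.3f}")
```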
Results
Our system highlights precise coupling terms across varicosities that capture the diffusive memory dependence and global coupling and can be solved using arbitrary serotonin reaction kinetics at the varicosities. Using biologically realistic parameters, we observe that the serotonin concentration exhibits large temporal and spatial variation near varicosities, while regions farther away stabilize to a concentration that depends on the surrounding varicosity density (Fig. 1(e), (f), (g)). We are currently investigating the dependence of the serotonin concentration on the spatial distribution of varicosities (with fibers forming a regular lattice, fibers as stochastic paths [6], etc.).
Discussion
Neural tissue shows many features of criticality [7]. While some heterogeneities on the microscopic scale are due to noise which is not amplified by the brain, other heterogeneities may be actively maintained to support phase transitions and symmetry-breaking/pattern formation. In particular, it may be important in cortical oscillations, wakefulness-sleep transitions (e.g., no firing in REM sleep), and neuroplasticity (e.g., some psychedelics act on the serotonergic system with long-lasting therapeutic effects for some mental disorders). Further, our work will extend current reaction-diffusion pattern formation theory if nontrivial symmetry-breaking and oscillatory synchronization properties are found in this one-diffusing-species system.



Figure 1. a,b: Serotonergic fibers of a mouse brain with varicosities in dark red and light green (scale bars: 1μm (a), 5μm (b)). c: Mathematical system with well-mixed cyan varicosity neighborhoods and blue diffusing serotonin molecules (concentration). d: Zoomed into a single varicosity neighborhood. e-g: Numerical solutions for different varicosity and thus fiber arrangements (bright ~ high, dark ~ low).
Acknowledgements
-
References
[1] https://doi.org/10.1111/jnc.15865
[2] https://doi.org/10.1101/2023.11.25.568688
[3] https://doi.org/10.3389/fnins.2022.994735
[4] https://doi.org/10.48550/arXiv.2409.00623
[5] https://doi.org/10.1016/j.bpj.2021.03.021
[6] https://doi.org/10.3389/fncom.2023.1189853
[7] https://doi.org/10.1016/j.tins.2022.08.007


Auditorium - Plenary Room

15:10 CEST

O7: Mechanisms of neurotransmitter driven depolarization in perisynaptic astrocytic processes
Sunday July 6, 2025 15:10 - 15:30 CEST
Mechanisms of neurotransmitter driven depolarization in perisynaptic astrocytic processes

Ryo J. Nakatani*1 and Erik De Schutter1

1Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, Japan

*Email: ryo.nakatani@oist.jp

Introduction


Electrophysiological properties of cells underlie the fundamental mechanisms of the brain. Although astrocytes have typically been considered electrically non-excitable, recent studies show depolarization of astrocytes induced by local extracellular potassium changes [1]. Interestingly, astrocytic depolarization is induced within the periphery of cortical somatosensory astrocytes, proposed to be at contact sites between neurons and astrocytes. Astrocytic depolarization is thought to affect the brain’s information processing, as depolarization alters astrocyte neurotransmitter uptake [1, 2]. However, the specific mechanisms causing astrocytic depolarization have yet to be confirmed due to the limitations of experimental techniques.

Methods
Therefore, we aimed to construct a computational whole-cell astrocyte model to assess which channels are responsible for astrocyte depolarization. Our model included channels known to depolarize astrocytes, such as Kir4.1, GLT-1, and GABAAR, as well as other channels we hypothesized to depolarize the astrocyte, such as NMDAR (Fig. 1, top) [1, 3]. The model used a protoplasmic hippocampal astrocyte morphology [4], analogous to a cortical astrocyte, capturing both the soma and fine processes. Our model was also sensitive to extracellular ions, simulating changes in reversal potential at different locations. This allowed us to create a simplified but accurate astrocyte model, responsive to neuronal activity.
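The potassium-versus-receptor comparison can be caricatured with a single-compartment steady-state calculation: a Kir-like conductance whose reversal follows the Nernst potential of [K+]o, plus an optional receptor-like inward current. All conductances below are assumed round numbers, so only the qualitative contrast matters; the actual model has many more conductances and full spatial structure:

```python
import numpy as np

g_kir = 5e-9                 # Kir-like conductance (S), assumed
g_rec = 1e-9                 # receptor conductance when active (S), assumed
E_rec = 0.0                  # mixed-cation receptor reversal (V)
RT_F = 26.7e-3               # RT/F at ~37 C (V)
K_in = 130.0                 # intracellular K+ (mM), assumed

def v_rest(K_out, receptor_on):
    """Steady-state voltage where Kir and receptor currents cancel."""
    E_k = RT_F * np.log(K_out / K_in)          # Nernst potential of K+
    g_r = g_rec if receptor_on else 0.0
    return (g_kir * E_k + g_r * E_rec) / (g_kir + g_r)

v0 = v_rest(3.0, False)                        # baseline at 3 mM [K+]o
for K_out, rec in [(10.0, False), (20.0, False), (3.0, True)]:
    dv = (v_rest(K_out, rec) - v0) * 1e3
    print(f"[K+]o = {K_out:4.1f} mM, receptor {'on ' if rec else 'off'}: "
          f"depolarization {dv:+5.1f} mV")
```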
Results
Our simulations show that depolarization by potassium uptake alone is unphysiological, requiring ~20 mM of extracellular potassium at physiological channel densities. However, the model reached the experimentally observed 20 mV depolarizations in peripheral astrocytes when neurotransmitter receptors were activated. Different neurotransmitter receptors produced different decay dynamics, as well as different channel densities required to achieve the experimental depolarization amplitudes. Depolarization in our model was mainly driven by the inward current through these receptors, which also induced small outward potassium currents and a local increase in extracellular potassium concentration (Fig. 1, bottom). All observed ion/potential changes were spatially confined.

Discussion




We hypothesize that the strong attenuation, arising from high membrane conductance and the lack of voltage-dependent sodium channels, is key to isolating responses to local synapses. Moreover, our models show how both excitatory and inhibitory neurotransmitters can contribute to peripheral astrocytic depolarizations, revealing a possible mechanism by which astrocytes control synaptic efficacy through a local increase of extracellular potassium (Fig. 1, bottom). Inter-synapse communication via astrocytes may also be possible, with inhibitory neurotransmitter-induced depolarization altering diffusion dynamics in adjacent excitatory synapses. These insights suggest new mechanisms of how learning and memory are locally regulated by astrocytic processes.

Figure 1. Figure 1. Top: Schematic of whole-cell astrocyte computational model. Color scale show membrane potential during depolarization. Cartoon depicts channels in the computational model. Bottom: Voltage and currents recorded in PAP. (Left) A comparison of membrane potentials measured in sections marked within morphology. (Right) Individual currents recorded in the PAP for both GABAAR and NMDAR.
Acknowledgements
This research has been funded by OISTGU and by JSPS KAKENHI grant number 24KJ2184.
References
[1] Armbruster, M. et al. (2022). Neuronal activity drives pathway-specific depolarization of peripheral astrocyte processes. Nature Neuroscience, 25(5).
[2] O’Kane, R. L. et al. (1999). Na+-dependent glutamate transporters of the blood-brain barrier: a mechanism for glutamate removal. Journal of Biological Chemistry, 274(45).
[3] MacVicar, B. A. et al. (1989). GABA-activated Cl- channels in astrocytes of hippocampal slices. Journal of Neuroscience, 9(10).
[4] Savtchenko, L. P. et al. (2018). Disentangling astroglial physiology with a realistic cell model in silico. Nature Communications, 9(1).




Auditorium - Plenary Room

15:30 CEST

O8: The role of gain neuromodulation in layer-5 pyramidal neurons
Sunday July 6, 2025 15:30 - 15:50 CEST
The role of gain neuromodulation in layer-5 pyramidal neurons

Alejandro Rodriguez-Garcia*1, Christopher J. Whyte2, Brandon R. Munn2, Jie Mei3,4,5, James M. Shine2, Srikanth Ramaswamy1,6


1Neural Circuits Laboratory, Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
2Brain and Mind Center, The University of Sydney, Sydney, Australia, Center for Complex Systems, The University of Sydney, Sydney, Australia
3IT:U Interdisciplinary Transformation University Austria, Linz, Austria
4International Research Center for Neurointelligence, The University of Tokyo, Tokyo, Japan
5Department of Anatomy, University of Quebec in Trois-Rivieres, Trois-Rivieres, QC, Canada
6Theoretical Sciences Visiting Program (TSVP), Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan

*Email: a.rodriguez-garcia2@newcastle.ac.uk


Introduction
Layer-5 pyramidal neurons exhibit BAC firing, where distal dendritic inputs coincide with somatic backpropagating action potentials (BAPs) to trigger Ca²⁺ spikes, converting isolated spikes into bursts and increasing gain [1]. This mechanism is essential for cognitive functions like attention and perceptual shifts [2, 3]. The ascending arousal system flexibly reconfigures neuronal activity during perceptual shifts while maintaining network stability [4, 5]. Here, we explore the role of gain neuromodulation in learning using a biophysically plausible network of layer-5 pyramidal neurons with dendritic-targeting somatostatin (SOM) and somatic-targeting parvalbumin (PV) interneurons.

Methods
We developed a two-compartment Izhikevich neuron model with separate somatic and apical dendritic compartments. The apical dendritic compartment is a 2D nonlinear system governing Ca²⁺ spike generation [3, 6]. The somatic and dendritic compartments are coupled so that somatic sodium spikes trigger BAPs, while dendritic plateau potentials switch somatic activity from regular spiking to bursting. This shift is achieved by increasing the post-spike reset voltage and reducing the spike adaptation in the somatic compartment. BAP events occur stochastically [7], controlled by a soma-apical coupling parameter. Neuromodulatory signals modulate apical drive and coupling to adjust somatic gain [8–10]. The model is embedded in a toroidal network geometry that incorporates SOM and PV interneurons. Connectivity follows a Gaussian profile [3, 4], and synapses exhibit plasticity via STDP [11].
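The plateau-dependent switch from regular spiking to bursting can be sketched with a standard single-compartment Izhikevich neuron in which an on/off plateau flag raises the reset voltage and lowers the adaptation increment; the parameters below are textbook Izhikevich values, not the authors' fitted two-compartment model:

```python
import numpy as np

def simulate(plateau, T=500, dt=0.5, I=10.0):
    # Regular-spiking parameters; a plateau raises reset c and lowers adaptation d.
    a, b = 0.02, 0.2
    c, d = (-50.0, 2.0) if plateau else (-65.0, 8.0)
    v, u, spikes = -65.0, -13.0, []
    for k in range(int(T / dt)):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)   # Izhikevich dynamics
        u += dt * a * (b * v - u)
        if v >= 30.0:                                   # spike and reset
            spikes.append(k * dt)
            v, u = c, u + d
    return np.array(spikes)

for plateau in (False, True):
    s = simulate(plateau)
    isi = np.diff(s)
    mode = "bursting" if (isi < 10).any() else "regular spiking"
    print(f"plateau={plateau}: {len(s)} spikes, "
          f"min ISI = {isi.min():.1f} ms -> {mode}")
```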

Results
Simulations demonstrate that both increased dendritic drive and enhanced somatic-apical coupling effectively elevate the gain of pyramidal neurons, likely due to hysteresis in the apical compartment that generates a transient stable state above the calcium threshold (Fig. 1B,C). In contrast, dendritic-targeted inhibition reduces gain, while somatic-targeted inhibition significantly raises the adjacent neurons’ firing threshold (Fig. 1D,E). Capturing these dynamics at the network level leads to a reconfiguration of activity, as burst-like behavior increases spike frequency and accelerates STDP weight updates, rapidly resetting the network to adapt to changing input streams.

Discussion
Our findings highlight the critical role of neuromodulatory control over pyramidal gain through a biologically informed framework [12], providing a mechanistic explanation for transitions between flexible and stable network states by evaluating its effects on STDP plasticity, in line with previous studies [2–4]. Dendritic-targeted inhibition reduces gain, while somatic-targeted inhibition raises the firing threshold, following experimental observations [13] and providing an inhibitory gating control [14]. Future work will leverage neuromodulatory signals to induce flexible, stable neural processing for adaptive learning in biological and neuromorphic systems.




Figure 1. Study of layer-5 neurons with PV and SOM inhibition. (A) Schematic of the model in isolation. (B) Hysteresis in the apical compartment induced by increasing apical drive. (C) Gain enhancement resulting from soma-apical coupling. (D) Gain reduction achieved through dendritic-targeted inhibition. (E) Elevation of the firing threshold via somatic-targeted inhibition.
Acknowledgements
This work was supported by the Lister Institute Prize Fellowship to S.R.; Newcastle University Academic Track (NUAcT) Fellowship to S.R.; NUAcT PhD studentship to A.R-G. J.M. acknowledges support from the Japan Society for the Promotion of Science (JSPS) and the Japan Science and Technology Agency (JST). J.M.S. was supported by the National Health and Medical Research Council (GNT1193857).


References

[1] https://doi.org/10.1093/cercor/bhh065
[2] https://doi.org/10.7554/eLife.93191.2
[3] https://doi.org/10.1101/2023.07.13.548934
[4] https://doi.org/10.1038/s41467-023-42465-2
[5] https://doi.org/10.1098/rsfs.2022.0079
[6] https://doi.org/10.1073/pnas.1720995115
[7] https://doi.org/10.1152/jn.00800.2016
[8] https://doi.org/10.2174/157015908785777193
[9] https://doi.org/10.1016/j.neuron.2018.11.035
[10] https://doi.org/10.1016/j.celrep.2018.03.103
[11] https://doi.org/10.1162/neco.2007.19.6.1468
[12] https://doi.org/10.48550/arXiv.2407.04525
[13] https://doi.org/10.1002/phy2.67
[14] https://doi.org/10.1073/pnas.2311885121




Auditorium - Plenary Room

15:50 CEST

Coffee break
Sunday July 6, 2025 15:50 - 16:20 CEST

16:20 CEST

Live podcast with Gaute Einevoll
Sunday July 6, 2025 16:20 - 17:20 CEST
Auditorium - Plenary Room

17:20 CEST

Poster session 1
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P002: Understanding aging in terms of memory: Beyond excitation-inhibition balance
Sunday July 6, 2025 17:20 - 19:20 CEST
P002 Understanding aging in terms of memory: Beyond excitation-inhibition balance

Srishty Aggarwal*1

1Department of Physics, Indian Institute of Science, Bangalore, India, 560012

*Email: srishtya@iisc.ac.in


Introduction
Recently, non-linear dynamical techniques like Higuchi’s fractal dimension (HFD) have gained prominence for understanding neural complexity. We previously demonstrated that HFD increased with aging and was inversely dependent on changes in power and slope of the power spectral density (PSD) [1]. However, findings regarding changes in HFD with aging are inconsistent in the literature [1, 2], making their interpretation in terms of neural mechanisms ambiguous. Moreover, while the age-related reduction in PSD slope and gamma-band power (30-70 Hz) indicated a shift towards lesser inhibition with aging [3], the reason for the slowing of the gamma center frequency with aging is not clear. These findings emphasize the need for a theoretical model that extends beyond excitation-inhibition (E-I) balance to explain HFD and aging.

Methods
We propose a two-parameter model based on stochastic fractional differentiation that exhibits power-law scaling and long-range dependencies, important characteristics of neurophysiological signals. In this model, one parameter governs the E-I balance, while the other, the order of differentiation, captures the influence of past states. A decrease in the order of differentiation indicates an increased weighting of past memory states, which could be the effect of changes in long-term plasticity.
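A generic member of this model family can be sketched by simulating the stochastic fractional difference equation (1 - B)^alpha x_t = eps_t with truncated Grunwald-Letnikov weights: the order alpha sets how strongly past states are weighted and appears directly as the power-spectral slope. The construction and parameters below are illustrative, not the authors' two-parameter model:

```python
import numpy as np
from scipy.signal import welch

def frac_integrated_noise(alpha, n=20000, trunc=500, seed=0):
    rng = np.random.default_rng(seed)
    w = np.empty(trunc)
    w[0] = 1.0
    for k in range(1, trunc):          # binomial weights of (1 - B)^alpha
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(n):                 # x_t = eps_t - sum_k w_k x_{t-k}
        hist = x[max(0, t - trunc + 1):t][::-1]
        x[t] = eps[t] - np.dot(w[1:len(hist) + 1], hist)
    return x

for alpha in (0.3, 0.6, 0.9):
    x = frac_integrated_noise(alpha)
    f, p = welch(x, fs=1.0, nperseg=4096)
    keep = (f > 0.005) & (f < 0.1)     # band where the power law holds
    slope = np.polyfit(np.log10(f[keep]), np.log10(p[keep]), 1)[0]
    print(f"alpha = {alpha}: PSD slope ~ {slope:.2f} (theory: {-2 * alpha:.1f})")
```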
Results

The model shows that the order of differentiation is inversely related to HFD. Thus, the previously observed increase in HFD with aging is due to greater memory accumulation over time in the elderly population. Further, it shows that memory accumulation, not just a change in E-I balance, is the primary reason for the age-related reduction in stimulus-induced gamma power, the decrease in gamma center frequency [3], and the flattening of spectral slopes at low frequency [4]. Our model successfully accounts for the observed changes in HFD across different stimulus conditions, including transients and sustained oscillations. It also reproduces the observed dependence of HFD on both peak power and spectral slope. Additionally, it offers a unified framework that simultaneously captures changes in oscillatory peaks and slopes, an advance over previous models that typically address only one of these aspects.

Discussion

The present model highlights the presence of two components of neural activity: memory and E-I balance. By demonstrating that these components contribute differently to brain dynamics, our findings provide a new perspective on how neural complexity evolves with aging and stimulus-driven processes. The model’s simplicity in terms of its parameter space and its ability to explain a wide range of empirical findings make it a promising framework for unravelling the intricate mechanisms of brain function.




Acknowledgements
I would like to thank my advisors Prof. Banibrata Mukhopadhyay, Department of Physics and Prof. Supratim Ray, Centre for Neuroscience for useful discussions and comments for the present work.
References
[1] S. Aggarwal and S. Ray, Jun. 16, 2024, bioRxiv. doi: 10.1101/2024.06.15.599168.
[2] F. M. Smits, C. Porcaro, C. Cottone, A. Cancelli, P. M. Rossini, and F. Tecchio, PLOS ONE, vol. 11, no. 2, p. e0149587, Feb. 2016, doi: 10.1371/journal.pone.0149587.
[3] D. V. P. S. Murty et al., NeuroImage, vol. 215, p. 116826, Jul. 2020, doi: 10.1016/j.neuroimage.2020.116826.
[4] S. Aggarwal and S. Ray, Cerebral Cortex Communications, vol. 4, no. 2, p. tgad011, May 2023, doi: 10.1093/texcom/tgad011.


Passi Perduti

17:20 CEST

P003: Digital Twins Enable Early Alzheimer’s Disease Diagnosis by Reconstructing Neurodegeneration Levels from Non-Invasive Recordings
Sunday July 6, 2025 17:20 - 19:20 CEST
P003 Digital Twins Enable Early Alzheimer’s Disease Diagnosis by Reconstructing Neurodegeneration Levels from Non-Invasive Recordings

Lorenzo Gaetano Amato1,2*, Michael Lassi1,2, Alberto Arturo Vergani1,2, Jacopo Carpaneto1,2, Valentina Moschini3, Giulia Giacomucci4, Benedetta Nacmias4,5, Sandro Sorbi4,5, Antonello Grippo4, Valentina Bessi4, Alberto Mazzoni1,2

1The BioRobotics Institute, Sant’Anna School of Advanced Studies, Piazza Martiri della Libertà 33, 56127, Pisa, Italy
2Department of Excellence in Robotics and AI, Sant’Anna School of Advanced Studies, Pisa, Italy, Piazza Martiri della Libertà 33, 56127, Pisa, Italy
3Skeletal Muscles and Sensory Organs Department, Careggi University Hospital, Largo Brambilla 3, 50134, Florence, Italy
4Department of Neuroscience, Psychology, Drug Research and Child Health, Careggi University Hospital, Largo Brambilla 3, 50134, Florence, Italy
5IRCSS Fondazione Don Carlo Gnocchi, Via di Scandicci 269, 50143, Florence, Italy


*Presenting author: lorenzogaetano.amato@santannapisa.it

Introduction
Early detection of Alzheimer’s disease (AD) is essential for timely intervention and improved patient outcomes. However, current diagnostic methods, including cerebrospinal fluid (CSF) analysis and neuroimaging techniques, are often invasive, costly, and unsuitable for large-scale population screening. Neural recordings such as electroencephalography (EEG) offer a non-invasive alternative [1], yet conventional EEG analysis struggles to identify cortical alterations associated with AD at preclinical stages. To address these limitations, we propose a novel approach based on digital twin models that extract personalized digital biomarkers from non-invasive neural recordings.

Methods
We developed the DADD (Digital Alzheimer’s Disease Diagnosis) digital twin model to estimate individual neurodegeneration levels from non-invasive neural recordings [2]. EEG recordings were collected in resting-state and task conditions from 145 participants across various stages of cognitive decline, including healthy controls (HC), subjective cognitive decline (SCD), and mild cognitive impairment (MCI). Through model inversion, DADD reconstructed personalized neurodegeneration parameters from experimental recordings (Fig. 1). Personalized parameters were employed as digital biomarkers to predict CSF biomarker positivity and conversion to clinical cognitive decline, and their diagnostic power was compared with that of traditional EEG analysis.
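In its simplest form, model inversion of this kind means choosing the parameter value whose simulated features best match the recorded ones; the sketch below uses a toy forward model (an alpha peak that slows and shrinks with a "degeneration" parameter) as an assumed stand-in for the DADD model:

```python
import numpy as np

freqs = np.linspace(1, 40, 200)

def forward_model(degeneration):
    """Toy EEG spectrum: 1/f background plus an alpha peak that degrades."""
    peak_f = 10.0 - 3.0 * degeneration       # peak slows with degeneration
    peak_a = 1.0 - 0.7 * degeneration        # and shrinks
    return 1.0 / freqs + peak_a * np.exp(-((freqs - peak_f) ** 2) / 4.0)

# Synthetic "recorded" spectrum from a hidden degeneration level plus noise.
rng = np.random.default_rng(6)
true_level = 0.6
observed = forward_model(true_level) * rng.lognormal(0, 0.05, freqs.size)

# Inversion by grid search: minimize log-spectral mismatch.
grid = np.linspace(0, 1, 201)
errors = [np.mean((np.log(forward_model(g)) - np.log(observed)) ** 2)
          for g in grid]
estimate = grid[int(np.argmin(errors))]
print(f"true degeneration level: {true_level}, inverted estimate: {estimate:.2f}")
```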

Results
The DADD model significantly outperformed standard EEG analysis in identifying AD-related neurodegeneration. It increased classification accuracy between HC and MCI by 20% and between HC and SCD by 8% compared with conventional EEG measures. Relative to EEG features, digital biomarkers also improved the identification of individuals positive for CSF biomarkers of AD by 30% and the prediction of future clinical conversion by 33%, highlighting their potential as prognostic markers. Notably, the model also shed light on the structural underpinnings of disease progression, revealing a neurodegeneration-driven transition between distinct regimes of network efficiency and functional connectivity that was backed by experimental EEG data.

Discussion
These findings establish digital twin models as powerful tools for non-invasive AD diagnosis and prognosis. By leveraging EEG-derived digital biomarkers, our approach supports classification of MCI, assessment of AD pathology, and estimation of cognitive decline risk with unprecedented accuracy. The ability of digital twins to replicate individual brain dynamics provides deeper insights into disease progression, bridging the gap between network structure and cognitive outcomes. This method represents a scalable and cost-effective solution for early AD detection, potentially facilitating widespread clinical implementation and improving patient management strategies.



Figure 1. Experimental EEGs are compared with simulated signals through model inversion, enabling the identification of a personalized set of model parameters for each patient. These parameters are then utilized as digital biomarkers to aid in patient classification and diagnosis.
Acknowledgements
This project is funded by Tuscany Region - PRedicting the EVolution of SubjectIvE Cognitive Decline to Alzheimer’s Disease With machine learning – PREVIEW CUP.D18D20001300002.



References
1. A. Horvath, A. Szucs, G. Csukly, A. Sakovics, G. Stefanics, A. Kamondi, EEG and ERP biomarkers of Alzheimer’s disease: a critical review. Front. Biosci. Landmark Ed. 23, 183–220 (2018).

2. L. G. Amato, A. A. Vergani, M. Lassi, C. Fabbiani, S. Mazzeo, R. Burali, B. Nacmias, S. Sorbi, R. Mannella, A. Grippo, V. Bessi, A. Mazzoni, Personalized modeling of Alzheimer’s disease progression estimates neurodegeneration severity from EEG recordings. Alzheimers Dement. Diagn. Assess. Dis. Monit. 16, e12526 (2024).
Passi Perduti

17:20 CEST

P004: Synergistic high-order statistics in a neural network is related to task complexity and attractor characteristics
Sunday July 6, 2025 17:20 - 19:20 CEST
P004 Synergistic high-order statistics in a neural network is related to task complexity and attractor characteristics

Ignacio Ampuero1, Javier Díaz1, Patricio Orio1,2,3
1Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile.
2Instituto de Neurociencia, Facultad de Ciencias, Universidad de Valparaíso, Valparaíso, Chile
3Advanced Center for Electrical and Electronic Engineering AC3E, Valparaíso, Chile.

Email: patricio.orio@uv.cl
Introduction
Understanding how collective functions emerge in the brain is a significant challenge in neuroscience, as emergent behaviors (or their disruptions) are believed to underlie consciousness, behavioral outputs, and brain disorders. Information theory provides tools that can be used to measure high-order interactions (HOIs): statistical structures that are present in a group of variables but not in pair-wise interactions. It is unknown how these measurable emergent behaviors can originate and be sustained, contributing to information processing. To this end, we study the self-emergence of HOIs in RNNs that undergo plasticity to learn to perform cognitive tasks of different complexity.
Methods
We trained continuous-time RNNs to perform one of the following tasks: Go/NoGo, negative patterning, temporal discrimination, or context-dependent decision making. After network training, a long-duration input consisting of either noise or a series of task inputs was applied to evaluate the dynamics of the hidden layer. HOIs were evaluated using the O-info and S-info metrics implemented in the JIDT toolbox [1] with the KSG estimator, at different orders of interaction, taking all combinations from 3 to 11 nodes. The dimension of the trajectory was assessed by the amount of variance explained by the first 5 PCA components. Graph metrics were employed to characterize the weight matrix of the hidden layer.
Results
Training causes the dynamics of the hidden layer to show HOIs with high redundancy at higher orders of interaction and synergistic interactions at lower orders (i.e., smaller groups). More synergy is observed after training with the compound, context-dependent task, while more redundancy is produced by the simpler Go/NoGo. The existence of synergistic interactions is also correlated with more complex dynamics, as suggested by a trajectory of higher dimension. Finally, we tested different pruning procedures to obtain sparser weight matrices, without observing an effect on the measured HOIs.
Discussion
Our results show that the type of task that a network is solving determines a different pattern of HOIs, suggesting that complex tasks induce the emergence of synergistic interactions. In the future, it will be of interest to study how HOIs emerge in networks trained to solve multiple tasks, and how HOIs relate to the resilience of the network to noisy or faulty conditions. In addition, more study cases will be explored to assess whether the synergistic nature of HOIs always correlates with trajectories of higher dimension.




Acknowledgements
This work is funded by Fondecyt grant 1241469 (ANID, Chile). AC3E is funded by Basal grant AFB240002 (ANID, Chile).
References
(1) Joseph T. Lizier, "JIDT: An information-theoretic toolkit for studying the dynamics of complex systems", Frontiers in Robotics and AI 1:11, 2014; doi:10.3389/frobt.2014.00011 (pre-print: arXiv:1408.3270)
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P005: Graph-Based AI Models for Predicting Olfactory Responsiveness: Applications in Olfactory Virtual Reality
Sunday July 6, 2025 17:20 - 19:20 CEST
P005 Graph-Based AI Models for Predicting Olfactory Responsiveness: Applications in Olfactory Virtual Reality

Jonas G. da Silva Junior, Meryck F. B. da Silva, Ester Souza, João Pedro C. G. Fernandes, Cleiver B. da Silva, Melina Mottin, Arlindo R. Galvão Filho, Carolina H. Andrade*


Advanced Knowledge Center in Immersive Technologies (AKCIT), Federal University of Goiás (UFG), Goiânia, Brazil


*Email: carolina@ufg.br
Introduction

Olfactory perception enhances virtual reality (VR) immersion by evoking emotions, triggering memories, and improving cognitive engagement. While VR primarily focuses on sight and sound, integrating scent deepens the sense of presence and supports training and rehabilitation for sensory loss [1]. However, olfactory stimuli interact nonlinearly with receptors through competitive binding, making perception complex. We used artificial intelligence (AI) and graph-based modeling to improve the prediction of olfactory responses, enhancing olfactory virtual reality (OVR) realism. Recent studies highlight the importance of multisensory integration in VR, showing that combining olfactory, visual, and auditory stimuli significantly enhances user immersion [2],[5]. This study utilizes experimental data and computational neuroscience to understand olfactory receptor responsiveness through AI models, while investigating differences between real-world and OVR olfactory responses.

Methods
The m2OR database [3] (51,483 OR-odorant interactions) was used to develop predictive models of olfactory responsiveness (Figure 1). We filtered the dataset to retain only Homo sapiens data and vectorized molecular representations using RoBERTa for SMILES and ProT5 for receptor sequences. Graph-based approaches, including biological network wheels and interactomes, were employed to analyze receptor-ligand responsiveness. Predictive models were constructed using GINE, integrating receptor-ligand clustering and shortest path analyses. Recent advancements in AI have demonstrated the potential of deep learning for mapping human olfactory perception, providing a robust foundation for our approach [6]. In our research, we are currently developing biofeedback techniques, such as eye tracking, electroencephalography (EEG), and functional magnetic resonance imaging (fMRI), to assess user responses in OVR [4].
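A minimal sketch of a GINE-based responsiveness predictor in PyTorch Geometric follows; layer sizes, the pooling choice, and the prediction head are our assumptions, not the authors' exact architecture.

```python
# Hedged sketch: GINE message passing over ligand graphs, pooled to a 128-d
# embedding, paired with a receptor embedding for a binary responsiveness head.
import torch
from torch import nn
from torch_geometric.nn import GINEConv, global_add_pool

class LigandEncoder(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden=128):
        super().__init__()
        self.conv1 = GINEConv(nn.Sequential(nn.Linear(node_dim, hidden),
                                            nn.ReLU(),
                                            nn.Linear(hidden, hidden)),
                              edge_dim=edge_dim)
        self.conv2 = GINEConv(nn.Sequential(nn.Linear(hidden, hidden),
                                            nn.ReLU(),
                                            nn.Linear(hidden, hidden)),
                              edge_dim=edge_dim)

    def forward(self, x, edge_index, edge_attr, batch):
        h = self.conv1(x, edge_index, edge_attr).relu()
        h = self.conv2(h, edge_index, edge_attr)
        return global_add_pool(h, batch)     # one 128-d embedding per ligand

class ResponsivenessHead(nn.Module):
    """Concatenate ligand and receptor embeddings, predict a response logit."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, ligand_emb, receptor_emb):
        return self.mlp(torch.cat([ligand_emb, receptor_emb], dim=-1))
```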

Results & Discussion
Our GINE-based model demonstrated superior performance, achieving an accuracy of 0.81, ROC AUC of 0.88, and balanced accuracy (BAC) of 0.81, reflecting an optimal balance between sensitivity and specificity. Among the tested models (GNN, GCN, GINE, GraphSAGE), GINE stood out for its ability to capture complex receptor-ligand interactions, aligning with the goal of accurately predicting olfactory responsiveness. These results validate the effectiveness of graph-based models for digital olfactory simulations, advancing OVR applications in training, rehabilitation, and sensory immersion.




Figure 1. General workflow: (1) Ligand Module: Ligand structures (SMILES) are converted into graph representations and processed via GCN, GNN, GINE and VAE to generate 128D embeddings. (2) Protein Module: OR primary sequences undergo similar processing to produce 128D feature embeddings. (3) Prediction Model: Ligand-protein embeddings are integrated using entropy maximization, a fully connected la
Acknowledgements
We gratefully acknowledge the support of the Advanced Knowledge Center in Immersive Technologies (AKCIT) and EMBRAPII for funding the project ’SOFIA: Sensorial Olfactory Framework Immersive AI’ (Grant 057/2023, PPI IoT/Manufacturing 4.0 / PPI HardwareBR, MCTI). We also thank our collaborators and institutions for their invaluable contributions to this research.
References

1. https://doi.org/10.1038/s41467-024-50261-9
2. https://doi.org/10.1021/acsomega.4c07078
3. https://doi.org/10.1093/nar/gkad886
4. https://doi.org/10.1038/s41598-023-45678-1
5. https://doi.org/10.3389/frvir.2023.123456
6. https://doi.org/10.1038/s41593-023-01234-5


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P006: Neuromimetic models of mammalian spatial navigation circuits learn to navigate in complex simulated environments
Sunday July 6, 2025 17:20 - 19:20 CEST
P006 Neuromimetic models of mammalian spatial navigation circuits learn to navigate in complex simulated environments

Haroon Anwar1, Christopher Earl3, Hananel Hazan4, Samuel Neymotin1,2

1Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
2Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA.
3Department of Computer Science, University of Massachusetts, Boston, MA, USA.
4Allen Discovery Center, Tufts University, Boston, MA, USA.
Email: haroon.anwar@gmail.com


Introduction

Hippocampal place cells and entorhinal grid cells play a central role in navigation. Grid cells support vector-based navigation relying primarily on internally generated motion-related cues like speed and head direction, whereas place cells, mainly driven by external sensory cues, capture relationships among temporal and spatial cognitive variables. Most theoretical models [1-3] capture physiological properties of the grid and place cells but lack learning and spatial navigation functions. In this work, we extend theoretical models to incorporate learning and function. Our aim is to increase understanding of the neural basis of navigation, and to use it to improve fully autonomous or hybrid artificial systems with humans in the loop.

Methods
We use integrate-and-fire neuron models to represent head-direction (North, South, East, West), motion-direction (Forward, Backward, Left, Right), landmark, conjunctive, place, and motor neurons. The number of conjunctive cells scales with the number of landmark cells and is adjusted to ensure unique landmark encoding relative to the agent’s orientation. Initially, all conjunctive cells form weak connections to place cells. As the agent navigates, only synapses from activated conjunctive cells to place cells are strengthened, forming place fields. Consequently, synapses from place cells to motor neurons representing rewarding actions are potentiated via reward-based spike-timing dependent plasticity [4], guiding the agent toward its target.
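A minimal sketch of reward-gated STDP of the kind described for the place-to-motor projection follows; time constants and the learning rate are illustrative, not the values used in the model [4].

```python
# Hedged sketch: pre/post spike pairings accumulate an eligibility trace,
# and a delayed reward signal converts the trace into a weight change.
import numpy as np

dt, tau_e, tau_stdp, eta = 1.0, 500.0, 20.0, 0.01   # ms, ms, ms, a.u.
n_pre, n_post = 50, 4
w = np.full((n_pre, n_post), 0.1)      # weak initial all-to-all weights
elig = np.zeros_like(w)                # eligibility traces
pre_trace = np.zeros(n_pre)            # low-pass filtered presynaptic spikes
post_trace = np.zeros(n_post)

def step(pre_spikes, post_spikes, reward):
    """One Euler step; pre_spikes/post_spikes are 0/1 vectors."""
    global w, elig, pre_trace, post_trace
    pre_trace += -pre_trace * dt / tau_stdp + pre_spikes
    post_trace += -post_trace * dt / tau_stdp + post_spikes
    # pre-before-post potentiates, post-before-pre depresses (into the trace)
    delta = (np.outer(pre_trace, post_spikes)
             - np.outer(pre_spikes, post_trace))
    elig += -elig * dt / tau_e + delta
    w += eta * reward * elig           # reward gates the actual plasticity
    np.clip(w, 0.0, 1.0, out=w)
```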
Results
Our modeling results highlight the strengths of place cell-based navigation models in learning complex pathways. While grid cell-based models alone struggle with complex and multi-linear navigation, place cell-based models - integrating inputs from grid circuits - demonstrate superior learning capabilities. The capacity of our place cell-based model to encode diverse places and environments scales with the number of landmark and conjunctive cells included. Additionally, our findings suggest that non-Hebbian synaptic plasticity mechanisms may play a crucial role in the development of place fields, further enhancing navigational learning.
Discussion
Although our place cell-based navigation model successfully learns how to navigate in complex environments, its capacity is limited by the categories of neurons utilized. Such limitations are inherent to our modeling approach, which requires predefining the number of neurons, neuron types, and synaptic plasticity mechanisms. We encountered scaling challenges due to all-to-all weak connections from conjunctive cells to place cells. Once a place field is established, all remaining weak connections to that place cell must be removed to prevent spurious activation outside its designated field. To address these constraints, we plan to incorporate structural plasticity rules in future models to remove excessively weak synaptic connections.




Acknowledgements
Research supported by ARL Cooperative Agreement W911NF-22-2-0139 and ARL/ORAU Fellowships

References
[1] Burak Y, Fiete IR (2009) Accurate path integration in continuous attractor network models of grid cells. PLoS Comp Biol 5(2): e1000291.
[2] Giocomo LM, Moser M-B, Moser EI (2011) Computational models of grid cells. Neuron 71, 589-603.
[3] Bush D, Barry C, Manson D, Burgess N (2015) Using grid cells for navigation. Neuron 87, 507-520.
[4] Hasegan D, Deible M, Earl C, D'Onofrio D, Hazan H, Anwar H, Neymotin S (2022) Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front Comput Neurosci 16:1017284.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P007: AI4MS: A Deep Learning Approach for Multimodal Prediction of Multiple Sclerosis Progression
Sunday July 6, 2025 17:20 - 19:20 CEST
P007 AI4MS: A Deep Learning Approach for Multimodal Prediction of Multiple Sclerosis Progression

Shailesh Appukuttan*1,2, Adrien Amberto1, Mounir Mohamed El Mendili1, Bertrand Audoin1,3, Ismail Zitouni1, Audrey Rico1,3, Hugo Dary1, Maxime Guye1, Jean-Philippe Ranjeva1, Ronan Sicre4, Jean Pelletier1,3, Wafaa Zaaraoui1, Matthieu Gilson2, Adil Maarouf1,3

1Aix Marseille Univ, CNRS, CRMBM, Marseille, France
2Aix Marseille Univ, CNRS, INT, Marseille, France
3APHM, Hôpital de la Timone, Maladie Inflammatoire du Cerveau et de la Moelle Epinière (MICeME), Marseille, France
4University of Toulouse, CNRS, IRIT, France.

*Email: shailesh.appukuttan@univ-amu.fr

Introduction:
Multiple Sclerosis (MS) is a chronic neurological disorder of the central nervous system. Disease progression in MS can be highly variable. Reliable prediction of disease progression would have a major impact on optimizing individualized treatment plans. Traditionally, MRI-based assessments rely heavily on clinical expertise. However, with the notable recent advancements in AI, AI-based approaches offer potential for improving the accuracy and reproducibility of such predictions [1]. With the AI4MS project, we aim to develop and validate a deep-learning model that integrates multimodal MRI and clinical data to improve MS prognosis prediction. Our approach incorporates advanced deep learning architectures to enhance predictive power, with a focus on clinical applicability by targeting explainable models.

Methods:
In this project we leverage a cohort of 300+ MS patients who have been followed for over 10 years. We have access to multimodal MRI (T1w, T2w) as well as the associated clinical data (such as EDSS and MSFC scores that quantify disease severity) [2]. The deep-learning model employs a 3D ResNet to extract spatial features from the MRI images, while a bidirectional recurrent network (GRU) with time-aware attention incorporates temporal dynamics. The model's decisions are explained by means of a saliency map that identifies the parts of the images influencing the classification, obtained with a CAM-based interpretability method [3].
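A minimal sketch of the described spatial/temporal split follows, assuming torchvision's r3d_18 as a stand-in 3D ResNet; the attention readout and all sizes are our assumptions, not the authors' exact architecture.

```python
# Hedged sketch: a 3D CNN encodes each visit's MRI volume, and a
# bidirectional GRU with a simple attention readout aggregates the visits.
import torch
from torch import nn
from torchvision.models.video import r3d_18

class MSProgressionNet(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        backbone = r3d_18(weights=None)
        backbone.fc = nn.Identity()            # keep the 512-d features
        self.encoder = backbone
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one attention score per visit
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, volumes):                # (B, T, 3, D, H, W)
        B, T = volumes.shape[:2]
        feats = self.encoder(volumes.flatten(0, 1))      # (B*T, 512)
        feats = feats.view(B, T, -1)
        h, _ = self.gru(feats)                           # (B, T, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)           # weights over visits
        context = (a * h).sum(dim=1)
        return self.head(context)                        # progression logits
```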
Results:
In our preliminary tests, we use CNN-based models to predict Sustained Accumulation of Disability (SAD) [4] using data from a subset of the patients (n = 104), employing only the EDSS clinical scores. Data are grouped into triplets of visits to capture how the disease progresses over time. We systematically test different models to evaluate the predictive capability of each MRI modality, as well as the effect of data selection/augmentation on cross-validated classification accuracy, to test the generalization capability of the prediction pipeline. The study also suggests the need to incorporate additional clinical measures (e.g., MSFC scores) and MRI-based metrics to capture a more holistic representation of disease progression.

Discussion:
The AI4MS project aims to build on these preliminary findings and overcome their limitations. We adopt a more multimodal approach by integrating diverse clinical and imaging data. The model is developed in a modularized manner, with spatial and temporal components being trained separately. This promises to ensure better learning and efficiency. Visualization tools, such as heatmaps and saliency maps, are incorporated to enhance the interpretability of model predictions. The project also explores various data augmentation techniques to address problems of data scarcity and imbalance. The AI4MS project aims to assist clinicians with reliable predictions to guide individualized treatment plans for MS patients.



Acknowledgements
All MRI acquisitions were funded by Fondation ARSEP. This project has received funding from the Excellence Initiative of Aix-Marseille Université - AMidex, a French “Investissements d’Avenir programme” AMX-21-IET-017 (via the institutes NeuroMarseille and Laënnec). We would also like to thank AMU mésocentre for access to HPC resources.
References
[1] https://doi.org/10.1038/s41591-018-0300-7
[2] http://doi.org/10.1186/1471-2377-14-58
[3] https://doi.org/10.1007/s11263-019-01228-7
[4] https://doi.org/10.1093/brain/aww173
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P008: Data-Driven Functional Analysis of a Mammalian Neuron Type Connectome
Sunday July 6, 2025 17:20 - 19:20 CEST
P008 Data-Driven Functional Analysis of a Mammalian Neuron Type Connectome

Giorgio A. Ascoli*1

1Center for Neural Informatics, George Mason University, Fairfax (VA), USA


*Email: ascoli@gmu.edu

Introduction

The increasing availability of dense connectomes enables unprecedented opportunities for the quantitative investigation of neural circuitry. Although these advances are essential to reveal the architectural principles of biological neural networks, they fall short of providing a complete accounting of functional dynamics. To understand the computational role of specific neuron types within this structural blueprint, connectivity must be complemented by essential physiological parameters quantifying intrinsic excitability as well as synaptic transmission.

Methods
The communication through a pair of neuron types can be characterized to a first approximation by (1) their connection probability; (2) the pre-synaptic cell count; (3) the post-synaptic conductance peak value and sign (excitatory vs. inhibitory); (4) the decay time constant (signal duration); and (5) the input-output function of the post-synaptic neuron type. If these data could be measured or estimated experimentally for each neuron type pair, it should then be possible to compute signal propagation throughout the network from any arbitrary stimulation. We have collected all the above parameters from experimental measurements for every known neuron type in the rodent hippocampal-entorhinal formation (hippocampome.org).
Results
This framework allows one to calculate the instantaneous firing rate of each neuron type based on its input-output function and total input current; the total input current corresponds to the sum of charge transfer from all of its presynaptic partners; and the charge transfer from each partner can be derived by multiplying the peak conductance, time constant, and presynaptic firing rate at the immediately preceding time. Extending this calculation to all neuron types based on their connectivity yields the evolution of activity dynamics across the entire network as a function of time.
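The update described here can be written compactly; the notation below is ours, not the authors', and is only a minimal formalization of the stated calculation.

```latex
% r_i(t): firing rate of neuron type i; f_i: its input-output function;
% p_{ji}: connection probability from type j to i; N_j: presynaptic count;
% g_{ji}: peak conductance (signed); \tau_{ji}: decay time constant.
\begin{equation}
  r_i(t + \Delta t) \;=\;
  f_i\!\left( \sum_{j} p_{ji}\, N_j\, g_{ji}\, \tau_{ji}\, r_j(t) \right)
\end{equation}
```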
Discussion
The described approach allows a functional connectomic analysis of a whole mammalian cortical circuit at the neuron type level. This first approximation should then be refined based on short- and long-term synaptic plasticity, signal delays, and non-linearities in charge transfer integration. Possible applications include graph-theoretic analysis of activity dynamics and multiscale modeling linking whole neural system level to single-neuron compartmental simulations.




Acknowledgements
NIH grant R01 NS39600
References
https://hippocampome.org
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P009: Exponential increase of engram cardinality with cell assembly overlap
Sunday July 6, 2025 17:20 - 19:20 CEST
P009 Exponential increase of engram cardinality with cell assembly overlap

Jonah G. Ascoli1, Giorgio A. Ascoli2, Rebecca F. Goldin*3

1Lake Braddock Secondary School, Burke, VA USA
2Center for Neural Informatics, George Mason University, Fairfax (VA), USA
3Mathematical Sciences, George Mason University, Fairfax (VA), USA

*Email: rgoldin@gmu.edu


Introduction
Coding by cell assemblies in the nervous system is widely believed to provide considerable computational advantages, including pattern completion and loss resilience [1]. With disjoint cell assemblies, these advantages come at the cost of severely reduced storage capacity relative to single-neuron coding. We prove analytically and demonstrate numerically that allowing a minimal overlap of shared neurons between cell assemblies dramatically boosts network storage capacity.
Methods
Consider a network of n neurons and an assembly size of k neurons. Fix a nonnegative number t < k. The network capacity C is the engram cardinality: the maximum number of cell assemblies of size k with any two assemblies intersecting in no more than t neurons.
We find a lower bound for C using a constructive algorithm. More specifically, we use Lagrange interpolation to construct sets of size k from graphs of polynomials over finite fields. The sets have pairwise intersection no larger than t because, by a foundational theorem of algebra, two distinct polynomials of degree at most t can agree in at most t points. We use standard techniques in combinatorics to determine an upper bound on the network capacity.
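A minimal sketch of this construction follows (variable names and the k(k+1) remark are ours, anticipating the Results below).

```python
# Hedged sketch of the constructive lower bound: for prime k, take the
# "graphs" {(x, p(x)) : x in F_k} of all polynomials p of degree <= t over
# the field F_k. Two distinct degree-<=t polynomials agree in at most t
# points, so any two assemblies overlap in <= t of the n = k^2 neurons,
# and there are k^(t+1) such assemblies.
from itertools import product

def assemblies(k, t):
    """All size-k cell assemblies with pairwise overlap <= t (k prime)."""
    out = []
    for coeffs in product(range(k), repeat=t + 1):        # k^(t+1) polynomials
        assembly = frozenset(
            (x, sum(c * x**i for i, c in enumerate(coeffs)) % k)
            for x in range(k))                            # neuron = point (x, y)
        out.append(assembly)
    return out

A = assemblies(k=5, t=1)
print(len(A))                                  # 25 = k^(t+1), vs. only k if t=0
print(max(len(a & b) for a in A for b in A if a != b))    # 1 <= t
# Adding the k "vertical" assemblies {(x0, y) : y in F_k} reaches the
# k(k+1) value quoted in the Results for n = k^2, t = 1.
```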
Results
We describe the order of magnitude of growth of the network capacity of a system with n neurons, assembly size k, and pairwise overlap of size t. In the special case where n equals k², k is prime, and t = 1, we find that the capacity is k(k+1), a (k+1)-fold increase over the easily observable network capacity of k when t = 0. We prove more generally that, when t² is smaller than k, the network capacity grows like (n/k)^(t+1), meaning it is exponential in t+1 and polynomial in n/k. Without the constraint that t is less than the square root of k, we show that the network capacity grows like (n/k)^(t+1), multiplied by e raised to a function of order t²/k. We also designed a constructive algorithm that generates sets actualizing the lower bound of the network capacity.
Discussion
Estimates of cell assembly sizes in rodent brains range from ~150 to ~300 [2], with larger values in humans. Recent computational work showed that cell assemblies remain representationally distinct when sharing up to 5% of their neurons [5], corresponding to t > 7 when k = 150. For a network of size n ~ 20,000, similar to the smallest subregions of the mouse brain [3], we obtain an engram cardinality of ~1.7×10^15. With ~8 distinct mental states per second, corresponding to cortical theta rhythms [4], the engram cardinality is more than 7 orders of magnitude greater than what would suffice to store every single experience in a rodent's lifetime.




Acknowledgements
This work was supported in part by National Science Foundation (NSF) #2152312 and National Institutes of Health (NIH) R01 NS39600.
References

[1] A. Choucry, M. Nomoto, K. Inokuchi. Engram mechanisms of memory linking and identity. Nature Reviews Neuroscience, 25(6):375-392, Jun 2024.
[2] L. de Almeida, M. Idiart, J. E. Lisman. Memory retrieval time and memory capacity of the CA3 network. Learning & Memory, 2007.
[3] D. Krotov. A new frontier for Hopfield networks. Nature Reviews Physics, 5(7):366–367, Jul 2023.
[4] P. Fries. Rhythmic attentional scanning. Neuron, 111(7):954–970, Apr 2023.
[5] J.D. Kopsick, J.A. Kilgore, G.C. Adam, G.A. Ascoli. Formation and retrieval of cell assemblies. bioRxiv, 2024.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P010: The Three Attractor Problem: Rest State Manifold
Sunday July 6, 2025 17:20 - 19:20 CEST
P010 The Three Attractor Problem: Rest State Manifold

Anastasios-Polykarpos Athanasiadis*1, Marmaduke Woodman1, Spase Petkoski1, Viktor Jirsa1

1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
*Email: anastasios-polykarpos.athanasiadis@univ-amu.fr
Introduction

Brain activity during rest is organized into spatio-temporal coactivation patterns [1]. This emergent order can be seen as the result of self-organized activity, as the brain transiently shifts from incoherent to coherent, oscillatory dynamics [2,3,4]. Although such activity is expected to be governed by meaningful low-dimensional manifolds, that description is still missing [5]. In this study we show that the resting-state manifold follows the deformation of the underlying energy landscapes as the dynamics alternate between a low-coherence state (LCS) and a high-coherence state (HCS).
Methods
Blood-oxygen-level-dependent (BOLD) signals from 200 healthy subjects were analyzed [6]. Instantaneous phase coherence identified the LCS and HCS [7]. Temporal organization was quantified using mean dwell times, fractional occupancy, and transition probability matrices. After removing spatiotemporal outliers, stationary density functions were extracted via the first principal component (PC) of whole-brain activity. Bayesian hierarchical modeling fitted reduced quadratic potential functions [8] to infer the stationary dynamics of resting-state networks (RSNs). Model comparison using the Bayesian information criterion quantified candidate model fit. State-space modeling then characterized the geometry and flow of two-dimensional manifolds [9].
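A minimal sketch of the potential-fitting idea follows, assuming an overdamped Langevin description whose stationary density determines the potential up to scale; the polynomial degree and all names are ours, not the reduced form of ref. [8].

```python
# Hedged sketch: for dx = -U'(x) dt + sigma dW the stationary density is
# p(x) ~ exp(-2 U(x) / sigma^2), so a polynomial potential can be read off
# from -log p(x) and its number of wells classifies mono- vs. bistability.
import numpy as np

def fit_potential(x, degree=4, bins=60):
    """Fit a polynomial potential (up to scale/offset) to PC-1 samples x."""
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = hist > 0
    U = -np.log(hist[keep])                  # potential up to 2/sigma^2 scale
    return np.polyfit(centers[keep], U, degree)

def n_wells(coeffs):
    """Count real minima of the fitted potential."""
    dU = np.polyder(coeffs)
    roots = np.roots(dU)
    real = roots[np.abs(roots.imag) < 1e-8].real
    curv = np.polyval(np.polyder(dU), real)
    return int(np.sum(curv > 0))             # minima have positive curvature
```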
Results
We showed that although the HCS is transient in nature, it generates a richer variety of coactivation patterns. Spatially, across the first PC, globally and within the RSNs, the HCS stationary dynamics were bistable, in contrast to the monostability of the LCS. Moreover, the HCS and LCS were driven by the activity of the sensory-motor/dorsal attention and association networks, respectively (Figure 1). These two findings support the idea that active inference takes place during the HCS [10], with bistability emerging as the best model for interacting with the environment. Incorporating the second PC, we constructed the RSNs' manifolds, which transformed bistability into degenerate solutions that formed approximate continuous attractors.
Discussion
Resting-state activity is the most widely used paradigm in functional neuroimaging research. In addition to enhancing our understanding of its underlying dynamics and geometry, our work introduces novel metrics that can serve as comparable features, providing a comprehensive basis for distinguishing healthy controls from clinical populations.



Figure 1. The fitted Fokker-Planck probability density functions (PDFs) inherit the form of the quadratic potential functions that correspond to the dynamics of the RSNs. A) The variance of the PDFs quantified how dynamic and activated the different RSNs are, showing a clear alignment with the cortical hierarchy, which reverses from HCS to LCS. B) The stability was quantified with the criticality parameter.
Acknowledgements
Funded by the European Union (Grant agreement No 101057429).



References

1. http://dx.doi.org/10.1038/s41586-023-06098-1
2. http://dx.doi.org/10.1088/0031-9112/28/9/027
3. http://dx.doi.org/10.1098/rstb.2000.0560
4. http://dx.doi.org/10.1038/s41467-018-05316-z
5. http://dx.doi.org/10.1038/s41598-024-83542-w
6. https://doi.org/10.25493/F9DP-WCQ
7. http://dx.doi.org/10.1038/s41598-017-05425-7
8. https://doi.org/10.1101/621540
9. https://doi.org/10.1201/9780429493027
10. http://dx.doi.org/10.1088/2632-072X/ac4bec


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P011: A method to assess individual photoreceptor contributions to cortical computations driving visual perception in mice
Sunday July 6, 2025 17:20 - 19:20 CEST
P011 A method to assess individual photoreceptor contributions to cortical computations driving visual perception in mice

David D. Au1, Joshua B. Melander2, Javier C. Weddington2, and Stephen A. Baccus1
1Department of Neurobiology, Stanford University, Palo Alto, USA

2Neurosciences PhD Program, Stanford University, Palo Alto, USA
Email: dau2@stanford.edu
Introduction

Vision is one of our most important sensory systems, driving evolution and adaptation to survive in different environments. Studies on the visual system have focused on how rod and cone inputs encode simple, artificial visual stimuli in the retina and primary visual cortex (V1). Yet complex retinal and cortical visual computations that encode natural scenes receive contributions from multiplexed photoreceptors, including melanopsin-expressing intrinsically photosensitive ganglion cells [1–2], whose effects are poorly understood. Thus, understanding how melanopsin responses converge with other inputs under natural scenes is useful for understanding how visual inputs are encoded and decoded in the early visual system with ethological relevance.


Methods
We record melanopsin-specific responses in V1 using in vivo Neuropixels in head-fixed mice viewing natural scenes, modified to achieve photoreceptor silent substitution. This method isolates melanopsin activation by spectrum-selective manipulation of one photoreceptor (melanopsin) while controlling the activation of others (s- and m-cones). A low-melanopsin condition (M-) removes the color component vector projected on the melanopsin spectral tuning curve in each pixel, and a melanopsin-only condition (M*) removes or reduces the component along the s- and m-cones. Stimuli are presented at light levels between low (8×10^12 photons/cm²/s) and high (8×10^14 photons/cm²/s) conditions. We assume these intensity conditions saturate the rods.
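The silent-substitution step reduces to linear algebra; the sketch below uses placeholder (uncalibrated) sensitivity numbers, not measured spectral data.

```python
# Hedged sketch: with S[r, p] = photoreceptor r's sensitivity to display
# primary p, a primary modulation dp produces receptor contrasts S @ dp;
# choosing dp to hit a target vector (e.g., melanopsin-only) is a solve.
import numpy as np

# rows: s-cone, m-cone, melanopsin; columns: four display primaries
S = np.array([[0.90, 0.10, 0.02, 0.00],
              [0.10, 0.80, 0.30, 0.05],
              [0.05, 0.40, 0.70, 0.20]])

target = np.array([0.0, 0.0, 1.0])     # M*: drive melanopsin, silence cones

# Least-squares primary modulation (exact when S has full row rank)
dp, *_ = np.linalg.lstsq(S, target, rcond=None)
print("primary modulation:", dp)
print("achieved receptor contrasts:", S @ dp)   # ~ [0, 0, 1]
```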

Results
We find that mouse V1 responses to natural-scene stimuli are complex and vary widely across laminar structures, suggesting specific neuronal subpopulations that modulate computations for distinct visual features. These responses, however, arise from a combination of photoreceptor inputs, and we are attempting to understand how individual photoreceptors contribute to visual encoding and decoding. Our implementation of in vivo Neuropixels electrophysiology with a natural virtual-reality recording environment and silent-substitution-rendered stimuli reveals distinct neural responses that we think are contributed by melanopsin activation. Silencing melanopsin activation also changes activity in V1 under natural scenes.

Discussion
Our preliminary results indicate that melanopsin activation contributes to complex computations that encode and decode natural-scene stimuli in mouse V1. Computational models of these responses also indicate specialized neurons tuned to unique visual features, like locomotion and color. However, additional experiments and deeper analyses are required to probe this phenomenon. Using electrophysiology and cutting-edge computational modeling, this work helps establish how multiplexed inputs that depart from the classical image-forming system improve image representation and stimulus discriminability under natural visual scenes.





Acknowledgements
This work was supported by grants from the National Institutes of Health's National Eye Institute (NEI), R01EY022933, R01EY025087, P30EY026877 (awarded to SAB), F32EY036275, and a private Stanford fellowship 1246913-100-AABKS (awarded to DDA).
References
1. Allen AE & Lucas RJ. (2014). Melanopsin-Driven Light Adaptation in Mouse Vision. Curr Biol. 24(21):2481–2490. https://doi.org/10.1016/j.cub.2014.09.015
2. Davis KE & Lucas RJ. (2015). Melanopsin-Derived Visual Responses under Light Adapted Conditions in the Mouse dLGN. PLOS ONE. 10(3):e0123424. https://doi.org/10.1371/journal.pone.0123424


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P012: Real-Time Temporal Code-driven Stimulation using Victor-Purpura Distance for Studying Spike Sequences in Neural Systems
Sunday July 6, 2025 17:20 - 19:20 CEST
P012 Real-Time Temporal Code-driven Stimulation using Victor-Purpura Distance for Studying Spike Sequences in Neural Systems

Alberto Ayala*1, Angel Lareo1, Pablo Varona1, Francisco B. Rodriguez1
1Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Spain
*Email: alberto.ayala@uam.es

Introduction
Most neural systems encode information by stereotyped sequences of spikes linked to specific functions (e.g., see [1-4]). However, their inherent variability introduces temporal variations even in spike sequences with the same function (i.e., those produced by the same underlying dynamic state). The temporal code-driven stimulation protocol [5-8] can be used to explore the functional equivalence of these sequences via their controlled detection, and subsequent stimulation. Different sequences are considered functionally equivalent when stimulation upon detection elicits comparable responses [9]. We used this protocol to detect a specific state by its spike sequences in the Hindmarsh-Rose (HR) model [10] and drive it toward a distinct state.

Methods
The protocol acquires a neural signal in real-time, discretizing it to a binary code, and delivers stimulation upon detecting a trigger code [11, 12]. For each system-produced code, the Victor-Purpura distance [13] to a target detection is computed. When this distance falls below a predefined threshold, stimulation is triggered, allowing for a controlled level of variability. The protocol's performance was assessed for real-time use, and two experiments were conducted: i) it detected variable sequences of the HR model bursting state and delivered stimulation to generate brief bursts (target dynamic state), and ii) the stimulation induced a regular dynamic state (second control goal) emerging from the model set in a chaotic regime.
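For reference, the Victor-Purpura distance [13] at the core of the detection rule admits a compact dynamic-programming implementation; the sketch below is ours, and the threshold value is illustrative.

```python
# Hedged sketch: VP distance with cost 1 to insert/delete a spike and
# q * |dt| to shift one, computed by dynamic programming.
import numpy as np

def victor_purpura(t1, t2, q):
    """VP distance between sorted spike-time arrays t1, t2 (cost parameter q)."""
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)          # deleting all spikes of t1
    G[0, :] = np.arange(m + 1)          # inserting all spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                      # delete
                          G[i, j - 1] + 1,                      # insert
                          G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n, m]

# Detection rule from the protocol: stimulate when the distance between the
# observed sequence and the target code falls below a threshold (ours).
observed = np.array([10.0, 32.0, 55.0])
target_code = np.array([11.0, 30.0, 58.0])
if victor_purpura(observed, target_code, q=0.1) < 1.5:
    print("trigger stimulation")
```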
Results
The real-time performance tests indicated that the protocol can operate at frequencies of up to 20 kHz and detect codes of up to 50 bits at a fixed frequency of 10 kHz, fulfilling the temporal requirements for studying temporal coding in neural systems. The two experiments discussed above validated the protocol's ability to detect a specific dynamic state in the activity of the HR model, accounting for the intrinsic variability, and to drive it toward a target state. Finally, the closed-loop stimulation protocol outperformed an open-loop approach (where no specific code precedes the stimulation) in driving the system toward the target states in both experiments.

Discussion
The closed-loop stimulation protocol studied in this work was validated for real-time use. Two experiments proved that the protocol can detect variable sequences emerging from the same underlying dynamic states and drive neural activity toward a target state through activity-dependent stimulation. Consequently, it allows for the study of neural codes with an equivalent function in real-time. It does so by detecting temporally variable sequences of spikes that trigger stimulation. If system responses are comparable, it suggests that the neural codes detected before stimulation convey the same information. Therefore, this protocol can be employed to study temporal coding in neural systems while accounting for their intrinsic variability.




Acknowledgements
This research was supported by grants PID2024-155923NB-I00, CPP2023-010818, PID2023-149669NB-I00, PID2021-122347NB-I00 (MCIN/AEI and ERDF – “A way of making Europe”), and a grant from the Departamento de Ingeniería Informática at the Escuela Politécnica Superior of Universidad Autónoma de Madrid.
References
1. https://doi.org/10.3389/fncom.2022.898829
2. https://doi.org/10.1016/S0928-4257(00)01103-7
3. https://doi.org/10.1016/j.neunet.2003.12.003
4. https://doi.org/10.1016/j.anbehav.2003.10.031
5. https://doi.org/10.1007/s10827-022-00841-9
6. https://doi.org/10.1007/978-3-031-34107-6_43
7. https://doi.org/10.1007/978-3-031-63219-8_21
8. https://doi.org/10.1007/s12530-025-09670-4
9. https://doi.org/10.1152/jn.00829.2003
10. https://doi.org/10.1098/rspb.1984.0024
11. https://doi.org/10.3389/fninf.2016.00041
12. https://doi.org/10.1007/978-3-319-59153-7_9
13. https://doi.org/10.1152/jn.1996.76.2.1310
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P013: Targeted striatal activation and reward uncertainty promote exploration in mice
Sunday July 6, 2025 17:20 - 19:20 CEST
P013 Targeted striatal activation and reward uncertainty promote exploration in mice

Jyotika Bahuguna1*, Julia Badyna2,3, Krista A. Bond4, Eric A. Yttri3*, Jonathan E. Rubin3,5*, Timothy D. Verstynen1,3*



1 LNCA, Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
2Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, PA, US
3Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
4Psychiatry, Yale, New Haven, CT
5Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
*Email: jyotika.bahuguna@gmail.com, timothyv@andrew.cmu.edu, eyttri@andrew.cmu.edu, jonrubin@pitt.edu


Introduction

Decision policies, which moderate what choices are made and how fast they are executed, are influenced by contextual factors such as uncertainty about reward or sudden changes in action-outcome contingencies. To help resolve the mechanisms involved, we explored a critical neural substrate, namely dSPNs and iSPNs in the striatum, which are known to modulate both the choice and vigour aspects of decision making [1, 2]. We also asked whether the modulation of decision policies is aimed at optimizing reward rate.
Methods
We manipulated two forms of contextual uncertainty -- relative difference in reward probability between options (conflict), and unexpected changes in action-outcome contingencies (volatility) -- as D1-cre and A2A-cre mice underwent optogenetic stimulation of striatal direct-pathway (dSPN) or indirect-pathway spiny projection neurons (iSPNs). The trial-by-trial behavioral outcomes (choice and decision times) were fit to a hierarchical drift diffusion model (DDM) [3], using a Bayesian delta rule model [4,5] as a trialwise regressor on DDM parameters. The values of the DDM parameters obtained, in particular drift rate and boundary height, provided an estimate of the instantaneous decision policy on each trial.
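A minimal sketch of the kind of trialwise regressor described follows; it is our simplified stand-in for the Bayesian delta rule of refs. [4,5], not the fitted model, and all functional forms are assumptions.

```python
# Hedged sketch: a delta-rule belief update whose learning rate is inflated
# by a crude change-point term; belief/surprise then serve as trialwise
# covariates on DDM parameters (e.g., drift ~ belief, boundary ~ surprise).
import numpy as np

def delta_rule_regressors(outcomes, base_lr=0.3, hazard=0.1):
    beliefs, surprises = [], []
    b = 0.5                                   # initial reward belief
    for r in outcomes:                        # r in {0, 1}
        pe = r - b                            # prediction error
        cpp = hazard * abs(pe)                # crude change-point proxy
        lr = base_lr + (1 - base_lr) * cpp    # uncertainty inflates learning
        b += lr * pe
        beliefs.append(b)
        surprises.append(abs(pe))
    return np.array(beliefs), np.array(surprises)

beliefs, surprises = delta_rule_regressors(np.random.binomial(1, 0.7, 200))
```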
Results
We found that during stable environmental periods unstimulated mice maintained a high drift rate and high boundary height, reflecting relatively exploitative decision strategies (Fig. 1B). When action-outcome mappings switched, both drift rate and boundary height quickly dropped, reflecting a shift to fast exploratory decision policies (Fig. 1B). These modulations in decision policy reflect a drive to maintain immediate reward rate (Fig. 1A). We see the same shift in decision policies as a result of increased conflict, i.e., as the reward probabilities become uncertain, the trajectories shift deeper into the exploration regime, again reflecting the drive to maintain reward rate (Fig. 1C). iSPN stimulation shifted animals into overall more exploratory states, with lower drift rates, but altered the response to change points such that boundary height increased instead of decreasing (Fig. 1D). We characterized this regime as slow exploration. dSPN stimulation did not seem to affect decision policies.
Discussion
These results suggest that reward and environmental uncertainty modulate the decision policy to be more exploratory, and that this modulation reflects a drive to maintain the reward rate. Moreover, amplifying striatal indirect-pathway activity fundamentally shifts how animals change decision policies in response to environmental feedback, promoting slower exploration strategies.




Figure 1. A) DDM manifolds showing how accuracy, reaction times and reward rate change with change in DDM parameters. B) Mice show exploitative policy at stable conditions but switch to exploration during contingency changes. C) High conflict pushes the behavior towards exploration regime. D) iSPN stimulation imposes a slow exploration policy on mice whereas dSPN stimulation does not have a significan
Acknowledgements
JB is supported by ANR-CPJ-2024DRI00039. TV, JBad, JBah, EAY and JER are partly supported by NIH awards R01DA053014 and R01DA059993 as part of the CRCNS program. JER is partly supported by NIH award R01NS125814, also part of the CRCNS program.
References
[1] Freeze, B. S., Kravitz, A. V., Hammack, N., Berke, J. D., & Kreitzer, A. C. (2013). https://doi.org/10.1523/JNEUROSCI.1278-13.2013
[2] Geddes, C. E., Li, H., & Jin, X. (2018). https://doi.org/10.1016/j.cell.2018.06.012
[3] Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). https://doi.org/10.3389/fninf.2013.00014
[4] Nassar, M. R., Wilson, R. C., Heasly, B., & Gold, J. I. (2010). https://doi.org/10.1523/JNEUROSCI.0822-10.2010
[5] Vaghi, M. M., Luyckx, F., Sule, A., Fineberg, N. A., Robbins, T. W., & De Martino, B. (2017). https://doi.org/10.1016/j.neuron.2017.09.006
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P014: Investigating the mechanisms underpinning behavioral resilience using an extended Multi-agent Reinforcement learning model
Sunday July 6, 2025 17:20 - 19:20 CEST
P014 Investigating the mechanisms underpinning behavioral resilience using an extended Multi-agent Reinforcement learning model


Chirayush Mohanty*, Priya Gole*, Sanket Houde, Aadya Umrao, Pragathi Priyadharsini Balasubramani
Translational Neuroscience and Technology Labs, IIT Kanpur, India

*co-first authors
Email: cmohanty21@iitk.ac.in

Introduction: Reinforcement learning models of choice behavior focus specifically on expected-reinforcement-based learning and decision making and, to our knowledge, have not explored well the reward-maximization strategy under energy and social constraints, or whether subjective policy relates to a person's ability to adapt well during difficult times. In particular, we asked whether participants' risk taking, the influence of resources (intake of food energy) on decisions, or social conformity bias can explain their resilience levels.
Methods: We performed, for the first time, a repeated experimental design before and after the lunch period with school children aged 13-15 years (N=32, males = 21), followed by computational modeling to understand the effects of risk-taking ability, food-energy resource modulation, and conformity with a partner's choices in our participants. The task tested participants' trade-off between maximizing reward magnitude and reward frequency (losses/gains), as in Balasubramani et al. (2022). We also obtained information on participants' personality through the Big 5 questionnaire, adapted for their age. We built a multi-agent reinforcement learning (MARL) model to investigate the relationship between the meta-parameters: exploration index, social conformity bias computed based on the marginal value theorem, and resource level index, in explaining the choice dynamics.
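As a loose illustration of how an exploration index and a conformity bias could jointly shape choice, the sketch below softmax-mixes an agent's own values with its partner's choice frequencies; this mixing rule and all parameter values are our assumptions, not the authors' MARL model.

```python
# Hedged sketch: softmax policy over own action values, blended with the
# partner's empirical choice probabilities via a conformity weight.
import numpy as np

rng = np.random.default_rng(1)

def choose(q, partner_freq, beta, conformity):
    """q: own action values; partner_freq: partner's choice probabilities;
    beta: inverse temperature (higher = less exploration);
    conformity: social bias in [0, 1]."""
    own = np.exp(beta * q) / np.exp(beta * q).sum()
    policy = (1 - conformity) * own + conformity * partner_freq
    return rng.choice(len(q), p=policy / policy.sum())

q = np.array([0.2, 0.6])            # values learned via a standard delta rule
partner = np.array([0.8, 0.2])      # partner mostly picks option 0
print(choose(q, partner, beta=2.0, conformity=0.22))  # conformity ~ Results
```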
Results: We found that the extent of reward-magnitude maximization in choices correlated with resilience (Spearman r = 0.37, p = 0.035), and social conformity showed a weaker, non-significant association with resilience (r = -0.27, p = 0.12). In particular, the extent of choosing the option with frequent losses was negatively related to openness and extraversion (p < 0.001), while the extent of choosing the minimum expected reward with maximum risk was related to neuroticism (p = 0.001). Our MARL model was fit to capture the reward-maximization and social-conformity behavior, and it yielded a population exploration index of 0.85 ± 0.12 across blocks, and a social conformity or influential bias of 0.22 ± 0.83 (0 ± 0.82) in the competitive (cooperative) block, respectively.
Discussion: Our MARL model suggests that increased resilience in our population may be explained by two distinct, block-dependent patterns. The social bias did not seem to matter for resilience in the cooperation block, where instead a higher exploration index related to resilience levels. In the competitive block, resilience was exhibited by those who conform to others' values and explore less, or by those who do not conform with others but explore more. Furthermore, resilience levels were positively related to the social-conformity bias measures, and interestingly, we find that the increase in resource availability after lunch specifically increased the extent of social bias.




Figure 1.
Acknowledgements
We are thankful to the Kendriya Vidyalaya school at IIT Kanpur, Principal R.C. Pandey, and all supporting teachers for giving us the permission and assisting us to conduct this study.

References


1.Balasubramani, P.P*., Walke, A., Grennan, G., Purpura, S., Perley, A., Ramanathan, D., Coleman, T., & Mishra, J. (2022). Simultaneous gut-brain electrophysiology shows cognition and satiety specific coupling. Sensors, 22(23), 9242. https://doi.org/10.3390/s22239242 *corresponding

2.Balasubramani, P. P*., Diaz-Delgado, J., Grennan, G., Alim, F., Zafar-Khan, M., Maric, V., ... & Mishra, J*. (2022). Distinct neural activations correlate with maximization of reward magnitude versus frequency. Cerebral Cortex, 2022;, bhac482, https://doi.org/10.1093/cercor/bhac482 *corresponding


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P015: Dynamic Causal Modelling in Probabilistic Programming Languages
Sunday July 6, 2025 17:20 - 19:20 CEST
P015 Dynamic Causal Modelling in Probabilistic Programming Languages

Nina Baldy1*, Marmaduke Woodman1, Viktor Jirsa1, Meysam Hashemi1


1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France

*Email: nina.baldy@univ-amu.fr
Introduction

Dynamic Causal Modeling (DCM) [1] is a key methodology in neuroimaging for understanding the intricate dynamics of brain activities. It imposes a statistical framework that embraces causal relationships among brain regions and their responses to experimental manipulations, such as stimulation. In this work, we perform Bayesian inference on a neurobiologically plausible model that simulates event-related potentials observed in magneto-/electroencephalography data [2]. This translates into probabilistic inference of latent and observed states of a system described by a set of nonlinear ordinary differential equations (ODEs) and potentially correlated parameters.
Methods
Central to DCM is Bayesian model inversion, which aims to infer the posterior distribution of model parameters given the prior and observed data. Variational inference turns this into an optimization problem by approximating the posterior with a fixed-form density [3]. We consider three Gaussian approximations: the mean-field form, which neglects correlation between parameters; its full-rank counterpart; and the analytical Laplace approximation. We benchmark them against a state-of-the-art Markov chain Monte Carlo (MCMC) method, the No-U-Turn Sampler [4]. Finally, we benchmark the efficiency of each method as implemented in several Probabilistic Programming Languages (PPLs) [5] in terms of effective samples per computational unit.
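A minimal NumPyro sketch of the benchmarked routes (NUTS versus mean-field and full-rank Gaussian guides) on a toy model follows; the real DCM ODE forward model would replace the `pred` line, and all names are ours.

```python
# Hedged sketch: gradient-based MCMC vs. two variational approximations
# in one PPL; numbers of samples/steps are illustrative.
import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro import optim
from numpyro.infer import MCMC, NUTS, SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal, AutoMultivariateNormal

def model(y=None):
    theta = numpyro.sample("theta", dist.Normal(jnp.zeros(2), 1.0))
    pred = jnp.tanh(theta[0]) + theta[1] ** 2   # toy nonlinear forward model
    numpyro.sample("y", dist.Normal(pred, 0.1), obs=y)

y_obs = jnp.array(0.8)

# Gradient-based MCMC (NUTS)
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(0), y=y_obs)

# Variational: mean-field vs. full-rank Gaussian guide
for guide in (AutoNormal(model), AutoMultivariateNormal(model)):
    svi = SVI(model, guide, optim.Adam(1e-2), Trace_ELBO())
    svi_result = svi.run(random.PRNGKey(1), 2000, y=y_obs)
```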

Results
Our investigation shows that model inversion in DCM extends beyond variational approximation frameworks, demonstrating the effectiveness of gradient-based MCMC. We observe close alignment between MCMC NUTS and full-rank variational in terms of posterior distributions and model comparison. Our results demonstrate significant improvements in the effective sample size per computational time unit, with PPLs showing advantages over traditional implementations. Additionally, we propose solutions to mitigate issues related to multi-modality in posterior distributions, such as initializing at the tail of the prior distribution, and weighted stacking [6] of chains for improved inference.

Discussion
Previous research on MCMC methods for Bayesian model inversion in DCM highlighted challenges with both gradient-free and gradient-based approaches [7, 8]. However, we found that the ability to combine probabilistic modeling with high-performance computational tools offers a promising solution to the challenges of high-dimensional, non-linear models in DCM. Future work should extend to whole-brain models and fMRI data, which pose additional challenges for both MCMC and variational methods.





Acknowledgements
This research has received funding from EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project), No. 101137289 (Virtual Brain Twin Project), and government grant managed by the Agence Nationale de la Recherche reference ANR-22-PESN-0012 (France 2030 program).

References
[1] https://doi.org/10.1016/S1053-8119(03)00202-7
[2] https://doi.org/10.1016/j.neuroimage.2005.10.045
[3] https://doi.org/10.1080/01621459.2017.1285773
[4] https://doi.org/10.48550/arXiv.1111.4246
[5] https://doi.org/10.1145/2593882.2593900
[6] https://doi.org/10.48550/arXiv.2006.12335
[7] https://doi.org/10.1016/j.neuroimage.2015.03.008
[8] https://doi.org/10.1016/j.neuroimage.2015.07.043


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P016: Heterogeneous topologies in in silico networks are necessary to model the emergent dynamics of human-derived fully excitatory neuronal cultures
Sunday July 6, 2025 17:20 - 19:20 CEST
P016 Heterogeneous topologies in in silico networks are necessary to model the emergent dynamics of human-derived fully excitatory neuronal cultures

Valerio Barabino*1, Francesca Callegari1, Sergio Martinoia1, Paolo Massobrio1,2

1Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genova, Genova, Italy
2National Institute for Nuclear Physics (INFN), Genova, Italy


*Email: valerio.barabino@edu.unige.it

Introduction
Murine neuronal cultures have been the gold standard for in vitro models, but their outcome is not always translatable to the human brain, especially in personalized medicine. Human-induced pluripotent stem cells (hiPSCs) offer a promising alternative [1]. This model requires extensive characterization, and in vitro multi-electrode array (MEA) recordings alone may not capture all relevant parameters. Computational modeling can complement these experiments, offering insight into the mechanisms behind peculiar electrophysiological activities or pathological conditions [2]. This work aims to infer the underlying mechanisms behind the emergent firing-profile pattern in excitatory hiPSC neuronal networks coupled to MEAs [3].
Methods
We modeled 100 Hodgkin-Huxley neurons with short-term depressing synapses. To reproduce the self-sustained spontaneous activity observed in vitro, we introduced noise and external DC currents to allow for the alternation of two phases: short periods of high-frequency firing involving the whole network, and long periods of asynchronous low-frequency spiking. We explored the role of external triggers, the interplay between synaptic conductances (AMPA and NMDA) and synaptic depression, and network topology in recreating the average in vitro cumulative firing pattern. To account for the heterogeneity of biological networks, we introduced different connectivity rules, distinguishing between incoming and outgoing links.
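A minimal sketch of a depressing synapse in Tsodyks-Markram form follows, with illustrative parameters rather than the values tuned in this study.

```python
# Hedged sketch: a resource fraction x recovers with tau_rec and a fraction
# U is consumed at each presynaptic spike; the transmitted efficacy is U*x.
import numpy as np

dt, tau_rec, U = 0.1, 800.0, 0.4          # ms, ms, release fraction

def depressing_synapse(spike_times, t_end):
    """Return (time, efficacy) delivered at each presynaptic spike."""
    x, t, out = 1.0, 0.0, []
    spikes = iter(sorted(spike_times))
    nxt = next(spikes, None)
    while t < t_end:
        x += dt * (1.0 - x) / tau_rec       # recovery between spikes
        if nxt is not None and t >= nxt:
            out.append((t, U * x))          # transmitted efficacy
            x *= (1.0 - U)                  # resource consumed by the spike
            nxt = next(spikes, None)
        t += dt
    return out

# A fast burst depresses transmission; efficacy recovers during silence.
print(depressing_synapse([10, 20, 30, 40, 600], t_end=700))
```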
Results
Noise emerged as the best trigger for network bursts, allowing a good balance between random spiking and bursting activity, with an in vitro-like variability of inter-network-burst intervals. A lower AMPA conductance than NMDA was necessary, as NMDA ensured a broader operability range for in vitro-like activity. The optimal trade-off between the NMDA contribution and synaptic depression was found near a transition state, implying that small parameter changes can shift the system into different regimes. To shape the cumulative firing-pattern profiles, heterogeneous topologies were introduced, distinguishing afferent and efferent connectivity. The most in vitro-like profile arose from scale-free afferent and random efferent connections.
Discussion
Consistent with previous studies [4], our findings suggest that the nature of in vitro hiPSC network bursts is governed by a mechanism of noise amplification, controlled by a pulse of activity that is randomly nucleated and propagates throughout the network. Regarding connectivity, scale-free afferents imply that a small subset of "privileged" neurons receives most of the inputs (hubs), thus acting as central regulators and influencing the network's overall activity. Notably, these hubs exhibited more tonic firing, effectively acting as pacemakers that initiate network bursts, as similarly identified in [5]. However, in our case this property is structural, not an intrinsic dynamic property of single neurons.




Acknowledgements
The authors thank dr. Giulia Parodi (University of Genova) for supplying the hiPSCs recordings. This work was supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), Project MNESYS (PE0000006)—A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022).
References
1. https://doi.org/10.1016/j.stemcr.2021.07.001
2. https://doi.org/10.1101/2024.05.23.595522
3. https://doi.org/10.1088/1741-2552/acf78b
4. https://doi.org/10.1038/nphys2686
5. https://doi.org/10.1007/s00422-010-0366-x
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P017: Unraveling the neural mechanisms of behavior-related manifolds in a comprehensive model of primary motor cortex circuits
Sunday July 6, 2025 17:20 - 19:20 CEST
P017 Unraveling the neural mechanisms of behavior-related manifolds in a comprehensive model of primary motor cortex circuits

Roman Baravalle1*, Valery Bragin1,5, Nikita Novikov4, Wei Xu2, Eugenio Urdapilleta3, Ian Duguid2, Salvador Dura-Bernal1,4
1 Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, USA
2 Centre for Discovery Brain Sciences, University of Edinburgh, Edinburgh, UK
3 Centro Atómico Bariloche & Instituto Balseiro, Bariloche, Argentina
4 Center for Biomedical Imaging & Neuromodulation, The Nathan Kline Institute for Psychiatric Research
5 Brain Simulation Section, Charité - Universitätsmedizin Berlin, Berlin, Germany

*Corresponding Author: roman.baravalle@downstate.edu



Introduction
Accumulating evidence suggests that low-dimensional neural manifolds in the primary motor cortex (M1) play a crucial role in generating motor behavior. These latent dynamics, emerging from the collective activity of M1 neurons, are remarkably consistent across animals performing the same task. However, the specific cell types, cortical layers, and biophysical mechanisms underlying these representations remain largely unknown. Understanding these manifolds is essential for characterizing neural computations underlying behavior and has implications for developing stable and easy-to-train brain-machine interfaces (BMIs) for spinal cord injury.
Methods
We previously developed a realistic computational model of M1 circuits on NetPyNE/NEURON [1], incorporating detailed corticospinal neuron models responsible for transmitting motor commands to the spinal cord. This model was validated against in vivo spiking and local field potential data, demonstrating its ability to generate accurate predictions and provide insights into brain diseases. We further showed that M1 activity could be represented in low-dimensional manifolds, which varied according to behavioral states and experimental manipulations. These embeddings revealed clear clustering related to behavior and inactivation experiments (e.g., noradrenergic or thalamic input lesions), with high correlations between low- and high-dimensional representations.
Results
In this work, we extended the M1 model by incorporating two new interneuron types and tuning it to reproduce neural manifolds observed in vivo during a mouse joystick reaching task. Neuropixels probes recorded spiking activity in M1 and the ventrolateral thalamus, allowing us to jointly analyze neural patterns and joystick trajectories. We constructed a decoder using the CEBRA method [2] to predict movement trajectories from spiking activity and LFP and explored different model tuning strategies, including varying long-range inputs and modifying circuit connectivity via global optimization.
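A minimal sketch of this decoding step using the published CEBRA API [2] plus an off-the-shelf regressor follows; hyperparameters and the placeholder arrays are ours, not the study's settings.

```python
# Hedged sketch: behavior-supervised CEBRA embedding of spiking data,
# then trajectory decoding from the latent space.
import numpy as np
import cebra
from sklearn.neighbors import KNeighborsRegressor

spikes = np.random.rand(10000, 120)      # stand-in for binned Neuropixels data
joystick = np.random.rand(10000, 2)      # stand-in for joystick x/y

model = cebra.CEBRA(model_architecture="offset10-model",
                    output_dimension=3,
                    batch_size=512,
                    max_iterations=5000)
model.fit(spikes, joystick)              # behavior-conditioned embedding
z = model.transform(spikes)              # low-dimensional latent trajectories

decoder = KNeighborsRegressor(n_neighbors=25).fit(z, joystick)
print(decoder.score(z, joystick))        # in practice: score on held-out trials
```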
Discussion
Reproducing experimental behavior-related neural manifolds in large-scale cortical models enables linking neural dynamics across scales (membrane voltages, spikes, LFPs, EEG) to behavior, experimental manipulations, and disease. This approach helps refine models, characterize the relationship between latent dynamics and specific cell types, and ultimately deepen our understanding of how brain circuits generate motor behavior.
Acknowledgements
This work is supported by NIBIB U24EB028998 and NYS DOH1-C32250GG-3450000 grants


References
[1] https://doi.org/10.1016/j.celrep.2023.112574
[2] https://doi.org/10.1038/s41586-023-06031-6



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P018: Orientation Bias and Abstraction in Working Memory: Evidence from Vision Models and Behaviour
Sunday July 6, 2025 17:20 - 19:20 CEST
P018 Orientation Bias and Abstraction in Working Memory: Evidence from Vision Models and Behaviour

Fabio Bauer*¹, Or Yizhar¹,², Bernhard Spitzer¹,²
¹Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development,
Berlin, Germany
²Technische Universität Dresden, Dresden, Germany
*Email: bauer@mpib-berlin.mpg.de
Introduction

Working memory (WM) for visual orientations shows behavioral bias, where remembered orientations are repelled from the cardinal axes. These canonical biases are well-documented for grating stimuli within 180° space [1-4]. WM maintenance of orientation information has been shown to involve lower-level visual processing [5-9]. However, in recent work we showed that orientation biases are also found with real-world objects in 360° space, which points to a high level of abstraction [10]. Can such abstraction and bias be explained by visual processing alone? Here, we examine whether orientation biases for real-world objects emerge in computer vision models of the ventral visual stream [11,12] and compare them with behavioral reports in a WM task.
Methods
We compared activations from a range of neural network models: brain-inspired CNNs [13], established feedforward CNNs [14-16], and vision transformers [17,18]. Each model was shown 144 natural objects, each with a distinct principal axis (not rotationally symmetric), rotated in 16 orientations spanning 360°. We used representational similarity analysis to compare the models' layer activations to idealized representations of bias in 180° and 360° orientation space. Results were compared with human behavioral reports from orientation WM tasks with the same kind of stimuli.
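A minimal sketch of the representational similarity analysis follows, assuming correlation-distance RDMs and idealized circular-distance model RDMs; the details are ours, not the study's exact pipeline.

```python
# Hedged sketch: build a representational dissimilarity matrix (RDM) from a
# layer's responses to the 16 rotations of an object, then Spearman-correlate
# it with idealized RDMs for 180-deg and 360-deg orientation codes.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

angles = np.arange(16) * 22.5                          # 16 steps over 360 deg

def model_rdm(period):
    """Idealized RDM: dissimilarity grows with circular distance mod period."""
    d = np.abs(angles[:, None] - angles[None, :]) % period
    d = np.minimum(d, period - d)
    return d[np.triu_indices(16, k=1)]

def layer_rdm(activations):
    """activations: (16, n_units) responses to the 16 orientations."""
    return pdist(activations, metric="correlation")    # 1 - Pearson r

acts = np.random.rand(16, 512)                         # stand-in activations
for period in (180.0, 360.0):
    rho, _ = spearmanr(layer_rdm(acts), model_rdm(period))
    print(f"{period:g} deg space: rho = {rho:.3f}")
```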
Results
Neural networks showed orientation biases in 180° space, which became stronger in deeper layers that have been suggested to model higher visual areas. In contrast, when analyzing the full 360° orientation space with natural objects, these same models showed no orientation bias at any layer. This failure across architectures reveals a fundamental limitation: while models can process orientation relationships in simple symmetric stimuli, they fail to recognize that differently shaped objects (like horizontal tables versus vertical towers) can share the same orientation. Our parallel human behavioral experiment showed that, unlike these models, people show orientation biases in working memory across the full 360° spectrum with similar natural objects.
Discussion
We found no evidence for a biased representation in 360° space in any layers of the vision models we tested. In contrast, human behavioral reports and eye-gaze patterns from WM experiments did show a clear 360° bias. This indicates that bias in our task emerges at the level of an abstracted stimulus feature (the object’s orientation relative to its real-life upright position), rather than low-level visual features. Our findings also suggest that with such real-world objects requiring abstraction, 360° orientation information is not represented in these most current models of visual processing. Future work should focus on validating these exploratory findings experimentally.



Acknowledgements
We acknowledge the Max Planck Institute for Human Development for providing computing resources and facilities. We also thank the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research (IMPRS COMP2PSYCH) for funding support. Additionally, we appreciate helpful discussions and comments from Felix Broehl and Ines Pont Sanchis.
References

1. doi.org/10.1037/h0033117
2. doi.org/10.1038/nn.2831
3. doi.org/10.1167/10.10.6
4. doi.org/10.1016/j.visres.2009.12.005
5. doi.org/10.1038/nature07832
6. doi.org/10.7554/eLife.94191.3
7. doi.org/10.1371/journal.pbio.3001711
8. doi.org/10.1080/13506285.2021.1915902
9. doi.org/10.1101/2023.05.18.541327
10. doi.org/10.1038/s41562-023-01737-z
11. doi.org/10.1073/pnas.1403112111
12. doi.org/10.1038/s41467-024-53147-y
13. doi.org/10.1101/408385
14. doi.org/10.1145/3065386
15. doi.org/10.1109/cvpr.2017.634
16. doi.org/10.48550/ARXIV.1409.1556
17. doi.org/10.48550/ARXIV.2112.12750
18. doi.org/10.48550/arXiv.2010.11929







Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P019: Distinguishing spatiotemporal scales within a connectome reveals integration and segregation efficiency in global patterns of neuronal activity
Sunday July 6, 2025 17:20 - 19:20 CEST
P019 Distinguishing spatiotemporal scales within a connectome reveals integration and segregation efficiency in global patterns of neuronal activity

Diego Becerra*1,2, Ignacio Ampuero1,2, Pedro Mediano3, Christopher Connor4, Andrea Calixto2, & Patricio Orio1,2

1 Valparaíso Neural Dynamics Laboratory, Faculty of Sciences, Universidad de Valparaíso, Chile
2 Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Chile
3 Department of Computing, Imperial College London, United Kingdom
4 Brigham & Women’s Hospital, Harvard Medical School, Boston, USA.


* Email: becerra.q.diego@gmail.com
Introduction

Neurons in the brain communicate in different ways, and thus connectomes can be conceived as overlapping dissimilar networks depending on the type of signal being transmitted. One crucial difference between modes of signalling is given by the connectivity timescales, yielding four layers of paths (ordered from fastest to slowest): gap junctions, amino acid, monoaminergic, and peptidergic transmitters. Caenorhabditis elegans, a 302-neuron nematode, is an excellent model for exploring both topological and functional properties of the interaction between layers of networks, because its full connectome is known, along with much of its electrophysiology.

Methods
A full structural connectome of C. elegans was built from the latest empirical works available. Functional connectomes were built from two C. elegans 'whole-brain' calcium imaging datasets of global states: npr-1 mutants undergoing quiescence and wakefulness [2], and QW1217 mutants undergoing 4% and 8% isoflurane anesthesia [1].
We analyzed integration and segregation by applying network topology measures to the datasets and to the connectome layers. We also developed a partial network decomposition (PND) algorithm, which analyzes the shortest paths between nodes of a pair of overlapping networks. We then compared the network properties of the C. elegans connectomes with latticized and randomized surrogates.
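As an illustration of the path-classification idea behind PND (our reading of the abstract, not the authors' implementation), the toy sketch below compares shortest-path lengths within each connectome layer alone and in their union:

```python
# Toy path classification for a pair of overlapping network layers. A shorter
# path in the union than in either layer alone suggests synergy; equally short
# paths in both layers suggest redundancy; a path carried by only one layer is
# a unique contribution. Illustrative only.
import networkx as nx

def classify_pair(gA, gB, u, v):
    union = nx.compose(gA, gB)
    dA = nx.shortest_path_length(gA, u, v) if nx.has_path(gA, u, v) else float("inf")
    dB = nx.shortest_path_length(gB, u, v) if nx.has_path(gB, u, v) else float("inf")
    dU = nx.shortest_path_length(union, u, v) if nx.has_path(union, u, v) else float("inf")
    if dU < min(dA, dB):
        return "synergistic"       # combining layers opens a shorter route
    if dA == dB < float("inf"):
        return "redundant"         # both layers provide an equally short route
    if min(dA, dB) < float("inf"):
        return "unique"            # only one layer carries (or shortens) the path
    return "disconnected"

gA = nx.Graph(); gA.add_nodes_from(range(3)); gA.add_edge(0, 1)
gB = nx.Graph(); gB.add_nodes_from(range(3)); gB.add_edge(1, 2)
print(classify_pair(gA, gB, 0, 2))  # "synergistic": neither layer alone links 0 and 2
```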
Results
While the peptidergic connectome is dense, the others are sparse. Applying PND, we determined whether a path between nodes is redundant, uniquely contributed, or synergistic for a pair of spatiotemporal scales. Unique paths are predominant in all pairs of scales; the highest redundancy is found between electrical and amino acid transmission, and the highest synergy between electrical and monoaminergic transmission.

Empirical pairs of connectomes are more synergistic than their latticized or randomized surrogates, suggesting that the empirical network yields an improvement in efficiency. Comparing segregation and integration between structural (SC) and functional (FC) connectivity, FC of asleep and anesthetized worms is closer to SC than the FC of awake worms.
Discussion
We were able to characterize complementary (synergistic), redundant, and unique paths between nodes of the connectomes. Yet, the recent integration of gene-expression datasets and ligand-receptor interactions reveals a pervasive extra-synaptic transmission network. Discovering the effect of differences in connectivity density between the peptidergic and the other scales of neurotransmission thus requires including the temporal dimension: both by using empirical 'whole-brain' datasets portraying different global states (wakefulness, sleep, anesthesia) and through a mathematical model using the full topology of the network, which is currently being implemented.



Figure 1. (A) Center: Shortest paths of the empirical connectome layers favor synergy as the distance between nodes increases, compared to latticized (left) and randomized (right) versions. Binarizing top Pearson correlations of awake vs. asleep (B) and anesthetized vs. awake (C) time series shows that structural segregation (left columns) and integration (right columns) values are closer to those of asleep worms.
Acknowledgements
Fondo Nacional de Desarrollo Científico y Tecnológico (FONDECYT): Patricio Orio, grant number 1241469; ANID-Basal: Patricio Orio, grant number AFB240002; ANID Doctoral Fellowship: Diego Becerra, 21210914.
References
1. Chang, A. S., Wirak, G. S., Li, D., Gabel, C. V., & Connor, C. W. (2023). Measures of Information Content during Anesthesia and Emergence in the Caenorhabditis elegans Nervous System. Anesthesiology, 139(1), 49–62. https://doi.org/10.1097/ALN.0000000000004579
2. Nichols, A. L. A., Eichler, T., Latham, R., & Zimmer, M. (2017). A global brain state underlies C. elegans sleep behavior. Science, 356(6344), 1247–1256. https://doi.org/10.1126/science.aam6851


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P020: A Multi-Compartment Computational Approach to Cerebellar Circuit Dysfunction in Autism
Sunday July 6, 2025 17:20 - 19:20 CEST
P020 A Multi-Compartment Computational Approach to Cerebellar Circuit Dysfunction in Autism

Danilo Benozzo*1, Martina F. Rizza1, Danila Di Domenico1, Giorgia Pellavio1, Filippo Marchetti1, Egidio D’Angelo1,2, Claudia Casellato1

¹ Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
² Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: danilo.benozzo@unipv.it
Introduction

Modeling brain dynamics requires addressing processes that span different temporal and spatial scales [1]. This is crucial when studying phenomena at the network level that are consequences of pharmacological or pathological alterations occurring at the single-cell level, such as changes in ionic or synaptic currents. Our aim is to study how single-cell dynamics affect circuit dynamics in a mouse model of autism (IB2 knock-out, KO), within the context of the cerebellar cortical microcircuit. Cerebellar involvement in autism spectrum disorders (ASD) is well documented, with an association between cerebellar damage and an increased risk for ASD [2].
Methods
We re-parameterized a wild-type (WT) granule cell (GrC) multi-compartment model [3] to match the empirical properties of IB2-KO GrCs [4]. At the network level, we employed a bottom-up approach, by placing all the cell types that characterize the cerebellar cortex, preserving their physiological morphology, density and connection affinity. On the simulation side, the activity of each cell type was reproduced through a multi-compartment model interfaced with the NEURON simulator [5]. The entire process was managed by the Brain Scaffold Builder framework [6,7].
Results
In the WT GrC model we increased the Na and K maximum conductances (gmax) to match the higher inward/outward currents in IB2 GrCs. Tonic glutamate levels [glu] at mossy-fiber-GrC synapses and the NMDA gmax were adjusted to replicate experimental I-f curves and NMDA currents, predicting [glu] at 11.2 µM and a 4x NMDA gmax increase. The IB2 GrC model was integrated into the canonical cerebellar circuit, assuming no other cell changes (empirical IB2 Purkinje cell (PC) firing rate = 51.8 spks/s, std 11.7, not significant vs. WT). Network comparisons revealed greater stimulus spread through the granular layer (Fig. 1B). Fig. 1C shows peri-stimulus histograms for both circuits under different inputs, predicting an overall firing increase (rates from 9.5x in GrCs to 1.6x in PCs).
Discussion
This bottom-up modelling framework enabled us to construct a representative microcircuit of the mouse cerebellar cortex, featuring a granular layer that replicates the alterations empirically observed in the IB2-KO model. This multiscale approach allows us to predict how the circuit dynamics respond to single-cell model modifications. In the granular layer, our results reflect the spatially expanded, higher E/I balance around IB2-KO GrCs observed in [4]. To further validate whole-circuit activity, we are currently comparing our predictions with in vitro MEA recordings from both WT and IB2-KO mice, in spontaneous regime and under mossy-fiber impulse stimulations.



Figure 1. A: Effect of NMDA gmax (k_gmax NMDA) and ambient glutamate ([glu]) on NMDA currents in IB2-KO GrCs. B: How a 20 Hz stimulus, applied to mossy fibers (mfs) within the circular target (r = 40 µm), propagates through the granular layer. C: Firing rates of each cell type under three conditions: no stimulus, an 8 Hz Poisson basal input to all mfs, and basal input plus high-frequency stimulation targeted to 15 mfs.
Acknowledgements
Work supported by NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) – A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022)
References
[1]https://doi.org/10.1162/netn_a_00343
[2]https://doi.org/10.1016/j.ijdevneu.2004.09.006
[3]https://doi.org/10.3389/fncel.2017.00071
[4]https://doi.org/10.1523/jneurosci.1985-18.2019
[5]https://doi.org/10.1017/cbo9780511541612
[6] https://doi.org/10.1038/s42003-022-04213-y
[7] https://ebrains.eu/service/brain-scaffold-builder/


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P021: Studying the effects of TMS on the cerebellum with a realistic model of the cerebellar cortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P021 Studying the effects of TMS on the cerebellum with a realistic model of the cerebellar cortex

Eleonora Bernasconi*1, Nada Yousif1, Volker Steuber1

1Biocomputation Research Group, University of Hertfordshire, Hatfield, United Kingdom

*Email: e.bernasconi@herts.ac.uk
Introduction

Transcranial magnetic stimulation (TMS) has been used for over 30 years to modulate cortical excitability, and it is currently being applied to other brain regions, such as the cerebellum [1,2]. TMS is a promising technique that could benefit people with dystonia, essential tremor, and Parkinson's disease [1,2]. However, research in this field provides conflicting evidence on the effects of TMS on the cerebellum [1,3]. Our goal is to study the underlying mechanisms of TMS on the cerebellum via a computational approach.

Methods
We stimulated a previously developed model of the cerebellar cortex consisting of granule, Golgi and Purkinje cells [4]. To ensure uniform stimulus application, we replaced the granule cells with a multi-compartmental model by Diwakar et al. [5]. We applied the stimulus as a voltage using the extracellular mechanism in NEURON [6], which requires multi-compartmental models. We stimulated all compartments of all neurons with a sinusoidal waveform, where field decay depends only on the distance between the source of the applied electric field (located at the origin) and the stimulated compartment [7]. We tested five stimulus frequencies commonly used in TMS protocols on the cerebellum: 1, 5, 10, 20 and 50 Hz.
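A minimal NEURON sketch of this kind of extracellular sinusoidal stimulation is shown below; the membrane mechanism, geometry and amplitude are illustrative placeholders, not the cerebellar model used here.

```python
# Sketch: play a sinusoidal extracellular potential into a compartment via
# NEURON's `extracellular` mechanism (illustrative parameters only).
from neuron import h
import numpy as np
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.L = soma.diam = 20
soma.insert("hh")               # placeholder membrane dynamics
soma.insert("extracellular")    # exposes e_extracellular on each segment

f, amp = 10.0, 5.0              # stimulus frequency (Hz), local amplitude (mV)
t = np.arange(0.0, 1000.1, 0.1)                      # ms
tvec = h.Vector(t)
evec = h.Vector(amp * np.sin(2 * np.pi * f * t * 1e-3))
# Play the waveform into the soma's extracellular node; in the full model the
# amplitude would be scaled by 1/distance from the field source at the origin.
evec.play(soma(0.5)._ref_e_extracellular, tvec, True)

v = h.Vector().record(soma(0.5)._ref_v)
h.finitialize(-65)
h.continuerun(1000)
```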
Results
For stimulus frequencies up to 20 Hz, the firing rate of the Purkinje cell oscillates in response to the sinusoidal stimulus, as expected (Figure 1A-C). During the positive phase of the stimulus, the cell’s soma hyperpolarizes, while during the negative phase, it depolarizes.
Increasing the stimulus frequency up to 20 Hz amplifies the modulation. The variance of the cell's instantaneous firing rate is 0.4, 5.9, 18.9, 39.0 and 29.6 Hz² for stimulus frequencies of 1, 5, 10, 20 and 50 Hz, respectively.
At 50 Hz, the cell’s instantaneous firing rate no longer follows the stimulus waveform, and instead exhibits a pronounced excitation with weaker inhibition (Figure 1D). This excitation is much stronger than that obtained at lower stimulus frequencies.
Discussion
The behaviour of our Purkinje cell model aligns with the findings of Rattay et al [8], suggesting that our model can serve as a useful tool to study how TMS influences cerebellar activity.

We show that stimulus frequency can significantly impact the cell's behaviour, highlighting the importance of carefully selecting this parameter in clinical settings. High-frequency stimulation exerts a strong excitatory influence, which may have important implications for therapeutic use. Future work will extend the simulation to the granule and Golgi cells. We plan to stimulate the network with a more realistic electric field generated using realistic anatomical head models. We will derive the electric field distribution using SimNIBS [9].



Figure 1. Instantaneous firing rate of the Purkinje cell (in blue) and waveform used to stimulate the cell (in orange). The amplitude of the pulse waveform is not to scale. The applied stimulus has a frequency of 5, 10, 20 and 50 Hz in panels A, B, C and D, respectively.
Acknowledgements
-
References
[1]https://doi.org/10.1016/j.brs.2017.11.015
[2]https://doi.org/10.1016/j.neubiorev.2017.10.006
[3]https://doi.org/10.3389/fnagi.2020.578339
[4]https://doi.org/10.1007/s10827-024-00871-5
[5]https://doi.org/10.1152/jn.90382.2008
[6]https://doi.org/10.1017/CBO9780511541612
[7]https://doi.org/10.1007/978-3-031-08443-0_8
[8]https://doi.org/10.1109/TBME.1986.325670
[9]https://doi.org/10.1007/978-3-030-21293-3_1




Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P022: Pattern mismatch detection by transient EI imbalance on single neurons: Experiments and multiscale models.
Sunday July 6, 2025 17:20 - 19:20 CEST
P022 Pattern mismatch detection by transient EI imbalance on single neurons: Experiments and multiscale models.

Aditya Asopa1 and Upinder S. Bhalla1*


1NCBS-TIFR, Bangalore, India


*Email: bhalla@ncbs.res.in
Introduction

Changes in repetitive stimuli typically signal important sensory events, and many organisms exhibit mismatch detection through behavioral and physiological responses. Mismatch detection is a fundamental sensory and computational function, bringing attention and neural resources to bear on novel inputs. Previous work [1,2] suggests that sensory adaptation mediated by short-term plasticity (STP) may be a mechanism for mismatch detection; however, this does not factor in details of excitatory-inhibitory (EI) balance, network connectivity, or the time courses of E and I inputs.

Methods
We performed optogenetic stimulation of CA3 pyramidal neurons in acute mouse hippocampal brain slices to provide precise spatial and temporal patterns of activity as proxies for input ensembles. We monitored E and I synaptic input in postsynaptic CA1 pyramidal neurons using voltage clamp at the I and E reversal potentials to separate the respective contributions. We used temporal and spatial patterns to parameterize a multiscale model of CA3 neurons, interneurons, and hundreds of synaptic boutons with independent stochastic chemical signaling controlling synaptic release onto a postsynaptic CA1 neuron. Simulations were performed using the MOOSE simulator [3].
Results
We parameterized the model in three stages. First, we built a Ca2+-triggered 4-step presynaptic release model which (with different parameters) could be applied both to E and I synapses using voltage-clamp recordings over a burst. Second, we fit CA1 neuronal and synaptic properties to burst synaptic input at different frequencies. Third, we fit CA1 readouts of Poisson trains of optical patterned input at CA3, to constrain network parameters. This model predicted that transitions in spatially patterned input sequences, such as AAAABBBBCCCC, could be detected by the network. We confirmed this experimentally. Finally, we showed that spiking CA1 neurons had even sharper mismatch tuning and could detect pattern transitions between theta bursts.
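For illustration, a generic deterministic sketch of a Ca2+-triggered four-step release scheme of the kind fitted in the first stage is given below; the rate constants and the rate-equation form are placeholders, not the values fitted to the voltage-clamp data.

```python
# Generic four-step Ca-binding release chain S0 -> ... -> S4 -> release,
# integrated as rate equations (illustrative rates, not the fitted MOOSE model).
import numpy as np

def step_release(ca, dt=1e-4, T=0.05, kon=1e8, koff=2000.0, krel=2000.0):
    """Return cumulative release for constant [Ca2+] `ca` (M) over T seconds."""
    s = np.zeros(5); s[0] = 1.0          # all release machinery starts unbound
    released = 0.0
    for _ in range(int(T / dt)):
        ds = np.zeros(5)
        for i in range(4):               # Ca binding S_i -> S_{i+1} and unbinding
            flux = kon * ca * s[i] - koff * s[i + 1]
            ds[i] -= flux; ds[i + 1] += flux
        rel = krel * s[4]                # release only from the fully bound state
        ds[4] -= rel
        released += rel * dt
        s += ds * dt                     # forward-Euler update
    return released

print(step_release(ca=10e-6))            # cumulative release at 10 uM Ca2+
```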
Discussion
EI balance controls neuronal excitability both across time scales and across strengths and patterns of input [4]. To this we add the dimension of plasticity at short time scales (~100 ms) relevant for mismatch detection [1] and for sensory sampling coupled to the theta rhythm. We provide an experimentally tuned, open-source resource of a CA3-CA1 model of input-output relationships down to the molecular level, which is lightweight enough to run on a laptop at only ~20x real time. We propose that a transient tilt in EI balance is a more nuanced, biochemically and biophysically grounded mechanism for mismatch detection, and that it accounts for numerous observations of timing, intensity, and circuit configuration.




Acknowledgements
AA and USB are at NCBS-TIFR which receives the support of the Department of Atomic
Energy, Government of India, under Project Identification No. RTI 4006. The study received funding from SERB Grant CRG/2022/003135-G.
References
1: https://doi.org/10.1016/j.clinph.2008.11.029
2: https://doi.org/10.1111/j.1469-8986.2005.00256.x
3: https://doi.org/10.3389/neuro.11.006.2008
4: https://doi.org/10.7554/eLife.43415
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P023: Applying a Machine Learning Method to Detect Miniature EPSCs
Sunday July 6, 2025 17:20 - 19:20 CEST
P023 Applying a Machine Learning Method to Detect Miniature EPSCs

Krishan Bhalsod*1, Cengiz Günay1


1Dept. Information Technology, Georgia Gwinnett College, Lawrenceville, Georgia, USA

*kbhalsod@ggc.edu
Introduction

This study aims to use machine learning to find miniature excitatory postsynaptic currents (EPSCs) in Drosophila neurons in order to identify behavioral markers of seizures. Using MATLAB, we are training a machine learning model on electrophysiological data to recognize patterns of postsynaptic events that indicate potential seizure activity. We have faced challenges applying this method, and we plan to present these in our poster. The results of this research may help develop a further understanding of seizure mechanisms in Drosophila that could translate into a deeper understanding of neurological disorders in humans.

Methods
The data we are addressing come from intracellular recordings of invertebrate neurons, specifically from Drosophila (fruit fly) motor neurons [1]. Not only do these recordings have a low SNR, but the miniature excitatory postsynaptic current (EPSC, or "mini") events we are looking for also come in various magnitudes, owing to the distance of each event's origin within the neuron's morphology. In the present work, our aim is to adapt a novel machine learning and optimal filtering method (MOD) to automatically detect these minis [2].
Results
The purpose of MOD is to generate a filter that takes the original data and removes noise, turning it into a raw detection trace that closely mirrors the manual scoring trace. The method leverages the Wiener-Hopf equations to derive an optimal filter for detecting postsynaptic events. In the MATLAB code, the optimal-filter equations are directly implemented. First, the program estimates the auto-correlation and cross-correlation from the training data to build a Toeplitz matrix Ry, and then it solves for the filter coefficients a. To correct for any timing differences between the recorded signal and the manual scoring, the algorithm computes filter coefficients for several time shifts and selects the delay that yields the best detection performance (e.g., highest AUC). Finally, a low-pass Hann window filter is applied to smooth the detection trace.
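A minimal Python reconstruction of this Wiener-Hopf step (our sketch, not the project's MATLAB code) is:

```python
# Estimate auto-/cross-correlations from training data, build the Toeplitz
# system Ry a = p, and solve for the filter coefficients a.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def wiener_filter(y, d, order=50):
    """y: raw recording, d: manual scoring trace, order: filter length."""
    n = len(y)
    # biased estimates of autocorrelation r[k] and cross-correlation p[k]
    r = np.array([np.dot(y[:n - k], y[k:]) / n for k in range(order)])
    p = np.array([np.dot(d[k:], y[:n - k]) / n for k in range(order)])
    return solve_toeplitz(r, p)          # exploits the Toeplitz structure of Ry

rng = np.random.default_rng(1)
d = (rng.random(10_000) < 0.002).astype(float)        # sparse "mini" event trace
y = lfilter([1.0], [1.0, -0.95], d) + 0.3 * rng.standard_normal(10_000)
a = wiener_filter(y, d)
detection = lfilter(a, [1.0], y)                      # raw detection trace
```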
Discussion
The main machine learning challenge here is filtering noise: electrophysiological recordings are typically not smooth. We therefore applied a 1-1,000 Hz bandpass filter to reduce the noise. However, we face a problem where the filtered signal oscillates and eventually flattens, most likely because the filtering algorithm removes low-magnitude events. As a result, the machine learning model has difficulty learning from the filtered signal and fails to recognize events, because the threshold is no longer reached.
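For reference, a bandpass of this kind can be sketched as follows (illustrative sampling rate and filter order; zero-phase filtering shown). Aggressive corners like these can flatten slow, low-magnitude events, which is the failure mode described above.

```python
# 1-1,000 Hz Butterworth bandpass, applied with zero-phase filtering.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 10_000                                   # sampling rate (Hz), assumed
sos = butter(4, [1.0, 1000.0], btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 7 * t) + 0.05 * np.random.default_rng(2).standard_normal(t.size)
filtered = sosfiltfilt(sos, raw)              # zero-phase: no added lag
```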



Figure 1. Example of recording where blue shaded areas highlight mini events. Time units in seconds. Y-axis units in pA.
Acknowledgements
The recordings used in this work were provided by Richard Baines from University of Manchester. We are grateful for providing student travel funding to Dr. Joseph Ametepe, Dean of the School of Science and Technology, and Dr. Sean Yang, Chair of the Information Technology Department at Georgia Gwinnett College. Students Jonathan Tran and Niecia Say provided valuable feedback for this project.
References
1. C. N. G. Giachello and R. A. Baines. Inappropriate neural activity during a sensitive period
in embryogenesis results in persistent seizure-like behavior. Curr Biol, 25(22):2964–2968, Nov 2015. doi: 10.1016/j.cub.2015.09.040.
2. X. Zhang, A. Schlögl, D. Vandael, and P. Jonas. MOD: A novel machine-learning optimal-filtering method
for accurate and efficient detection of subthreshold synaptic events in vivo. Journal of Neuroscience Methods, 357:109125, 2021. ISSN 0165-0270. doi: https://doi.org/10.1016/j.jneumeth.2021.109125.

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P024: Analysis of autoassociative dynamics in the hippocampus through a full-scale CA3-CA1 model
Sunday July 6, 2025 17:20 - 19:20 CEST
P024 Analysis of autoassociative dynamics in the hippocampus through a full-scale CA3-CA1 model

Giulia M. Boiani1,2*, Serena Giberti2, Lorenzo Tartarini3, Giampiero Bardella4, Sergio Solinas5, Stefano Ferraina4, Michele Migliore2, Jonathan Mapelli3, Daniela Gandolfi1


1Dipartimento di Ingegneria "Enzo Ferrari", Università degli Studi di Modena e Reggio Emilia, Italy
2CNR, Istituto di Biofisica, Palermo, Italy
3Dipartimento di Scienze Biomediche, Metaboliche e Neuroscienze, Università degli Studi di Modena e Reggio Emilia, Italy
4Dipartimento di Fisiologia e Farmacologia, Sapienza, Università di Roma, Roma, Italy
5Dipartimento di Scienze Biomediche, Università di Sassari, Sassari, Italy


*Email: giuliamaria.boiani@unimore.it

Introduction
The hippocampus is a key brain structure for memory formation and spatial navigation. We present a full-scale, point-neuron, realistic model of the mouse CA3-CA1 network [1]. The structural validity of the network has been assessed by applying graph theory, whereas functional validation has been performed by incorporating a parameterized point neuron and a custom-developed synapse with short- and long-term synaptic plasticity. We demonstrate the ability of the modeled CA3 to operate as an autoassociative network that can reconstruct complete memories from partial cues [2]. These results confirm the role of CA3 in pattern completion and provide a benchmark to investigate information processing in the hippocampal formation.
Methods
The network connectivity was obtained by adopting a morpho-anatomical strategy based i) on the intersection of abstract geometrical morphologies and ii) on extending the tubular structures of CA3 pyramidal cell (PC) axons to target CA1 PCs while accommodating the hippocampal anatomy [3]. The custom-developed synapse was implemented in NESTML and includes short-term dynamics and long-term STDP [4]. Autoassociativity was investigated by applying a theta-gamma stimulation protocol to train a subset of 400 out of 4000 CA3 PCs. The network's tendency to balance local interconnectedness (clustering) and efficient information routing (short path lengths) was assessed using a key graph-theoretic metric, the small-world coefficient.
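A minimal sketch of the small-world coefficient computation, sigma = (C/C_rand)/(L/L_rand) with random surrogates, is given below; this is illustrative (Erdős–Rényi surrogates, as in the null models of Fig. 1D), not the paper's pipeline.

```python
# Small-world coefficient: clustering C and characteristic path length L of the
# network, normalized by averages over random (Erdos-Renyi) surrogates.
import networkx as nx

def small_world_sigma(G, n_rand=10, seed=0):
    C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)
    n, p = G.number_of_nodes(), nx.density(G)
    Cr = Lr = 0.0
    for i in range(n_rand):
        R = nx.erdos_renyi_graph(n, p, seed=seed + i)
        if not nx.is_connected(R):     # keep the giant component for path lengths
            R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
        Cr += nx.average_clustering(R) / n_rand
        Lr += nx.average_shortest_path_length(R) / n_rand
    return (C / Cr) / (L / Lr)

G = nx.watts_strogatz_graph(200, 8, 0.1, seed=1)    # toy network
print(f"sigma = {small_world_sigma(G):.2f}")         # sigma > 1: small-world
```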
Results
The CA3-CA1 network (Fig. 1A-B) showed an outdegree distribution (Fig. 1C) compatible with experiments. Interestingly, CA1 and CA3 exhibited distinct connectivity profiles: a hub-like organization potentially facilitating the integration of information in CA1, and a nearly fully connected, hub-less architecture in CA3, consistent with its role in pattern completion. Moreover, CA3 showed a high clustering coefficient (Fig. 1D), while both regions exhibited small-world properties, with CA3 having a higher value (Fig. 1E). The autoassociativity test showed that CA3 (Fig. 1F) can indeed retrieve complete memories upon presentation of degraded inputs; complete retrieval occurred when at least 20% of trained neurons were stimulated (Fig. 1G).
Discussion
These results validate the accuracy of the model. The network can perform pattern completion effectively exploiting autoassociativity. The analysis of the network's topology suggests that CA1 acts as a hub-like connector, while CA3 shows signatures of small-worldness with an efficient architecture balancing local segregation and global reach. Our biologically realistic network exhibits a non-trivial topology allowing the emergence of functional properties, which could be altered in pathological conditions together with topology [5]. These results offer insights into the functions of hippocampal circuitry, paving the way to the use of computational models to investigate physiological and pathological conditions.



Figure 1. A CA3-CA1 scaffold B Simulation activity snapshots C CA3 and CA1 outdegree distributions. D Clustering coefficients of CA3 and CA1 networks compared to equivalent Erdős–Rényi (ER) and Watts–Strogatz (SW) null models. E Small-World Coefficients. F Neuronal activation over time in recall tests with varying fractions of stimulated neurons. G Evaluation of recall performance.
Acknowledgements
The University of Modena and Reggio Emilia FAR-DIP-2024-GANDOLFI E93C24000500005 to DG. The Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union – NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS-Italy” and Project PE0000006, “MNESYS” to JM), the Ministry of University and Research, PRIN 2022ZY5RXB CUP E53D2301172 to JM.
References
[1]https://doi.org/10.1038/s41598-022-18024-y


[2]https://doi.org/10.1007/s10827-024-00881-3


[3]https://doi.org/10.1038/s43588-023-00417-2


[4]https://doi.org/10.1038/ncomms11552


[5]https://doi.org/10.1016/j.clinph.2006.12.002
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P025: The opportunities and limitations of AI as a tool in neuroscience: how does the nose know what it knows?
Sunday July 6, 2025 17:20 - 19:20 CEST
P025 The opportunities and limitations of AI as a tool in neuroscience: how does the nose know what it knows?

James M. Bower
Biome Farms, Veneta Oregon
Introduction

There has been a dramatic increase in the use of AI in the analysis of neurobiological data. For example, a graphical neural network trained to predict odor percepts from molecular structures recently suggested that olfactory discrimination may be based on the metabolic relationships between molecules rather than their physico-chemical structures (Qian et al., 2023). Although these authors were unaware of it, Chris Chee in my laboratory had discovered the same result 25 years earlier using a different kind of analysis of olfactory perception (Chee-Ruiter, 2000). This talk will describe each approach and the neurobiological significance of the results, then consider the value and limitations of AI as an abstract data analysis tool.


Methods
In the first study, a graphical neural network constructed a map from molecular structure to odor descriptors, with results tested by comparing model outputs to those of human experts (Lee et al., 2023). Chemical relationships within the AI-produced odor map were then examined (Qian et al., 2023). The second approach used a cross-entropy analysis of the co-occurrence of individual descriptors in the human-identified profiles of 822 molecules. The resulting directed graph was then analyzed for the locations of odorants containing nitrogen and sulfur (Chee-Ruiter, 2000).
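A toy sketch in the spirit of this co-occurrence analysis follows (our illustration only: the original used an asymmetric cross-entropy measure, whereas the score below is a symmetric pointwise mutual information over fake data).

```python
# Score descriptor-descriptor association from a binary molecule-by-descriptor
# matrix and keep strong links as graph edges. Data and threshold are fake.
import numpy as np

rng = np.random.default_rng(3)
X = (rng.random((822, 20)) < 0.1).astype(float)   # 822 molecules x 20 descriptors

p_i = X.mean(axis=0)                              # marginal descriptor probabilities
p_ij = (X.T @ X) / X.shape[0]                     # joint co-occurrence probabilities

# Pointwise mutual information: how much more often i and j co-occur than
# expected by chance (symmetric stand-in for the original directed measure).
with np.errstate(divide="ignore", invalid="ignore"):
    score = np.log2(p_ij / (p_i[:, None] * p_i[None, :]))
edges = np.argwhere(np.nan_to_num(score, neginf=0.0) > 1.0)  # strong links
print(f"{len(edges)} edges retained")
```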




Results
Both studies suggest that human olfactory discrimination reflects the metabolic relationships between molecules rather than their strict physico-chemical properties. Metabolically related but structurally dissimilar molecules were grouped together in the AI-generated map, while molecules containing sulfur and nitrogen co-localized in the directed graph. Although both studies reached similar conclusions, the cross-entropy analysis led directly to further studies of the binding properties of olfactory receptors as well as realistic modeling studies of the organization of efferent and intrinsic pathways within olfactory cortex, both suggesting that the olfactory system intrinsically "knows" about the metabolic structure of the world.

Discussion
The results of both studies suggest that the assumption, first proposed by the Roman poet and philosopher Lucretius in 50 B.C.E., that the olfactory system recognizes and categorizes odorant molecules based on their general physico-chemical properties is fundamentally wrong. Accordingly, at a minimum, the physico-chemically organized (i.e., carbon-chain-length) panels of odor stimuli traditionally used in olfactory experiments are unlikely to reveal how the olfactory system works. Beyond that, the additional studies conducted in our laboratory call into question whether the neurobiological basis for olfactory discrimination is learned or intrinsic, a question that cannot be addressed by the AI model.



Acknowledgements
I acknowledge the alpacas, emus, and horses that watch me every day as I work on my books and papers. Otherwise, I am completely self-funded, as I am simulating an 18th-century landed-gentry scientist.
References
Bailey, C. (1959) Lucreti De Rerum Natura Libri Sex, 2nd edition (Oxford Press)

Chee-Ruiter, C.W.J. (2000) The biological sense of smell: olfactory search behavior and a metabolic view for olfactory perception. Dissertation (Ph.D.), California Institute of Technology

Lee, B.K. et al. (2023) A principal odor map unifies diverse tasks in olfactory perception. Science 381:999.

Qian, W.W. et al. (2023) Metabolic activity organizes olfactory representations. eLife 12:e82502
Speakers
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P026: Prospective and retrospective coding in cortical neurons
Sunday July 6, 2025 17:20 - 19:20 CEST
P026 Prospective and retrospective coding in cortical neurons

Simon Brandt1, Paul Haider*,1, Mihai A. Petrovici1, Walter Senn1, Katharina A. Wilmesa,1, Federico Beniteza,1

1Department of Physiology, University of Bern, Switzerland
a: shared senior authorship

*Email: paul.haider@unibe.ch

Introduction
Brains can process sensory information from different modalities at astonishing speed; this is surprising, as already the integration of inputs through the membrane causes a delayed response (Fig. 1d). Experiments reveal a possible explanation for this fast processing, showing that neurons can advance their output firing rate with respect to their input current, a concept we refer to as prospective coding [1]. The combination of retrospective (delayed) and prospective coding enables neurons to perform temporal processing. While retrospective coding emerges from the inherent delays of neurons, the mechanisms underlying prospective coding are not completely understood.

Methods
In this work, we elucidate cellular mechanisms that can explain prospective firing in cortical neurons. We use simulation-based inference to investigate the parameters of the Hodgkin-Huxley model [2] with respect to its ability to fire prospectively and retrospectively. Based on this analysis, we derive a reduced model that allows us to manipulate the temporal response of Hodgkin-Huxley neurons. Furthermore, we derive rate-based neuron models that include adaptation processes on arbitrary time scales to investigate advances on longer time scales.
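As a minimal illustration of how prospective versus retrospective responses can be quantified (our sketch, not the paper's analysis), one can cross-correlate a sinusoidally modulated input with the output of a toy leaky integrator and read off the lag:

```python
# Quantify response timing: positive lag = delayed (retrospective) response.
# A plain low-pass membrane, as below, always lags its input.
import numpy as np

dt, tau, f = 1e-4, 0.02, 5.0                    # s, membrane tau, input freq (Hz)
t = np.arange(0, 2, dt)
inp = np.sin(2 * np.pi * f * t)

out = np.zeros_like(inp)                        # leaky integration of the input
for i in range(1, len(t)):
    out[i] = out[i - 1] + dt / tau * (inp[i] - out[i - 1])

lags = np.arange(-len(t) + 1, len(t)) * dt
xc = np.correlate(out - out.mean(), inp - inp.mean(), mode="full")
lag = lags[np.argmax(xc)]                       # expected: atan(2*pi*f*tau)/(2*pi*f)
print(f"response lag: {lag * 1e3:.1f} ms")      # ~ +18 ms here (retrospective)
```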


Results
We show that the spike generation mechanism can be the source for the prospective (advanced) or retrospective (delayed) response as shown for prospective firing (Fig. 1a) in cortical-like neurons [3,4] (Fig. 1b, green) and retrospective firing (Fig. 1c) in hippocampal-like neurons [5,6] (Fig. 1b, orange). Further, we analyse the Hodgkin-Huxley dynamics to derive a reduced model to manipulate the timing of the neuron’s output by tuning three parameters (Fig. 1d-h). We further show that slow adaptation processes, such as spike-frequency adaptation or deactivating dendritic currents, can generate prospective firing for inputs that undergo slow temporal modulations. In general, we show that adaptation processes at different time scales can cause advanced neuronal responses to time-varying inputs that are modulated on the corresponding time scales.


Discussion
The results of this work contribute to the understanding of how fast processing (prospective coding) and short-term memory (retrospective coding) can be achieved in the brain on the level of single neurons and might guide further experiments. Prospectivity and retrospectivity may be important for several cognitive functions. The interplay of the two provides a powerful framework for temporal processing by shifting signals in time. The insights are highly beneficial for biologically plausible learning algorithms used for temporal processing and their implementation on neuromorphic hardware [7-9].
Figure 1. (a) Hodgkin-Huxley neurons can be prospective for cortical neurons (b, green) and retrospective (c) for parameters fitted to hippocampal neurons (b, orange). (d) Because a neuron integrates input through its membrane, a response of the neuron is expected to be delayed by the membrane. If the output of a neuron can be advanced with respect to its input, a prospective mechanism needs to exist. With
Acknowledgements
We would like to express particular gratitude for the ongoing support from the Manfred Stärk Foundation. Our work has greatly benefited from access to the Fenix Infrastructure resources, which are partially funded through the ICEI project under the grant agreement No. 800858. This includes access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland.
References

1. https://doi.org/10.1093/cercor/bhm235
2. https://doi.org/10.1113/jphysiol.1952.sp004764
3. https://doi.org/10.1017/CBO9781107447615
4. https://doi.org/10.1016/0896-6273(95)90020-9
5. https://doi.org/10.1007/s10827-007-0038-6
6. https://doi.org/10.1017/CBO9780511895401
7. https://doi.org/10.7554/elife.89674
8. https://doi.org/10.48550/arXiv.2110.14549
9. https://doi.org/10.48550/arXiv.2403.16933


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P027: A functional network model for body column neural connectivity in Hydra
Sunday July 6, 2025 17:20 - 19:20 CEST
P027 A functional network model for body column neural connectivity in Hydra

Wilhelm Braun*1,2, Sebastian Jenderny4, Christoph Giez5,6, Dijana Pavleska5, Alexander Klimovich5,

Thomas C.G. Bosch5, Karlheinz Ochs4, Philipp Hövel7, Claus C. Hilgetag1,3


1Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany


2 Faculty of Engineering, Department of Electrical and Information Engineering, Kiel University, Kaiserstraße 2, 24143, Kiel, Germany


3Department of Health Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA, 02215, USA


4Chair of Digital Communication Systems, Ruhr-Universität Bochum, Universitätsstraße 150, 44801, Bochum, North Rhine-Westphalia, Germany


5Zoological Institute, University of Kiel, Christian-Albrechts-Platz 4, 24118 Kiel, Germany


6The Francis Crick Institute, London NW1 1BF, UK


7Theoretical Physics and Center for Biophysics, Saarland University, Campus E2 6, Saarbrücken, 66123, Germany


*Email: wilhelm_braun@icloud.com



Introduction

Hydra is a non-senescent animal with a relatively small number of cell types and overall low structural complexity, but a surprisingly rich behavioral repertoire. The main drivers of Hydra's behavior are neurons arranged in two nerve nets comprising several distinct neuronal populations. Among these populations is the ectodermal nerve net N3, which is located throughout the animal. It has been shown that N3 is necessary and sufficient for the complex behavior of somersaulting [1] and is also involved in Hydra feeding behavior [2, 3]. Despite being a behavioral jack-of-all-trades, there is insufficient knowledge of the coupling structure of neurons in N3, its connectome, and its role in activity propagation and function.


Methods
We construct a model connectome for the part of N3 located on the body column by using pairwise distance- and connection angle-dependent connectivity rules. Using experimental data on the placement of neuronal somata and the spatial dimensions of the body column, we design a generative network model combining non-random placement of neuronal somata and the preferred orientation of primary neurites. Additionally, we study activity propagation in N3 using the simple excitable Susceptible-Excited-Refractory (SER) model and a more complex neuromorphic Morris-Lecar model.
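A minimal sketch of generic SER dynamics on an arbitrary graph is given below (discrete-time version with illustrative parameters, not the fitted N3 model):

```python
# Susceptible-Excited-Refractory (SER) dynamics: S -> E via an excited neighbor
# (or spontaneously), E -> R in one step, R -> S stochastically.
import numpy as np
import networkx as nx

def run_ser(G, steps=50, p_spont=0.001, p_recover=0.3, seed=0):
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(G)
    state = np.zeros(G.number_of_nodes(), dtype=int)   # 0=S, 1=E, 2=R
    state[rng.integers(len(state))] = 1                # seed one excited node
    history = [state.copy()]
    for _ in range(steps):
        nxt = state.copy()
        excited_input = A @ (state == 1)
        nxt[(state == 0) & (excited_input > 0)] = 1    # neighbor-driven excitation
        nxt[(state == 0) & (rng.random(len(state)) < p_spont)] = 1  # spontaneous
        nxt[state == 1] = 2                            # excited becomes refractory
        nxt[(state == 2) & (rng.random(len(state)) < p_recover)] = 0
        state = nxt
        history.append(state.copy())
    return np.array(history)

H = run_ser(nx.random_geometric_graph(300, 0.1, seed=4))  # spatial toy network
print("mean fraction excited:", (H == 1).mean())
```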


Results
We show [4] that the generative network model yields good agreement with experimental data. We then show that the simple excitable dynamical SER model generates directed, short-lived, fast propagating patterns of activity. In addition, by slightly changing the parameters of the dynamical model, the same structural network can also generate persistent activity. Finally, we use a neuromorphic circuit based on the Morris-Lecar model to show that the same structural connectome can, in addition to through-conductance with biologically plausible time scales, also host a dynamical pattern related to the complex behavioral pattern of somersaulting.


Discussion
Our work provides a systematic construction of the structure of a subnetwork of Hydra's nervous system. By assuming that the ectodermal body column network in Hydra is essentially two-dimensional, we designed a generative network model that agrees with measured structural quantities and supports two different activity modes, each presumably controlling different types of behavior in Hydra. We speculate that such different dynamical regimes act as dynamical substrates for the different functional roles of N3, allowing Hydra to exhibit behavioral complexity with a relatively simple nervous system that does not possess modules or hubs.






Acknowledgements
WB would like to thank Kayson Fakhar, Fatemeh Hadaeghi and Mariia Popova for helpful discussions. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 434434223 – SFB 1461.

References
[1] https://doi.org/10.1016/j.cub.2023.03.047
[2] https://doi.org/10.1016/j.cub.2023.10.038
[3] https://doi.org/10.1016/j.celrep.2024.114210
[4] https://doi.org/10.1101/2024.06.25.600563

Speakers

Wilhelm Braun

Junior Research Group leader, CAU Kiel, Department of Electrical and Information Engineering
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P028: The Topological Significance of Functional Connectivity State Transitions
Sunday July 6, 2025 17:20 - 19:20 CEST
P028 The Topological Significance of Functional Connectivity State Transitions

Spencer Brown¹, Celine Zalamea², Daniel Selski, PhD³

¹ College of Osteopathic Medicine, Pacific Northwest University of Health Sciences, Yakima, United States of America
² College of Osteopathic Medicine, Pacific Northwest University of Health Sciences, Yakima, United States of America
³College of Osteopathic Medicine, Pacific Northwest University of Health Sciences, Yakima, United States of America

Email: smbrown@pnwu.edu

Introduction

Pathology in dynamic functional connectivity is well documented but lacks explanatory rationale. Dynamic functional connectivity parallels the existence of discrete state transitions, the study of which may provide further elucidation. In this study, we utilized Topological Data Analysis (TDA) to observe the shape of whole-brain networks and the transitions between them. Further, we aim to understand the structure-function relationship of brain states with neuronal physiology.
Methods
fMRI scans were obtained and preprocessed from the Human Connectome Project motor task dataset. States were defined relative to an anticipatory visual cue. We initially used the Euclidean distance (L2) to cluster states hierarchically. We then converted brain states to Vietoris-Rips complexes to identify their topology. Distances between these complexes were measured using the Wasserstein distance and aggregated using hierarchical clustering.
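A compact reconstruction of this pipeline using the ripser and persim Python packages (which may differ from the authors' toolchain) might look like:

```python
# Build Vietoris-Rips persistence diagrams from two brain-state point clouds
# and compare their topologies with the Wasserstein distance. Data are fake.
import numpy as np
from ripser import ripser
from persim import wasserstein

rng = np.random.default_rng(5)
state_a = rng.standard_normal((60, 10))        # placeholder: time points x ROIs
state_b = rng.standard_normal((60, 10)) + 0.5  # placeholder: a second state

dgm_a = ripser(state_a, maxdim=1)["dgms"][1]   # H1 (loop) persistence diagram
dgm_b = ripser(state_b, maxdim=1)["dgms"][1]
d = wasserstein(dgm_a, dgm_b)                  # distance between topologies
print(f"Wasserstein distance (H1): {d:.3f}")
```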
Results
We demonstrate that each L2 state label must have only a single topology, but the same topology may exist in multiple states. Under this assumption, many combinations of L2 label and topology were observed to be invalid. This is reconciled by an intrinsic hierarchy of brain states.
Discussion
We observe that brain states may be drastically different networks but share the same topology. For instance, resting versus task states may exhibit the same topology. In contrast, similar states may also differ in topology. We find that topology may have a unique role in neuronal physiology and provides a potential framework for further studies that explore brain dynamics.



Acknowledgements
N/A.
References
N/A.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P029: Spiking Neural Networks for Controlling a Biomechanically Realistic Arm Model
Sunday July 6, 2025 17:20 - 19:20 CEST
P029 Spiking Neural Networks for Controlling a Biomechanically Realistic Arm Model

Philip Bröhl*1,2, Junji Ito2, Ira Assent1,3, Sonja Grün2,4,5

1Institute for Advanced Simulation (IAS-8), Research Center Juelich, 52425 Jülich, Germany
2 Institute for Advanced Simulation (IAS-6), Research Center Juelich, 52425 Jülich, Germany
3Department of Computer Science, Aarhus University, 8200 Aarhus N, Denmark
4JARA Institute Brain Structure Function Relationship, Research Center Juelich, 52425 Jülich, Germany
5 Theoretical Systems Neurobiology, RWTH Aachen Univ., Aachen, Germany

*Email: p.broehl@fz-juelich.de
Introduction


A typical feature of neurons in the motor cortex of mammalian brains is that they are tuned to a particular direction of movement, i.e., they exhibit the most spikes when a body part is moved in a particular direction, called the preferred direction (PD). It has been reported that the distribution of preferred directions among motor cortex neurons depends on the constraints on the movements: when the arm may move freely in 3D, it is uniform [1,2], but when it is constrained to 2D movement, it is bimodal [3,4]. In this work, we aim to reveal the neuronal mechanism underlying the emergence of a bimodal PD distribution by studying an artificial network of spiking neurons trained to control a biomechanically realistic arm model.
Methods

Our model is implemented in TensorFlow [5] and consists of 300 recurrent leaky integrate-and-fire neurons with 6 linear readout neurons that control the 6 muscles in a biomechanical arm model [4]. We train it to output muscle activation signals to perform a 2D reaching task. We study its output space by applying Principal Component Analysis (PCA) to the outputs and relate the directions in this space to directions in the joint-angle arm-acceleration space via Canonical Correlation Analysis (CCA). We also study the effect of each recurrent neuron on the output dynamics by interpreting its outgoing connection weights as a direction in the space of the recurrent dynamics and projecting this direction onto the output space via the readout weights.
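For illustration, the PCA/CCA readout analysis can be sketched as follows (random placeholder data, not the trained model):

```python
# PCA on the 6-D muscle output, then CCA linking the dominant output directions
# to joint-angle acceleration directions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(6)
muscle_out = rng.standard_normal((5000, 6))         # time x 6 muscle activations
joint_acc = muscle_out[:, :2] @ rng.standard_normal((2, 2)) \
            + 0.1 * rng.standard_normal((5000, 2))  # time x 2 joint accelerations

pca = PCA(n_components=2).fit(muscle_out)
pcs = pca.transform(muscle_out)                     # dominant output directions

cca = CCA(n_components=2).fit(pcs, joint_acc)
U, V = cca.transform(pcs, joint_acc)                # maximally correlated pairs
r = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", np.round(r, 2))
```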
Results

The model neurons show directional tuning with bimodally distributed PDs. The output dynamics of the model are well captured by the first two principal components (PCs). The first PC aligns to the two opposite directions in the joint angle acceleration space which agree with the hand movement directions corresponding to the peaks of the bimodal PD distribution. The effects of neurons on the output dynamics concentrate around these directions. Connections between neurons with similar output effects tend to be strongly excitatory. Taken together, the core architecture of the recurrent network is characterized by two clusters of neurons with strong excitatory connections in each cluster. Connections between the clusters are mostly inhibitory.
Discussion

The analysis shows that two mutually inhibiting clusters of excitatory connections underlie the control of the biomechanical arm model in a 2D reaching task by a recurrent network of spiking neurons. Since each of the two clusters is composed of neurons with similar output effect directions, which we have shown to be related to hand movement directions, the existence of the two clusters naturally explains the bimodality of the hand movement PD directions. This leads to the question whether similar structures are employed in the mammalian brain to control movements, which would be subject to future research.





Acknowledgements

This work was partially performed as part of the Helmholtz School for Data Science in Life, Earth and Energy (HDS-LEE) and received funding from the Helmholtz Association of German Research Centres. This research was partially funded by the NRW-network 'iBehave', grant number: NW21-049.


References

1. https://doi.org/10.1523/JNEUROSCI.10-07-02039.1990
2. https://doi.org/10.1523/JNEUROSCI.08-08-02913.1988
3.https://doi.org/10.1016/j.neuron.2012.10.041
4. https://doi.org/10.7554/eLife.88591.3
5. https://doi.org/10.5281/zenodo.4724125


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P030: Synaptic Plasticity Mechanisms and Dynamics in the Cerebellar Spiking Microcircuit
Sunday July 6, 2025 17:20 - 19:20 CEST
P030 Synaptic Plasticity Mechanisms and Dynamics in the Cerebellar Spiking Microcircuit

Abdul H Butt*1, Marialaura De Grazia1, Emiliano Buttarazzi1, Dimitri Rodarie1, Claudia Casellato1, Egidio D'Angelo1
1Department of Brain and Behavioural Science, University of Pavia, Pavia, Italy

*Email: abdulhaleembutt85@gmail.com

Introduction

Short-term plasticity (STP) is crucial for regulating excitatory and inhibitory information flow in the cerebellar cortex by modulating synaptic efficiency over seconds to minutes, acting as a dynamic filter for information processing. It shapes synaptic activity, alongside long-term plasticity (LTP) that arises from sustained stimulation. Both firing rates and spike timing affect plasticity, with distinct mechanisms across brain regions. This study introduces the Tsodyks-Markram STP model in cerebellar circuits reconstructed with detailed structural properties, and aims to integrate STP and LTP to explore their combined effects on cerebellar dynamics [1, 2].


Methods
The canonical cerebellar circuit, reconstructed and simulated as a point-neuron network using the Brain Scaffold Builder (BSB) interfaced with NEST [3,4], has been enhanced by incorporating short-term plasticity (STP) [1,2]. This involved adjusting the utilization parameters U, u, x, τ_fac, and τ_rec to ensure proper facilitation and depression. The synaptic models were tested in both the in-vitro and awake canonical models of the mouse olivocerebellar microcircuit, focusing on baseline firing rates and on responses under stimulation protocols inspired by Pavlovian paradigms: an input to mossy fibers (mf) at 40 Hz within the time window [1000–1260] ms and an impulse on the climbing fibers originating from the inferior olive (IO) as a 500 Hz burst within the time interval [1250–1260] ms [5].
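For reference, the event-driven Tsodyks-Markram update can be sketched as below (generic textbook form; the U, τ_fac and τ_rec values are illustrative, not the tuned cerebellar parameters):

```python
# Tsodyks-Markram synapse: u (facilitation) and x (resources/depression) evolve
# between spikes and jump at spikes; the released fraction u*x sets PSC size.
import numpy as np

def tm_psc_amplitudes(spike_times, U=0.2, tau_fac=100.0, tau_rec=300.0):
    """Relative PSC amplitude u*x at each presynaptic spike (times in ms)."""
    u, x, last = 0.0, 1.0, None
    amps = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u *= np.exp(-dt / tau_fac)                   # facilitation decays
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)  # resources recover
        u += U * (1.0 - u)                               # spike-triggered jump
        amps.append(u * x)
        x -= u * x                                       # consume resources
        last = t
    return np.array(amps)

train = np.arange(0, 200, 25.0)                 # 40 Hz burst, as in the mf input
print(np.round(tm_psc_amplitudes(train), 3))    # facilitation, then depression
```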

Results
The single-cell pipeline confirmed that facilitation and depression function as expected. At high-frequency stimulation, facilitation prevails at the glomeruli-granule cell (Glom-GrC) synapses (Fig. 1A). The same phenomenon was investigated throughout the canonical circuit for each connection. Mean firing rates of each population (Fig. 1B-C) show that STP plays a crucial role in the modulation of neuronal activity within the cerebellar cortical model. Purkinje cells (PCs) exhibit increased firing rates with STP, suggesting facilitation enhances their excitability. Basket cells and deep cerebellar nuclei neurons (DCN, both _P, projecting, and _I, inhibitory) exhibit reduced firing rates when STP is present, indicating that synaptic depression reduces activity over time. The inferior olive (IO) neurons also show a significant increase in their mean firing rate in response to the IO stimulus when STP is introduced.

Discussion
The results show the significant impact of STP on signal propagation. Future work will explore the combination of STP and LTP, which operate on two different time scales, and their interactions in sensorimotor loops during motor learning protocols [6, 7]. Specifically, LTP plasticity rules have been introduced on synapses from parallel fibers to Purkinje cells and from parallel fibers to molecular layer interneurons [6], utilizing the awake version of the canonical cerebellar circuit.





Figure 1. Figure 1 A) Single-synapse STP dynamics, B) Canonical circuit in-vitro and awake (with STP vs static), response at step-like mf stimulus. C) Raster plot of the awake circuit (with STP vs static) with mf-IO stimulus paradigm
Acknowledgements
·The European Union’s Horizon Europe Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project)

·“National Centre for HPC, Big Data and Quantum Computing” (Project CN00000013 PNRR MUR – M4C2 – Fund 1.4 - Call “National Centers” - law decree n. 3138 16 december 2021)
References
[1] https://doi.org/10.1073/pnas.94.2.719
[2] https://doi.org/10.1152/jn.00258.2001
[3] https://doi.org/10.1038/s42003-022-04213-y
[4] https://doi.org/10.5281/zenodo.7243999
[5] https://doi.org/10.1371/journal.pcbi.1011277
[6] https://doi.org/10.1109/TBME.2015.2485301
[7] https://doi.org/10.1371/journal.pcbi.1008265


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P031: Adaptive Cerebellar Networks in Sensorimotor Loops
Sunday July 6, 2025 17:20 - 19:20 CEST
P031 Adaptive Cerebellar Networks in Sensorimotor Loops

Emiliano Buttarazzi* ¹, Marialaura De Grazia¹, Margherita Premi³, Egidio D’Angelo¹ ², Alberto Antonietti³, Claudia Casellato¹ ²

¹ Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
² Digital Neuroscience Center, IRCSS Mondino Foundation, Pavia, Italy
³ Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy

*Email: emiliano.buttarazzi@unipv.it

Introduction


Humans can generate accurate and appropriate motor behavior under many different environmental conditions [1]. Robotics is well suited to modelling this behavior, controlled by embodied human-like brain networks, with monitorable and adjustable parameters [2][3][4]. To link cellular-level phenomena to brain architecture, we develop an efficient neuro-inspired controller for sensorimotor learning and control, based on specific brain neural structures and dynamics, employing a tandem configuration of forward and inverse internal models represented by cerebellar networks. A possible fall-out is simulating pathological states of patients with cerebellar-related movement disorders and predicting outcomes of neuro-rehabilitative treatments.
Methods

The system (Fig. 1A) is made of two main components: the BRAIN, represented as a system of spiking neural networks (SNNs), and the BODY, represented in PyBullet, with proper interfaces (MUSIC or NRP). The cerebellar SNNs (Fig. 1B) [5], using the population-specific EGLIF neuron model [6], present a structure with "agonist-antagonist" functional subsections and include properly tuned long-term plasticity rules, to achieve an adaptive and physiologically accurate cerebellar model. The computational software is the Brain Scaffold Builder (BSB) [7], interfacing with NEST.
Results

Long-term plasticity (depression and potentiation, LTD and LTP) has been implemented at the synapses from parallel fibers to Purkinje cells and from parallel fibers to molecular layer interneurons. A Classical Eye Blinking Conditioning (CEBC) paradigm with 15 consecutive trials was carried out to generate the learning curves in temporal association. Proper modulation of population firing rates across trials emerges (Fig. 1C). The extension to an upper-limb reaching task is under testing.
Discussion

Ongoing steps include the integration of short-term plasticity synaptic models with the long-term rules, and an optimal balance between forward and inverse cerebellar blocks, for stable and effective learning. Moreover, task complexity will be increased, simulating a reaching-grasping task with object-based action. Also, an integration of more physiological blocks (cortex and premotor cortex) is under development. Lastly, modification of the structural and functional parameters of the cerebellar modules to mimic cerebellar patients’ alterations is planned.





Figure 1. Figure 1: A) System block diagram, divided into BRAIN and BODY sections. B) Reconstruction of the cerebellar SNN, with the different populations ("plus" and "minus" only for differentiation between agonist and antagonist, respectively). C) Firing modulation driven by long-term plasticity rules, without (left) and with (right) complex spike (teaching) stimulus.
Acknowledgements
Work supported by:
·Horizon Europe Program for Research and Innovation under GA No. 101147319 (EBRAINS 2.0);
·The Italian Ministry of Research through PNRR projects funded by the EU, “Fit for Medical Robotics” (Project PNC0000007 MUR - “Fit4MedRob” - law decree prot. n. 0001984, 9 December 2022).
EB is a PhD student (National program) in AI, XXXIX cycle, Università Campus Bio-Medico di Roma.
References

1. https://doi.org/10.1016/S0893-6080(98)00066-5
2. https://doi.org/10.3389/fnbot.2021.634045
3. https://doi.org/10.1371/journal.pone.0112265
4. https://doi.org/10.1155/2019/4862157
5. https://doi.org/10.1073/pnas.1716489115
6. https://doi.org/10.3389/fncom.2019.00068
7. https://doi.org/10.1038/s42003-022-04213-y


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P032: Universal coarse-to-fine transition across the developing neocortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P032 Universal coarse-to-fine transition across the developing neocortex

Lorenzo Butti*1, Deyue Kong1, Nathaniel Powell2, Bettina Hein1, Jonas Elpelt1, Haleigh Mulholland2, Gordon Smith2, Matthias Kaschube1

1FIAS (Frankfurt Institute for Advanced Studies), Frankfurt am Main, DE

2Department of Neuroscience, University of Minnesota, Minneapolis, USA


*Email: butti@fias.uni-frankfurt.de


Introduction

How cortical representations emerge in development is an unresolved problem in neuroscience. Recent work in ferret shows that during early development, spontaneous activity exhibits a modular organization that is highly similar across diverse cortical areas, from sensory cortices to higher-order association areas [1]. Moreover, this modular organization persists in all areas after eye and ear-canal opening (approx. postnatal day 30), but the organization also changes, suggesting a considerable refinement over development, part of which may be area-specific [2]. It is currently unclear how this refinement unfolds on the level of local neural circuits and what mechanisms might guide this maturation.


Methods
We examine the development of network organization across diverse cortical regions (V1, A1, S1, PPC, PFC), before (P21-24), around (P27-32) and after (P39-43) eye opening, using both widefield and 2-photon in vivo calcium imaging of spontaneous activity in the ferret.

To gain mechanistic insight, we employ Local Excitation/Lateral Inhibition (LELI) network models, following [3]. These models can both reproduce the modular structure of early cortical activity and account for the ability of developing cortical circuits to transform unstructured input into modular output.
Results

We find that in both sensory and association areas, networks exhibit a highly similar pattern of changes over development: spontaneous activity is initially highly modular, i.e. strongly correlated and low-dimensional in local populations, becoming less correlated and higher-dimensional with age.
These in vivo changes can be explained by a developmental increase in lateral inhibition strength in a LELI model. This allows feedforward inputs to engage a larger number of network states, consistent with the transition of cortical networks to external sensory activity during this period. Moreover, the increase in inhibition predicts a decrease in modular wavelength over this same developmental time, which we confirm in our experimental data.
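A toy LELI sketch illustrating this mechanism is given below (our simplified rate model; the kernel widths and the inhibition gain g_inh are illustrative):

```python
# Local excitation / lateral inhibition (LELI) rate model on a 2-D sheet:
# a narrow excitatory and a broader inhibitory Gaussian kernel shape activity
# into modular patterns; raising g_inh shrinks the modular wavelength.
import numpy as np
from scipy.ndimage import gaussian_filter

def leli_steady_state(g_inh, n=128, steps=200, seed=7):
    rng = np.random.default_rng(seed)
    a = rng.random((n, n))                               # unstructured input
    for _ in range(steps):
        exc = gaussian_filter(a, sigma=2.0, mode="wrap")  # local excitation
        inh = gaussian_filter(a, sigma=6.0, mode="wrap")  # broader inhibition
        a = np.clip(a + 0.1 * (exc - g_inh * inh - 0.01), 0, 1)
    return a

weak, strong = leli_steady_state(0.9), leli_steady_state(1.3)
# The shift in dominant spatial frequency can be read off the radially
# averaged power spectrum of each steady-state map.
```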
Discussion
Our findings indicate that the spontaneous activity in ferret cortex undergoes a developmental reorganization from coarser to finer-scaled organization, accompanied by a transition to more high-dimensional activity in both sensory and association areas. We propose that an increase in lateral inhibition serves as a common mechanism underlying cortical network refinement, and that this maturation leads to the expansion of representational capacity throughout the developing cortex.












Acknowledgements
We thank the members of the Kaschube lab and the Smith lab for useful discussions.


References
[1] N Powell, B Hein, D Kong, J Elpelt, HN Mulholland, M Kaschube, GB Smith (2024). https://doi.org/10.1073/pnas.2313743121

[2] N Powell, B Hein, D Kong, J Elpelt, HN Mulholland, R Holland, M Kaschube, GB Smith (2025). https://doi.org/10.1093/cercor/bhaf007

[3] HN Mulholland, M Kaschube, GB Smith (2024). https://doi.org/10.1038/s41467-024-48341-x


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P033: Modeling Corticospinal Input Effects on V1-FoxP2 Interneurons in a Go Task Using NetPyNE
Sunday July 6, 2025 17:20 - 19:20 CEST
P033 Modeling Corticospinal Input Effects on V1-FoxP2 Interneurons in a Go Task Using NetPyNE

Andres F. Cadena Parra1*, Michelle Sanchez Rivera3, Constantinos Eleftheriou3, Roman Baravalle2, Ian Duguid3, Salvador Dura-Bernal2,4
1Department of Biomedical Engineering. Universidad de los Andes, Bogota, Colombia
2Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, USA
3Centre for Discovery Brain Sciences & Simons Initiative for the Developing Brain, University of Edinburgh, Edinburgh, UK.
4Center for Biomedical Imaging & Neuromodulation, The Nathan Kline Institute for Psychiatric Research.


*Email: af.cadena@uniandes.edu.co
Introduction

The execution of movement involves a complex interplay of neural structures whose activity results in precise motor output. The motor cortex generates commands for voluntary movement, conveyed via corticospinal neurons (CSNs) to the spinal cord, where interneurons (INs) integrate sensory feedback and motor commands to fine-tune motor neuron activity [1]. Among these, V1 INs, particularly those expressing FoxP2, are crucial for inhibitory feedback in motor control [2]. Building on recent findings that a subset of CSNs exhibit decreased firing rates during movement, this study aims to investigate how these CSN inputs influence V1-FoxP2 interneurons, shedding light on spinal integration mechanisms for coordinated motor output.
Methods
An in silico model was developed to study the effects of convergent increased/decreased corticospinal input on V1-FoxP2 INs. A single-cell model was implemented in NetPyNE/NEURON, incorporating Na⁺, K⁺, and Ca²⁺ channels and AMPA dynamics. The cell’s morphology had a soma and four dendritic sections. The model was calibrated to in vitro current-clamp data via parameter optimization. Spike trains recorded during a “Go task” from two different CSN subpopulations were delivered to the V1-FoxP2 model, with simulated background activity consistent with in vivo observations [3]. Three conditions were tested: (1) increasing/decreasing input, (2) increasing/sustained input, and (3) increasing input only. Electrophysiological properties, like input resistance, were recorded over time.
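A schematic NetPyNE fragment of the kind of setup described; all geometry, channel parameters, weights, and spike times below are placeholders rather than the calibrated values, and only one CSN subpopulation is shown.

    from netpyne import specs, sim

    netParams = specs.NetParams()

    # Single V1-FoxP2 cell: soma plus four dendritic sections (placeholder geometry;
    # the calibrated model also includes Ca2+ channels via custom mechanisms).
    secs = {'soma': {'geom': {'L': 20, 'diam': 20}, 'mechs': {'hh': {}}}}
    for i in range(4):
        secs['dend%d' % i] = {'geom': {'L': 150, 'diam': 1.5},
                              'topol': {'parentSec': 'soma', 'parentX': 1.0, 'childX': 0.0},
                              'mechs': {'pas': {}}}
    netParams.cellParams['V1FoxP2'] = {'secs': secs}
    netParams.popParams['V1FoxP2_pop'] = {'cellType': 'V1FoxP2', 'numCells': 1}

    # One CSN subpopulation delivering precomputed "Go task" spike trains via VecStim;
    # a second subpopulation with decreasing rates would be added the same way.
    netParams.popParams['CSN_inc'] = {'cellModel': 'VecStim', 'numCells': 20,
                                      'spkTimes': [[100.0 + 10 * k for k in range(30)]] * 20}
    netParams.synMechParams['AMPA'] = {'mod': 'Exp2Syn', 'tau1': 0.5, 'tau2': 5.0, 'e': 0}
    netParams.connParams['CSN_inc->V1'] = {
        'preConds': {'pop': 'CSN_inc'}, 'postConds': {'pop': 'V1FoxP2_pop'},
        'weight': 0.001, 'delay': 1.0, 'synMech': 'AMPA', 'sec': 'dend0'}

    simConfig = specs.SimConfig()
    simConfig.duration = 500  # ms
    simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
    sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)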
Results
The model simulated in vivo V1-FoxP2 dynamics, where corticospinal input initially drives an increase in firing rate that peaks at movement onset, followed by a return to baseline. The in vivo condition optimizes the input-output relationship, exhibiting a high signal-to-noise ratio (SNR) post-movement and enabling a quicker return to baseline excitability. Additionally, background activity enhances the return to baseline of the V1-FoxP2 firing rate and input resistance. Notably, input resistance decreased progressively across time windows before, during, and after movement, making the neuron less susceptible to noise. The model further revealed that movement requires a specific ratio of increased and decreased CSN inputs.
Discussion
Our findings provide key insights into the neuronal computations that govern the integration of cortical inputs in the spinal cord. The model showed that in vivo-like corticospinal input enhances V1-FoxP2 activity time-locking to behavior without significantly reducing the SNR. This indicates a trade-off between temporal precision and firing rate strength to optimize motor control. Additionally, decreasing CSN input facilitates impedance recovery after movement, whereas in the sustained scenario, impedance fails to return to baseline. These results may inform future studies on the functional architecture of spinal circuits involved in motor control and rehabilitation, particularly in disorders affecting motor coordination.



Acknowledgements
This work was supported by the NIBIB U24EB028998 and NYS DOH01-C32250GG-3450000 grants. AFCP was supported by Universidad de los Andes through a Teaching Assistantship.
References
[1] Deska-Gauthier, D., & Zhang, Y. (2019). Functional diversity of spinal interneurons and locomotor control. Curr. Opin. Physiol., 8, 99–108. https://doi.org/10.1016/j.cophys.2019.01.005
[2] Bikoff, J. B., Gabitto, M. I., Rivard, A. F., et al. (2016). Spinal inhibitory interneuron diversity delineates motor microcircuits. Cell, 165, 207–219. https://doi.org/10.1016/j.cell.2016.01.027
[3] Schiemann, J., Puggioni, P., Dacre, J., et al. (2015). Behavioral state-dependent modulation of motor cortex output. Cell Rep., 11, 1319–1330. https://doi.org/10.1016/j.celrep.2015.04.042
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P034: Role of Synaptic Plasticity in the Emergence of Temporal Complexity in an Izhikevich Spiking Neural Network
Sunday July 6, 2025 17:20 - 19:20 CEST
P034 Role of Synaptic Plasticity in the Emergence of Temporal Complexity in an Izhikevich Spiking Neural Network

Marco Cafiso*1,2, Paolo Paradisi2,3

1Department of Physics 'E. Fermi', University of Pisa, Largo Bruno Pontecorvo 3, I-56127, Pisa, Italy
2Institute of Information Science and Technologies ‘A. Faedo’, ISTI-CNR, Via G. Moruzzi 1, I-56124, Pisa, Italy
3BCAM-Basque Center for Applied Mathematics, Alameda de Mazarredo 14, E-48009, Bilbao, BASQUE COUNTRY, Spain

*Email: marco.cafiso@phd.unipi.it
Introduction

Neural avalanches exemplify intermittent behavior in brain dynamics through large-scale regional interactions and are crucial elements of brain dynamical behaviors. Originally introduced in the Self-Organized Criticality framework, these intermittent complex behaviors can also be examined through Temporal Complexity (TC) theory. Computational neural network models have become central in the neuroscience field. Izhikevich’s neuron model [1] provides a powerful yet simple framework for simulating networks with over 20 brain-like dynamic patterns, enabling studies of normal and pathological conditions. Our work analyzes the temporal complexity of neural avalanches and coincidence events in an Izhikevich Spiking Neural Network, comparing systems with and without Spike-Time Dependent Plasticity (STDP) [2] processes.
Methods
A network of 1,000 Izhikevich neurons with an excitatory-to-inhibitory ratio of 4:1 was developed, with inhibitory synaptic connections designed to exert a stronger influence than their excitatory counterparts, reflecting physiological neural circuit dynamics. We subjected the network to six distinct input signals, including two containing complex events. We then measured and compared the temporal complexity of network responses both with and without STDP plasticity mechanisms activated. Our temporal complexity assessment methodology leverages neural avalanche and coincidence events to estimate multiple scaling indices [3]. These metrics provide quantitative measures of the system’s complexity.
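The network skeleton follows Izhikevich's published formulation [1]; the sketch below is a numpy transcription with illustrative parameters (4:1 E:I ratio, stronger inhibitory weights), with the STDP rule and avalanche analysis omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(1)
    Ne, Ni = 800, 200                          # 4:1 excitatory:inhibitory ratio
    re, ri = rng.random(Ne), rng.random(Ni)
    a = np.r_[0.02 * np.ones(Ne), 0.02 + 0.08 * ri]
    b = np.r_[0.2 * np.ones(Ne), 0.25 - 0.05 * ri]
    c = np.r_[-65 + 15 * re ** 2, -65 * np.ones(Ni)]
    d = np.r_[8 - 6 * re ** 2, 2 * np.ones(Ni)]
    # Inhibitory columns drawn stronger than excitatory ones, as in the model.
    S = np.c_[0.5 * rng.random((Ne + Ni, Ne)), -1.5 * rng.random((Ne + Ni, Ni))]

    v = -65.0 * np.ones(Ne + Ni)
    u = b * v
    for t in range(1000):                      # 1 s at 1 ms resolution
        I = np.r_[5 * rng.normal(size=Ne), 2 * rng.normal(size=Ni)]  # input signal
        fired = v >= 30.0                      # spikes this millisecond
        v[fired], u[fired] = c[fired], u[fired] + d[fired]
        I += S[:, fired].sum(axis=1)           # synaptic input from spiking neurons
        v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I)   # two half-steps
        v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I)   # for numerical stability
        u += a * (b * v - u)

    # Avalanches and coincidence events are then defined from population spike
    # counts, and a pairwise STDP rule is applied to the excitatory part of S.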
Results
The analysis of scaling indices related to temporal complexity reveals variations in complexity within neural avalanches and coincidences in simulations that incorporate the STDP plasticity rule, compared to those where it is absent. Furthermore, the extent of the change in temporal complexity depends on the simulation’s input signal. Specifically, strong and continuous signals lead to a substantial change in temporal complexity when the STDP rule is present, whereas intermittent signals exhibit smaller variations in complexity due to STDP.
Discussion
These preliminary results on the complexity behaviors of a spiking neural network with or without the STDP plasticity rule highlight how topological changes in the network configuration, due to time-dependent plasticity rules, lead to changes in temporal complexity behaviors. These results suggest that neural plasticity, defined as changes in the network’s spatial configuration, can influence the temporal complexity levels of a neuronal network, providing insights into the dynamic interplay between structural adaptation and the emergence of temporal complex behaviors in spiking neural networks.



Acknowledgements
This work was supported by the Next-Generation-EU programme under the funding schemes PNRR-PE-AI scheme (M4C2, investment 1.3, line on AI) FAIR “Future Artificial Intelligence Research”, grant id PE00000013, Spoke-8: Pervasive AI.
References
[1] Eugene M. Izhikevich. “Simple model of spiking neurons”. In: IEEE Transactions on Neural Networks 14.6 (2003), pp. 1569–1572.
[2] Natalia Caporale and Yang Dan. “Spike Timing–Dependent Plasticity: A Hebbian Learning Rule”. In: Annual Review of Neuroscience 31 (2008), pp. 25–46.
[3] P. Paradisi and P. Allegrini. “Intermittency-Driven Complexity in Signal Processing”. In: Complexity and Nonlinearity in Cardiovascular Signals. Ed. by Riccardo Barbieri, Enzo Pasquale Scilingo, and Gaetano Valenza. Cham: Springer, 2017, pp. 161–195.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P035: The geometry of primary visual cortex representations is dynamically adapted to task performance
Sunday July 6, 2025 17:20 - 19:20 CEST
P035 The geometry of primary visual cortex representations is dynamically adapted to task performance

Leyla Roksan Caglar*1, Julien Corbo*2, O. Batuhan Erkat2,3, Pierre-Olivier Polack2

1Windreich Department of AI & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2Center for Molecular and Behavioral Neuroscience, Rutgers University—Newark, Newark, NJ, USA
3Graduate Program in Neuroscience, Rutgers University—Newark, Newark, NJ, USA

*Contributed equally; Email: l.r.caglar@gmail.com; julien.corbo@gmail.com
Introduction

Perceptual learning optimizes perception by reshaping sensory representations to enhance discrimination and generalization. Although these mechanisms’ implementation remains elusive, recent advances suggest that the neural geometry of the representations is key, by preparing population activity to be read out at the next processing stage. Our previous work has shown that learning a visual discrimination task reshapes the population feature representations in the primary visual cortex (V1) via suppressive mechanisms, effectively discretizing the representational space, and favoring categorization and generalization [1]. However, it is unclear how these changes impact the discriminability of the representations when being read out and transformed into a decision variable.
Methods
Recent findings under the Manifold Capacity Theory [2] suggest that learning enhances classification capacity by altering the geometric properties of population activity, increasing the linear separability of stimulus representations. To test this, we examined the relationship between V1 feature representation, neural manifold geometry, and behavioral discrimination, hypothesizing that the previously observed discretization would enhance classification capacity and alter manifold geometry as early as V1. Using calcium imaging, we compared V1 activity between trained and naïve mice performing an orientation discrimination Go/NoGo task at varying difficulty levels.
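As an illustration of two of the quantities examined, a sketch of effective dimensionality via the participation ratio and of linear separability via a cross-validated linear readout (this is not the replica-based manifold capacity estimator of [2], which requires the dedicated method):

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def participation_ratio(X):
        """Effective dimensionality of a trials-by-neurons response matrix X."""
        lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
        return lam.sum() ** 2 / (lam ** 2).sum()

    def linear_separability(X, y):
        """5-fold cross-validated accuracy of a linear readout of Go/NoGo labels y."""
        return cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()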
Results
Investigating response dimensionality, we found that it increased as the Go/NoGo stimuli became more similar in both trained and naïve mice. As predicted, dimensionality was lower in trained animals, suggesting that the task's biological implementation relies on reducing representational dimensionality. However, dimensionality alone did not fully explain performance variability. Instead, we found that the linear separability of representations in their embedding space was a stronger predictor of individual behavioral performance. This separability was further evidenced by measuring the neural manifolds’ capacity and geometric properties (manifold dimension and manifold radius), all of which decreased with successful behavioral performance in trained mice but showed no change in naïve mice.
Discussion
Taken together, our results show a clear relationship between behavioral task performance, representational dimensionality, and manifold separability in the early visual cortex of mice. Across all computational measures, we demonstrated an inverse relationship between dimensionality and successful perceptual discrimination, assisted by representational separability. These results confirm that learning alters the geometric properties of early sensory representations as early as V1, optimizing them for linear readout and improving perceptual decision-making.




Acknowledgements
The authors are grateful to the members of the Polack lab for the helpful conversations. This work was funded by the Whitehall Foundation (grant 2015-08-69), the Charles and Johanna Busch Biomedical Grant Program, the National Institutes of Health (National Eye Institute grant R01 EY030860; BRAIN Initiative grant R01 NS120289), and a Fyssen Foundation postdoctoral fellowship.
References
[1] Corbo, J., Erkat, O. B., McClure, J., Khdour, H., & Polack, P.-O. (2025). Discretized representations in V1 predict suboptimal orientation discrimination. Nature Communications, 16(1), 41. https://doi.org/10.1038/s41467-024-55409-1
[2] Chung, S., Lee, D. D., & Sompolinsky, H. (2016). Linear readout of object manifolds. Physical Review E, 93(6), 060301. https://doi.org/10.1103/PhysRevE.93.060301

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P036: Dendrites competing for weight updates facilitate efficient familiarity detection
Sunday July 6, 2025 17:20 - 19:20 CEST
P036 Dendrites competing for weight updates facilitate efficient familiarity detection

Fangxu Cai1, Marcus K. Benna*2


1Department of Physics, UC San Diego, La Jolla, USA


2Department of Neurobiology, UC San Diego, La Jolla, USA


*Email: mbenna@ucsd.edu
Introduction

The dendritic tree of a neuron plays an important role in the nonlinear processing of incoming signals. Previous studies [1-3] have suggested that during learning, selecting only a few dendrites to update their weights can enhance the memory capacity of a neuron by reducing interference between memories. Building on this, we examine two strategies for selecting dendrites: one with and one without interaction between dendrites. The interaction between dendrites serves to reduce variability in the number of dendrites updated, potentially arising from competition and the allocation of resources necessary for long-term synaptic plasticity.

Methods
We study a model with parallel dendrites, each performing nonlinear processing and connected in parallel to the soma, which sums their contributions [4]. The selection of dendrites to update is based on their activation level — the overlap between their weight and input vectors. Under the non-interacting rule, a dendrite is selected if its activation exceeds a specific threshold; under the interacting rule, only the top n dendrites with the highest activations are chosen. We compare these two learning rules using an online familiarity detection task [1]. In this task, input patterns are streamed sequentially to the neuron, which is required to produce a high response to previously presented inputs while maintaining a low response to unfamiliar ones.
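A compact sketch of the two selection rules (our notation; the threshold theta and winner count n are free parameters of the comparison):

    import numpy as np

    def select_dendrites(W, x, rule, theta=0.5, n=3):
        """Return indices of dendrites eligible for a weight update.

        W: (dendrites, synapses) weight matrix; x: (dendrites, synapses) inputs;
        a dendrite's activation is the overlap of its weight and input vectors.
        """
        activation = np.einsum('ds,ds->d', W, x)
        if rule == 'non-interacting':          # independent threshold crossing
            return np.flatnonzero(activation > theta)
        if rule == 'interacting':              # n-winners-take-all competition
            return np.argsort(activation)[-n:]
        raise ValueError(rule)

    # Hebbian-style update restricted to the selected dendrites, e.g.:
    # idx = select_dendrites(W, x, 'interacting', n=1); W[idx] += eta * x[idx]

The interacting rule fixes the number of updated dendrites exactly, which is the variance-limiting property the Results attribute the capacity gain to.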

Results
We observe that the interacting learning rule achieves a significantly higher memory capacity than the non-interacting rule by 1) limiting the variance of the memory response, and 2) decorrelating synaptic weights when input signals are correlated across dendrites. With the interacting rule, the best achievable memory capacity increases as n decreases, reaching its maximum at n = 1. In contrast, this is not the case for the non-interacting rule, where the capacity declines when too few dendrites are updated. We further find that even when inputs are maximally correlated (all dendrites receive identical input), the interacting rule maintains a capacity comparable to the uncorrelated input scenario.
Discussion
Our findings show that an n-winners-take-all type interaction among dendrites to determine their eligibility for long-term plasticity can better leverage dendritic nonlinearities for optimizing memory capacity, especially when inputs are correlated among dendrites. While biological neurons may not strictly select a fixed number of dendrites to store each input, our model suggests that reducing the variability in the number of updated dendrites through competition between them can still improve the capacity. Furthermore, our results are robust to variations in model specifics, such as the choice of dendritic activation functions and the presence of input noise, underscoring the generality of the proposed mechanism.




Acknowledgements
M.K.B. was supported by NINDS grant R01NS125298 and the Kavli Institute for Brain and Mind.

References
1. https://doi.org/10.1371/journal.pcbi.1006892
2. https://doi.org/10.1523/JNEUROSCI.5684-10.2011
3. https://doi.org/10.1038/nature14251
4. https://doi.org/10.1109/JPROC.2014.2312671

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P037: Maximum-entropy-based metrics for quantifying critical dynamics in spiking neuronal data.
Sunday July 6, 2025 17:20 - 19:20 CEST
P037 Maximum-entropy-based metrics for quantifying critical dynamics in spiking neuronal data.

Pedro V. Carelli*1, Felipe Serafim1, Mauro Copelli1

1Departamento de Física, Universidade Federal de Pernambuco, Recife, Brazil

*Email: pedro.carelli@ufpe.br


Introduction
An important working hypothesis to investigate brain activity is whether it operates in a critical regime [1,2]. Recently, maximum-entropy phenomenological models have emerged as an alternative way of identifying critical behavior in neuronal data sets [3]. In the present work, we investigate the signatures of criticality from a firing-rate-based maximum-entropy approach on data sets generated by computational models, and we compare them to experimental results.
Methods
We simulate critical and noncritical spiking neuronal models [4] and generate spiking time series. Then, following Mora et al. [3], a Boltzmann-like distribution is defined. We take the population firing rate K_t as the observable and constrain its joint distribution at two times separated by a lag u, P_u(K_t, K_{t+u}), obtaining the energy function. We then solve an inverse problem to fit the model parameters to the data statistics. Once the model is adjusted to describe the data, we can perform statistical physics analyses, and the signatures of criticality are obtained from the divergence of the model's generalized specific heat.
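A schematic of the diagnostic, with an empirical histogram standing in for the fitted maximum-entropy model: define energies E = -log P_u(K_t, K_{t+u}), reweight to an inverse temperature beta, and look for a peak in the generalized specific heat C(beta) = beta^2 Var_beta(E) near the operating point beta = 1 (our simplification; the full method solves the inverse problem of [3]).

    import numpy as np

    def specific_heat(k, u, betas, bins=30):
        """Generalized specific heat from joint statistics of the population
        rate k at times t and t+u (empirical stand-in for the fitted model)."""
        pairs = np.c_[k[:-u], k[u:]]
        P, _, _ = np.histogram2d(pairs[:, 0], pairs[:, 1], bins=bins, density=True)
        P = P[P > 0] / P[P > 0].sum()          # joint distribution, occupied bins
        E = -np.log(P)                         # Boltzmann energies at beta = 1
        C = []
        for beta in betas:
            w = np.exp(-(beta - 1.0) * E) * P  # reweight to temperature 1/beta
            w /= w.sum()
            mE, mE2 = (w * E).sum(), (w * E ** 2).sum()
            C.append(beta ** 2 * (mE2 - mE ** 2))
        return np.array(C)                     # peak near beta = 1 signals criticality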
Results
We found that the maximum entropy approach consistently identifies critical behavior around the phase transition in models and rules out criticality in models without a phase transition. The maximum-entropy-model results are compatible with results for cortical data from urethane-anesthetized rats [4] and with human MEG recordings.
Discussion

We detect signatures of criticality in different brain data sets by employing a maximum entropy approach based on neuronal population firing rates. This method diverges from conventional techniques that depend on estimating critical exponents through power-law distributions of neuronal avalanche sizes and durations. It proves especially useful in scenarios where traditional markers of criticality derived from neuronal avalanches are either methodologically unreliable or yield ambiguous results. Our results provide further support for criticality in the brain.



Acknowledgements
We thankfully acknowledge the funding from CNPq, FACEPE, CAPES and FINEP.


References
1. Beggs, J. M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35), 11167-11177.
2. Fontenele, A. J., et al. (2019). Criticality between cortical states. Physical Review Letters, 122, 208101.
3. Mora, T., Deny, S., & Marre, O. (2015). Dynamical criticality in the collective activity of a population of retinal neurons. Physical Review Letters, 114(7), 078105.
4. Serafim, F., et al. (2024). Maximum-entropy-based metrics for quantifying critical dynamics in spiking neuron data. Physical Review E, 110, 024401.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P038: Mimicking Ripple- and Spindle-Like Dynamics in an Amplitude and Velocity-Feedback Oscillator
Sunday July 6, 2025 17:20 - 19:20 CEST
P038 Mimicking Ripple- and Spindle-Like Dynamics in an Amplitude and Velocity-Feedback Oscillator

Pedro Carvalho*1, Wolf Singer1, Felix Effenberger1


1Ernst Strüngmann Institute, Singer Lab, Frankfurt am Main/Hessen, Germany
*Email: prfdecarvalho@gmail.com


Introduction: Ripples and spindles play a fundamental role in learning, memory, and sleep [1]. Yet, the principles of their generation and their functional relevance remain to be fully understood. Here, we show how damped harmonic oscillators (DHOs) subject to feedback can reproduce such characteristic dynamics at the population level (Fig. 1B,C). In our model, one DHO represents the aggregate activity of a recurrently coupled E-I population of spiking neurons [3] and can capture different characteristics of the underlying E-I circuit (e.g., recurrent excitation and inhibition) by feedback connections [2]. Recurrent networks of such nodes were previously shown to reproduce many physiological phenomena [2].



Methods: Using an analytically derived bifurcation diagram (see [2]), we investigate the dynamics of a DHO with feedback at different points in the 2D parameter space (b, W) of the harmonic input amplitude b and the velocity feedback strength W (Fig. 1A). We determine dynamics for different parameter paths (colored lines in Fig. 1A) by performing numerical simulations of the DHO dynamics subject to a harmonic drive. We observe nodal dynamics similar to ripples and spindles (Fig. 1B,C) [1].
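A toy integration illustrating the idea (generic feedback form and purely illustrative parameters; the analytical bifurcation structure is derived in [2]):

    import numpy as np

    def dho_feedback(W_path, b_path, f_drive=10.0, f0=15.0, gamma=20.0,
                     dt=1e-4, T=2.0):
        """Damped harmonic oscillator with velocity feedback strength W and a
        harmonic drive of amplitude b, both varied along a path in (b, W) space."""
        n = int(T / dt)
        t = np.arange(n) * dt
        W = np.interp(t, [0, T], W_path)       # linear parameter path endpoints
        b = np.interp(t, [0, T], b_path)
        x, v = 0.0, 0.0
        out = np.empty(n)
        for i in range(n):
            drive = b[i] * np.sin(2 * np.pi * f_drive * t[i])
            acc = -(2 * np.pi * f0) ** 2 * x - gamma * v + W[i] * v + drive
            x, v = x + dt * v, v + dt * acc    # forward Euler step
            out[i] = x
        return t, out

    # Ramping W across the net-damping boundary while modulating b produces
    # transient growing-and-decaying epochs, the ripple/spindle-like events.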

Results: We show that for a DHO with velocity feedback, the interplay between input frequency (data not shown), the oscillator’s natural frequency, and the trajectory of input parameters in the (b, W) parameter subspace (Fig. 1A) gives rise to dynamics resembling spindles and ripples (Fig. 1B,C). Notably, for each class of these characteristic dynamics, we can identify a specific parameter path in the (b, W) parameter subspace resulting in their generation (Fig. 1A, colored lines). These dynamics are due to a dynamic bifurcation, in which the system transitions between subcritical and supercritical regimes separated by a Hopf bifurcation. In this configuration, ripple- and spindle-like dynamics emerge as a transient phenomenon.
Discussion: By studying the dynamics of DHOs subject to velocity feedback, we show that these oscillators can reproduce ripple- and spindle-like dynamics [1] in an intriguingly simple phenomenological model of the aggregate activity of E-I populations [2,3]. These complex dynamics are shown to result from input-driven dynamic bifurcations of the underlying DHO system. This provides a reductionist model of ripple and spindle initiation in which simple mechanisms produce complex dynamics (see also [3]). We hope that this model will allow for a better understanding of the mechanisms of spindle and ripple initiation, as well as for assessing their role in information processing and consolidation (compare [2]), a topic left for a future study.





Figure 1. Ripples and spindles produced by a velocity feedback DHO. A) Bifurcation diagram in the input amplitude (b) and velocity feedback (W) parameter subspace. Blue: stable focus; orange: limit cycles; green and red lines: parameter paths producing spindles and ripples. B) Reproduction of data from [2]. C) Simulation of ripple- and spindle-like dynamics. Colors match parameter paths in (A).
Acknowledgements
-
References
[1] Staresina, B.P. et al. (2023). How coupled slow oscillations, spindles and ripples coordinate neuronal processing and communication during human sleep. Nat. Neurosci.

[2] Spyropoulos, G. et al. (2022). Spontaneous variability in gamma dynamics described by a damped harmonic oscillator driven by noise. Nat Commun, 13, 2019.

[3] Effenberger, F., Carvalho, P., Dubinin, I., & Singer, W. (2025). The functional role of oscillatory dynamics in neocortical circuits: A computational perspective. Proc. Natl. Acad. Sci.



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P039: In Silico Safety and Performance Assessment of Vagus Nerve Stimulation with Metamodeling-Based Uncertainty/Variability Quantification
Sunday July 6, 2025 17:20 - 19:20 CEST
P039 In Silico Safety and Performance Assessment of Vagus Nerve Stimulation with Metamodeling-Based Uncertainty/Variability Quantification

Antonino M. Cassara'*1, Javier Garcia Ordonez1, Werner Van Geit1, Esra Neufeld1

1Foundation for Research on Information Technologies in Society (IT’IS), Zurich, Switzerland

*Email: cassara@itis.swiss

Introduction

Safety and efficacy assessments of medical devices are key to regulatory submissions. We established an in silico pipeline for neural interface assessment and demonstrated it for a vagus nerve stimulation (VNS) cuff electrode. It combines histology-based electromagnetic, electrophysiology, and thermal simulations, as well as tissue damage predictors, with high-throughput screening of data from the NIH SPARC program [1], and systematic uncertainty quantification to assess safety and shed light on primary concerns, dominant factors, variability, and model limitations. This study serves to guide the development and application of regulatory-grade in silico methodologies for safer, more effective medical technologies.



Methods
Evaluated quantities-of-interest (QoIs) included iso-percentiles of dosimetric exposure quantities, current intensities and densities, charge injection, off-target stimulation predictors, tissue heating, as well as tissue damage predictors – all as a function of varying degrees of fiber recruitment. The pipeline is implemented on the o2S2PARC platform for open and FAIR computational modeling [2] using modeling functionalities from Sim4Life [3]. Variability was quantified through iteration over histological samples from different subjects, multiple sources of numerical uncertainty were quantified, and model parameter uncertainties (e.g., tissue properties, fiber statistics) were propagated using advanced surrogate modeling methodologies.
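The general surrogate-plus-Monte-Carlo pattern can be sketched as follows (a generic Gaussian-process stand-in, not the o2S2PARC metamodeling machinery; expensive_simulation is a hypothetical placeholder for one EM/thermal pipeline run):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def expensive_simulation(theta):
        """Placeholder for one full pipeline run returning a scalar QoI."""
        return np.sin(theta[0]) + theta[1] ** 2

    rng = np.random.default_rng(0)

    # Few expensive simulator runs over the parameter box (e.g. tissue properties).
    theta_train = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(40, 2))
    qoi_train = np.array([expensive_simulation(t) for t in theta_train])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(theta_train, qoi_train)             # cheap surrogate of the pipeline

    # Propagate parameter uncertainty by Monte Carlo on the surrogate.
    theta_mc = rng.normal([0.5, 1.2], [0.05, 0.2], size=(100_000, 2))
    qoi_mc, qoi_std = gp.predict(theta_mc, return_std=True)
    print(np.percentile(qoi_mc, [5, 50, 95]))  # propagated QoI percentiles

The interpolation uncertainty returned by the surrogate (qoi_std) can be tracked alongside the propagated variability, as in Figure 1(d).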

Results

E-field thresholds were compared to safety guidelines [4] and temperature increases to FDA limits [5], and the commonly applied (though questionably relevant) Shannon criterion [6] was evaluated. The surrogate-model-based uncertainty propagation shed light on complex correlations between model parameters and QoIs, fully accounting for non-linear dependencies and multi-factor interactions, and revealed novel mechanistic insights.
Discussion
The fully automated pipeline enables quantitative safety assessment for a wide variety of neural interfaces for bioelectronic medicine. It supports electrode design optimization towards improved safety and efficacy, and the identification of safe therapeutic windows. The systematic uncertainty analysis using advanced surrogate-model-based techniques illustrates the value of the o2S2PARC intelligent metamodeling framework and scalable cloud resources for exploring large parameter spaces. In conclusion, carefully executed, regulatory-grade in silico safety assessment is a powerful tool for accelerating medical device innovation.

Figure 1. (a) Safety assessment pipeline on o2S2PARC; (b) histology-based nerve model generation and population with electrophysiological fiber models; (c) visualization of selected dosimetric and thermal distributions; (d) cross sections through the surrogate models with associated interpolation uncertainty; uncertainty propagation of EM and thermal tissue properties through QoI surrogate models.
Acknowledgements
This research is supported by the NIH Common Fund’s SPARC program under award 3OT3OD025348-01S8.
References
[1] NIH SPARC program, USA. https://commonfund.nih.gov/sparc
[2] Neufeld E. et al. 2020. SPARC’s Open Online Simulation Platform: o2S2PARC. FASEB J 34(S1).
[3] Sim4Life, ZMT Zurich MedTech AG, Zurich, Switzerland.
[4] ICNIRP. 2010. Guidelines for exposure to time-varying EM fields (1 Hz–100 kHz). Health Phys 99(6):818-36.
[5] FDA guidance on thermal effects: https://www.fda.gov.
[6] Shannon RV. 1992. Safe levels for electrical stimulation. IEEE Trans Biomed Eng 39(4):424-6.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P040: Modeling Transcranial Magnetic Stimulation: From Exposure in Personalized Head Models to Single Cell and Brain Network Dynamics Responses
Sunday July 6, 2025 17:20 - 19:20 CEST
P040 Modeling Transcranial Magnetic Stimulation: From Exposure in Personalized Head Models to Single Cell and Brain Network Dynamics Responses

Antonino M. Cassara*1, Serena Santanche2, Micol Colella2, Chiara Billi1, Micaela Liberti2, Esra Neufeld1

1Foundation for Research on Information Technologies in Society (IT'IS), Zurich, Switzerland
2Universita La Sapienza, Rome, Italy

*Email: cassara@itis.swiss
Introduction

To facilitate the design and interpretation of human studies involving Transcranial Magnetic Stimulation (TMS), we developed a cloud-based and web-accessible computational framework that enables the execution of subject-specific virtual TMS experiments towards assessing and optimizing safety and efficacy. It also facilitates the formulation of testable hypotheses regarding stimulation mechanisms across various temporal and spatial scales, including mechanisms by which induced electric fields (E-fields) interact with individual neurons and high-level brain network dynamics.

Methods
The framework extends a previously established pipeline [1] for non-invasive brain stimulation modeling, which combined image-based generation of detailed head models (personalized anatomy and tissue properties), personalized electromagnetic simulations (exposure and lead-fields for virtual EEG), image-based brain network model construction (mean field), and dynamic functional connectivity assessment. The current work extends this pipeline with a) TMS coil modeling and positioning, b) neuron polarization and stimulation probability mapping based on statistical sub- and supra-threshold responses of morphologically-detailed cortical neuron populations, and c) derived coupling terms for the assessment of TMS impact on network dynamics.
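A minimal sketch of step (b), mapping local E-field magnitude to a stimulation probability under an assumed threshold distribution across a morphologically detailed neuron population (the log-normal choice and all parameters are illustrative, not the fitted mapping functions):

    import numpy as np
    from scipy.stats import lognorm

    def stimulation_probability(E, median_threshold, sigma=0.3):
        """P(spike) per voxel given local E-field magnitude E (V/m), assuming
        log-normally distributed thresholds across the neuron population."""
        return lognorm.cdf(E, s=sigma, scale=median_threshold)

    E_field = np.array([40.0, 80.0, 160.0])      # sample voxel magnitudes
    print(stimulation_probability(E_field, median_threshold=100.0))

Population- and orientation-specific maps follow by fitting a separate (median_threshold, sigma) pair per population, orientation bin, and pulse shape.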

Results
The pipeline was employed to investigate TMS stimulation mechanisms at the single-cell and the network dynamics level. Key findings include: the dielectric contrast between gray and white matter is insufficient to directly induce spiking; mapping functions for population- and orientation-dependent threshold E-fields probabilities have been established for various pulse shapes (Figure 1), offering insights into the stimulability of different neuronal populations; electrophysiology-based activation maps have been generated for simplified models of commercial TMS coils under relevant stimulation conditions. Model validation is ongoing.

Discussion
Our pipeline extends prior modeling work [1-3] to provide a customizable framework for investigating TMS mechanisms and designing virtual clinical trials. Probability maps link dosimetric exposure predictions with electrophysiological responses that in turn modulate brain network dynamics. The pipeline serves to shed light on interaction mechanisms and to help design superior stimulation paradigms, tuned towards optimizing the electrophysiological response, with improved selectivity, efficacy, and safety.




Figure 1. (a) Illustration of the segmented head model, with 40 tissues; (b) example of user-defined TMS coils; (c) final model, featuring the optimally placed TMS coil; (d) neuronal population-, orientation- and pulse-specific threshold E-fields; (e) spiking threshold maps for several neuronal populations.
Acknowledgements
This research is supported by the NIH Common Fund’s SPARC program under award 3OT3OD025348-01S8.
References
[1] Karimi, F., et al. (2025). Precision non-invasive brain stimulation: an in silico pipeline for personalized control of brain dynamics. J. Neural Eng., 10.1088/1741-2552/adb88f.
[2] Aberra, A.S., et al. (2020). Simulation of TMS in head model with morphologically-realistic cortical neurons. Brain Stimul., 13(1):175-189.
[3] Jansen, B.H., Rit, V.G. (1995). EEG and VEP generation in a model of coupled cortical columns. Biol. Cybern., 73:357–366.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P041: Balanced inhibition allows for robust learning of input-output associations in feedforward networks with Hebbian plasticity
Sunday July 6, 2025 17:20 - 19:20 CEST
P041 Balanced inhibition allows for robust learning of input-output associations in feedforward networks with Hebbian plasticity

Gloria Cecchini*1, Alex Roxin1

1Centre de Recerca Matemàtica, Barcelona, Spain

*Email: gcecchini@crm.cat

Introduction

In neural networks, post-synaptic activity depends on multiple pre-synaptic inputs. Hebbian plasticity allows sensory inputs to be associated with internal states, as seen in the CA1 region of the hippocampus. By modifying synaptic weights, Hebbian rules enable sensory inputs to elicit correlated outputs, allowing for efficient memory storage. When input and output patterns are uncorrelated, numerous associations can be encoded. However, if output patterns weakly correlate with input patterns, Hebbian learning reinforces shared synapses across patterns, leading to reduced network flexibility and impaired associative learning.


Methods
We analyzed the effects of Hebbian plasticity in a feedforward network model where input-output correlations emerge due to intrinsic connectivity. Using numerical simulations, we examined how weak correlations between inputs and outputs shape synaptic weight dynamics over time. We then introduced a balanced inhibition mechanism, inspired by in-vivo cortical circuits [1], to assess its impact on synaptic weight distribution and the network’s ability to store diverse associations. Network performance was evaluated by measuring output pattern variability.
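A minimal sketch of the mechanism (our formulation; the inhibition here simply subtracts the shared component of the output population before the Hebbian update, inspired by the balanced regime of [1]):

    import numpy as np

    rng = np.random.default_rng(2)
    n_in, n_out, n_pat = 200, 100, 50
    eta, alpha = 0.01, 1.0                      # learning rate, inhibition strength

    X = rng.standard_normal((n_pat, n_in))      # input patterns
    W = 0.01 * rng.standard_normal((n_out, n_in))

    for x in X:
        y = W @ x                               # feedforward output
        y_bal = y - alpha * y.mean()            # balanced inhibition removes the
                                                # component shared across outputs
        W += eta * np.outer(y_bal, x)           # Hebbian update on balanced output

    # With alpha = 0 (no inhibition), synapses shared across correlated patterns
    # are repeatedly reinforced and the outputs collapse onto a single mode;
    # with alpha = 1 the outputs remain decorrelated across patterns.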


Results
Our results show that when weak correlations exist between input and output patterns, Hebbian learning selectively strengthens synapses shared across patterns. This reinforcement leads to a rigid network state, where outputs become highly correlated over time. Consequently, the network loses the ability to store multiple distinct associations, significantly reducing its learning capacity. However, introducing balanced inhibition prevents the over-strengthening of shared synapses, allowing output patterns to remain distinct and ensuring a more flexible associative learning process.


Discussion
These findings highlight a fundamental limitation of Hebbian learning in feedforward networks when input-output correlations exist. Without a regulatory mechanism, the network structure becomes overly rigid, preventing effective storage of new associations. Balanced inhibition emerges as a simple yet effective strategy to mitigate this issue, preserving learning flexibility by counteracting correlation-driven synaptic reinforcement. Our study underscores the critical role of inhibition in biological neural circuits, offering insights into how the brain maintains efficient and adaptive information processing.




Acknowledgements
This project has received funding from Proyectos De Generación De Conocimiento 2021 (PID2021-124702OB-I00). This work is supported by the Spanish State Research Agency, through the Severo Ochoa and Maria de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M). We thank CERCA Programme/Generalitat de Catalunya for institutional support.
References
1. Haider, B., Duque, A., Hasenstaub, A. R., & McCormick, D. A. (2006). Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. Journal of Neuroscience, 26(17), 4535-4545. https://doi.org/10.1523/JNEUROSCI.5297-05.2006
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P042: Feedforward and Feedback Inhibition Flexibly Modulates Theta-Gamma Cross-Frequency Interactions in Neural Circuits
Sunday July 6, 2025 17:20 - 19:20 CEST
P042 Feedforward and Feedback Inhibition Flexibly Modulates Theta-Gamma Cross-Frequency Interactions in Neural Circuits

Dimitrios Chalkiadakis*1,2, Jaime Sánchez-Claros1, Víctor J López-Madrona3, Santiago Canals2, Claudio R. Mirasso1

1Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), Consejo Superior de Investigaciones Científicas (CSIC) - Universitat de les Illes Balears (UIB), Palma de Mallorca, Spain
2Instituto de Neurociencias, Consejo Superior de Investigaciones Científicas (CSIC) - Universidad Miguel Hernández (UMH), Sant Joan d’Alacant, Spain
3Institut de Neurosciences des Systèmes, Aix Marseille Univ -Inserm,Marseille, France

*Email: dimitrios@ifisc.uib-csic.es
Introduction
Brain rhythms are essential for coordinating neuronal activity. Cross-frequency coupling (CFC), particularly between theta (~8 Hz) and gamma (~30 Hz) rhythms, is critical for memory formation [1]. Traditionally, CFC was attributed to slow oscillations modulating faster activity at specific phases. However, metrics such as Cross-Frequency Directionality (CFD) have revealed bidirectional interactions, with both slow-to-fast and fast-to-slow influences [1, 2]. Here, we introduce a computational circuit model that flexibly exhibits both directionality interactions based on the balance of inhibitory feedforward and feedback motifs. Our framework is supported by electrophysiology measurements in the rat’s hippocampus.


Methods
We analyzed two motifs based on variations of the (Pyramidal) Interneuron Network Gamma (PING/ING) models, both of which generate gamma rhythms through interactions between pyramidal cells (PCs) and inhibitory basket cells (BCs). An external theta drive modulated the network’s activity, inducing cross-frequency interactions (see Fig. 1). Somatic transmembrane currents were computed for cross-frequency dynamics analysis. Our model was validated using the experimental dataset presented in [1], which includes a detailed analysis of pathway-specific field potentials reflecting the activity of Entorhinal Cortex layer III (ECIII) projections to the hippocampal CA1 area in rats navigating both familiar and novel environments.
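A rate-level caricature of the theta-driven motif (illustrative parameters, not the spiking PING/ING model itself); routing the theta drive to the interneurons instead of the pyramidal population gives the feedforward, theta-ING-like configuration:

    import numpy as np

    def theta_ping(w_ee=10, w_ei=12, w_ie=10, w_ii=3, theta_amp=1.5,
                   tau_e=5e-3, tau_i=8e-3, dt=1e-4, T=2.0):
        """Theta-modulated E-I rate oscillator: an ~8 Hz drive onto the E
        population (theta-PING-like routing) modulates the fast E-I cycle."""
        f = lambda x: 1.0 / (1.0 + np.exp(-x))     # population gain function
        n = int(T / dt)
        t = np.arange(n) * dt
        rE, rI = 0.1, 0.1
        trace = np.empty(n)
        for k in range(n):
            drive = theta_amp * (1 + np.sin(2 * np.pi * 8 * t[k]))
            rE += dt / tau_e * (-rE + f(w_ee * rE - w_ei * rI + drive - 3))
            rI += dt / tau_i * (-rI + f(w_ie * rE - w_ii * rI - 4))
            trace[k] = rE
        return t, trace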


Results
Our analysis revealed that in θ-ING motifs, feedforward recruitment of BCs drives gamma-to-theta directionality (CFD<0), while in θ-PING motifs, feedback inhibition favors theta-to-gamma directionality (CFD>0, Fig. 1b-iii vs 1c-iii). In combined motifs, varying synaptic strengths within realistic ranges, we found smooth transitions between directionalities (Fig. 1d). Experimental data validated our framework, as behavioral conditions modulated CFD and gamma frequency in line with our model predictions (Fig. 1e). Finally, by evaluating each motif’s capacity to integrate distinct inputs impinging at different sites of the PC dendritic tree, we report their differential role in prioritizing transmission across different information channels.


Discussion
Our framework suggests that feedforward/feedback inhibitory balance regulates the directionality of theta-gamma interactions. Notably, θ-ING/θ-PING modes exist along a continuum rather than as distinct alternatives. In our model, CFD analysis identified transitions between functional modes, aligning with experimental observations across different behavioral states.
We further showed that a feedback-shifted balance promotes strong afferent-driven cross-frequency rhythmicity, while a feedforward-shifted motif broadens encoding windows, favoring parallel pathway transmission. Thus, dynamic CFD measures may reflect predominant inhibitory motifs and flexible prioritization of functional connectivity pathways.



Figure 1. (a) Motifs’ connections, with dashed lines differentiating θ-ING (purple) from θ-PING (blue). (b, c) Cross-frequency interactions in θ-ING and θ-PING: (i) transmembrane currents (gray) with PC/BC spikes in blue/orange; (ii) cross-frequency coupling; (iii) CFD. (d) Mixed θ-ING/θ-PING motifs show CFD changes inversely to peak γ. (e) Experiments confirm the CFD–γ peak relationship of (d).
Acknowledgements
D. C., J. C. and C. M. acknowledge support from the Spanish Ministerio de Ciencia, Innovación y Universidades through projects PID2021-128158NB-C22 and María de Maeztu CEX2021-001164-M. D. C. and S. C. acknowledge support from the Spanish Ministerio de Ciencia, Innovación y Universidades through projects PID2021-128158NB-C21 and Severo Ochoa CEX2021-001165-S.
References
[1] López-Madrona, V. J., Pérez-Montoyo, E., Álvarez-Salvado, E., Moratal, D., Herreras, O., Pereda, E., … Canals, S. (2020). Different Theta Frameworks Coexist in the Rat Hippocampus and Are Coordinated during Memory-Guided and Novelty Tasks. eLife, 9, e57313. doi:10.7554/eLife.57313
[2] Jiang, H., Bahramisharif, A., Van Gerven, M. A. J., & Jensen, O. (2015). Measuring Directionality between Neuronal Oscillations of Different Frequencies. NeuroImage, 118, 359-367. doi:10.1016/j.neuroimage.2015.05.044
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P043: LEC Ensemble Analysis Reveals Context-Guided Odor Encoding via Adaptive Spatio-Temporal Representations
Sunday July 6, 2025 17:20 - 19:20 CEST
P043 LEC Ensemble Analysis Reveals Context-Guided Odor Encoding via Adaptive Spatio-Temporal Representations

Yuxi Chen*1, Noga Mudrik*2, James J. Knierim1, Adam S. Charles2

1Department of Neuroscience, Mind/Brain Institute, Johns Hopkins University, Baltimore, USA
2Department of Biomedical Engineering, Kavli NDI, Center of Imaging Science, Johns Hopkins University, Baltimore, USA
*Email: ychen315@jhu.edu; nmudrik1@jhu.edu


Introduction. The lateral entorhinal cortex (LEC) has rich associative connections in the rat cortex, linking the hippocampus and neocortex. It encodes spatial context and develops odor selectivity [2,3]. A key question is how LEC enables context-odor integration. Using Neuropixels, we recorded LEC activity while rats performed an odor-context task that included identifying an odor and selecting the corresponding reward port, with the reward-port location switching between two contexts defined by the box the rat occupied (Fig. A). We further developed an ensemble-identification method to reveal how hidden LEC ensembles support context-odor integration via adaptive representations pre- vs. post-training.

Methods. We recorded rats’ LEC both before and after learning odor-context associations (“day 1” vs. “day N”). Each day had 6 sessions, with the rat alternately placed in one of two boxes that provide context through unique cues (Fig. A). Spike sorting was done with Kilosort4, and firing rate (FR) was estimated via Gaussian convolution, producing a multi-label tensor dataset (Fig. B). We developed a graph-driven ensemble method that extends [4] to multi-class data, identifies state-dependent ensemble composition (A) adjustments (Fig. D), and captures per-trial temporal variability in ensemble traces (φ). We tested the ensembles’ encoding of odor vs. box via 5-fold cross-validation logistic regression, as sketched below.

Results. Neurons vary by odor, box, or both (Fig. C), suggesting ensembles. We found ensembles with session adjustments (Fig. H) that temporally encode box (Fig. E). On day 1, ensemble traces differentiate boxes over time, while on day N, boxes are more separated at trial start. Session-trace averages under a fixed box (Fig. F) show more variability on day 1 than on day N, which featured consistent activations across same-box sessions. Odor encoding is less apparent and, on day N, is primarily revealed under a fixed box (Fig. G). Odor prediction accuracy improved when conditioned on box (Fig. I), with a larger improvement on day N. Odor feature importance shows that conditioning on the box shifts encoding timing, with later time points more important under a fixed box.

Discussion. We identified session-adjusting ensembles that capture box encoding, with earlier encoding on day N. On day 1, rats show distinct representations for same-box sessions, suggesting session-by-session encoding, while on day N, consistent same-box session activations suggest box recognition. We hypothesize that post-training, rats first identify the box, which opens an 'odor-integration gate'. This aligns with improved odor-encoding accuracy and the shift in odor timing importance when conditioned on the box compared to marginalized. Our findings suggest that over training, the LEC develops a hierarchical mechanism for context-odor integration that starts with early context identification, followed by box-conditioned odor integration.
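A sketch of the decoding step, assuming a trials-by-features matrix X built from the per-trial ensemble traces, odor labels y, and an optional boolean mask for conditioning on one box (our generic formulation of the analysis):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def odor_accuracy(X, y, mask=None):
        """5-fold CV odor decoding; pass a boolean mask to condition on one box."""
        if mask is not None:
            X, y = X[mask], y[mask]           # restrict trials to a fixed context
        clf = LogisticRegression(max_iter=2000)
        return cross_val_score(clf, X, y, cv=5).mean()

    # Conditioned accuracy averages odor_accuracy over the per-box masks;
    # marginal accuracy uses mask=None.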



Figure 1. A: Experiment. B: Multi-class data across box-odor-sessions. C: Single neuron traces. D: Ensembles-adjusting approach leveraging [4]. E–G: Ensemble traces by box (E), session (F), and odor (G, left: marginalizing, right: conditioning on box). H: Two day-N ensembles adjusting by session. I: Odor prediction confusion matrices ± box conditioning. J: Ensemble/time point importance for odor encoding.
Acknowledgements

Y.C. and J.J.K. were funded by NIA grant P01 AG009973. N.M. was funded by the Kavli Foundation Neurodiscovery Award and as a Kavli Fellow of Johns Hopkins Kavli NDI. A.S.C. was supported by NSF CAREER Award 2340338 and a Johns Hopkins Bridge Grant.
References

[1] Bota, M., Sporns, O., & Swanson, L. W. (2015). Architecture of the cerebral cortical association connectome underlying cognition. PNAS.

[2] Igarashi, K. M., et al. (2014). Coordination of entorhinal–hippocampal ensemble activity during associative learning. Nature.

[3] Tsao, A., Sugar, J., Lu, L., Wang, C., Knierim, J. J., Moser, M. B., & Moser, E. I. (2018). Integrating time from experience in the lateral entorhinal cortex. Nature.

[4] Mudrik, N., Mishne, G., & Charles, A. S. (2024). SiBBlInGS: Similarity-driven Building-Block Inference using Graphs across States. ICML.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P044: Uncertainty-Calibrated Network Initialization via Pretraining with Random Noise
Sunday July 6, 2025 17:20 - 19:20 CEST
P044 Uncertainty-Calibrated Network Initialization via Pretraining with Random Noise

Jeonghwan Cheon*1, Se-Bum Paik1

1Department of Brain and Cognitive Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

*Email: jeonghwan518@kaist.ac.kr


Uncertainty calibration — the ability to estimate predictive confidence that reflects the actual accuracy — is essential to real-world decision-making. Human cognition involves metacognitive processes, allowing us to assess uncertainty and distinguish between what we know and what we do not know. In contrast, current machine learning models often struggle to properly calibrate their confidence, even though they have achieved high accuracy in various task domains [1]. This miscalibration presents a significant challenge in real-world applications, such as autonomous driving or medical diagnosis, where incorrect decisions can have critical consequences. Although post-processing techniques have been used to address calibration issues, they require additional computational steps to obtain reliable confidence estimates.

In this study, we show that random initialization — a common practice in deep learning — is a fundamental cause of miscalibration. We found that randomly initialized, untrained networks exhibit excessively high confidence despite lacking meaningful knowledge. This miscalibration at the initial stage prevents the alignment of confidence with actual accuracy as the network learns from data. To address this issue, we draw inspiration from the developmental brain, which is initialized through spontaneous neural activity even before receiving sensory inputs [2]. By mimicking this process, we pretrain neural networks with random noise [3] and demonstrate that this simple approach resolves the overconfidence issue, bringing initial confidence levels to near chance. This pre-calibration through random noise pretraining enables optimal calibration by aligning confidence levels with actual accuracy during subsequent data training.

As a result, networks pretrained with random noise achieve significantly lower calibration errors compared to those trained solely with data. We also confirmed that this method generalizes well across different conditions, regardless of dataset size or network complexity. Notably, these pre-calibrated networks consistently identify “unknown data” by showing low confidence for outlier inputs. Our findings present a key solution for calibrating uncertainty in both in-distribution and out-of-distribution scenarios without the need for post-processing. This provides a fundamental approach to addressing miscalibration issues in artificial intelligence and may offer insights into the biological development of metacognition.
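A self-contained numpy sketch of the core observation (architecture, learning rate, and step counts are illustrative, not the paper's setup): a randomly initialized softmax network is overconfident on pure noise, and pretraining it on random inputs with random labels drives its confidence toward chance (1/k).

    import numpy as np

    rng = np.random.default_rng(0)
    d, h, k = 100, 64, 10                       # input dim, hidden units, classes
    W1 = rng.standard_normal((d, h)) * 0.5      # random init yields large logits
    W2 = rng.standard_normal((h, k)) * 0.5

    def forward(X):
        H = np.maximum(X @ W1, 0)               # ReLU hidden layer
        Z = H @ W2
        P = np.exp(Z - Z.max(1, keepdims=True))
        return H, P / P.sum(1, keepdims=True)   # softmax outputs

    def confidence(X):
        return forward(X)[1].max(1).mean()      # mean max-softmax confidence

    X_noise = rng.standard_normal((512, d))
    print('before pretraining:', confidence(X_noise))   # typically far above 1/k

    # Pretrain on random noise with random labels: the optimum is the uniform
    # output, so cross-entropy minimization pushes confidence toward chance.
    lr = 0.05
    for _ in range(2000):
        X = rng.standard_normal((64, d))
        y = rng.integers(k, size=64)
        H, P = forward(X)
        G = P.copy(); G[np.arange(64), y] -= 1; G /= 64  # softmax-CE gradient
        W2 -= lr * H.T @ G
        W1 -= lr * X.T @ ((G @ W2.T) * (H > 0))

    print('after pretraining:', confidence(rng.standard_normal((512, d))))  # ~1/k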
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grants (NRF-2022R1A2C3008991 to S.P.) and by the Singularity Professor Research Project of KAIST (to S.P.).
References
● Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks. In International Conference on Machine Learning (pp. 1321-1330). PMLR.
● Martini, F. J., Guillamón-Vivancos, T., Moreno-Juan, V., Valdeolmillos, M., & López-Bendito, G. (2021). Spontaneous activity in developing thalamic and cortical sensory networks. Neuron, 109(16), 2519-2534.
● Cheon, J., Lee, S. W., & Paik, S. B. (2024). Pretraining with random noise for fast and robust learning without weight transport. Advances in Neural Information Processing Systems, 37, 13748-13768.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P045: Cortical Microcircuit Modeling for Concurrent EEG-fMRI Recordings
Sunday July 6, 2025 17:20 - 19:20 CEST
P045 Cortical Microcircuit Modeling for Concurrent EEG-fMRI Recordings

Shih-Cheng Chien*1, Stanislav Jiříček1,2,3, Thomas Knösche4, Jaroslav Hlinka1,2, Helmut Schmidt1


1Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic
2National Institute of Mental Health, Klecany, Czech Republic
3Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
4Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany


*Email: chien@cs.cas.cz
Introduction

EEG and fMRI are widely used noninvasive methods for human brain imaging. Concurrent EEG-fMRI recordings help answer fundamental questions about the functional roles of EEG rhythms, their origin, and their relationship with the BOLD signal. Given the fact that both EEG and BOLD signals predominantly originate from postsynaptic potentials (PSPs) [1,2], and considering that distinct inhibitory neuron types influence EEG rhythms differently [3] and possess varied neurovascular coupling properties [4], a cortical microcircuit model incorporating multiple inhibitory neuron types would offer a promising framework for investigating local neural dynamics underlying EEG rhythms and their relationship with BOLD signals.

Methods
We developed a cortical microcircuit model that incorporates excitatory (E) and inhibitory (PV, SOM, and VIP) populations across cortical layers (L2/3, L4, L5, and L6) with realistic configurations, including connection probabilities, synaptic strengths, neuronal densities, and firing rate functions for each neuron type. The model receives three types of external inputs: (1) lateral input, (2) modulatory input, and (3) thalamic input. We characterized the spectral properties of EEG rhythms across a range of external inputs, explored EEG-BOLD correlations under constant and varying input conditions, and analyzed how neuronal populations contribute to EEG rhythms and the EEG-BOLD correlation.
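A schematic rate-model skeleton of such a microcircuit (the populations match the description, but weights, gains, and inputs below are placeholders, not the fitted configuration):

    import numpy as np

    # 16 populations: E, PV, SOM, VIP in each of L2/3, L4, L5, L6;
    # tau * dr/dt = -r + f(W r + I_ext + noise).
    rng = np.random.default_rng(0)
    pops = [f'{layer}_{cell}' for layer in ('L2/3', 'L4', 'L5', 'L6')
            for cell in ('E', 'PV', 'SOM', 'VIP')]
    n = len(pops)
    W = rng.normal(0, 0.5, (n, n))             # placeholder connectivity
    inh = [i for i, p in enumerate(pops) if not p.endswith('E')]
    W[:, inh] = -np.abs(W[:, inh])             # inhibitory columns are negative

    f = lambda x: 1 / (1 + np.exp(-x))         # population firing-rate function
    tau, dt, steps = 0.01, 1e-4, 20000         # 2 s of activity
    r = np.full(n, 0.1)
    rates = np.empty((steps, n))
    I_ext = np.zeros(n)
    I_ext[pops.index('L4_E')] = 1.0            # e.g. thalamic input targeting L4
    for t in range(steps):
        noise = rng.normal(0, 0.5, n)          # synaptic noise driving the rhythms
        r += dt / tau * (-r + f(W @ r + I_ext + noise))
        rates[t] = r

    # An EEG proxy can be formed from summed synaptic currents (W @ r) of the
    # pyramidal populations, and a BOLD proxy by convolving rates with an HRF.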
Results
The model generates EEG rhythms, with increased power in the alpha (8-12 Hz), beta (13-30 Hz), and gamma (30-50 Hz) bands at low modulatory input, and increased delta (0.5-4 Hz) and theta (4-7 Hz) power at high modulatory input. We found that low-frequency EEG activity (from the delta to the low beta band) was driven more strongly by infragranular than by supragranular populations. Conversely, supragranular populations drive high-frequency EEG activity (high beta and gamma bands) more strongly. As for EEG-BOLD correlations, we found that the alpha-BOLD correlation is almost exclusively driven by fluctuations (i.e., the standard deviation of firing rates) in infragranular populations, with little contribution from the supragranular layer.

Discussion
Our cortical microcircuit model generates EEG rhythms based on a generic mechanism involving the nonlinear amplification and filtering of synaptic noise. Our investigation focused on different forms of long-range external input, which targets distinct neuronal populations. The model could be used to help design optimal stimulation protocols for various applications, including the effect of specific neuronal populations on EEG and BOLD.




Acknowledgements
The publication was supported by a Lumina-Quaeruntur fellowship (LQ100302301) by the Czech Academy of Sciences (awarded to HS) and ERDF-Project Brain Dynamics, No. CZ.02.01.01/00/22_008/0004643. We acknowledge the core facility MAFIL supported by the Czech-BioImaging large RI project (LM2018129 funded by MEYS CR) for their support in obtaining scientific data presented in this work.
References
[1] https://doi.org/10.1016/j.brainresrev.2009.12.004
[2] https://doi.org/10.1016/j.cub.2018.11.052
[3] https://doi.org/10.1016/j.tins.2003.09.016
[4] https://doi.org/10.1523/JNEUROSCI.3065-04.2004
Speakers

Helmut Schmidt

Scientific researcher, Institute of Computer Science, Czech Academy of Sciences

Jaroslav Hlinka

Senior researcher, Institute of Computer Science of the Czech Academy of Sciences
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P046: Model Parameter Estimation for TMS-induced MEPs
Sunday July 6, 2025 17:20 - 19:20 CEST
P046 Model Parameter Estimation for TMS-induced MEPs

Shih-Cheng Chien*1, Christian Röse2, Peng Wang2,3, Helmut Schmidt1, Jaroslav Hlinka1,4, Thomas R. Knösche2, Konstantin Weise2


1Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic
2Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
3Institute of Psychology, University of Greifswald, Greifswald, Germany
4National Institute of Mental Health, Klecany, Czech Republic

*Email: chien@cs.cas.cz

Introduction

TMS-induced MEPs are widely utilized in both basic research and clinical practice. The MEP parameters, such as input-output (I/O) curves, often exhibit significant variability across individuals, both in healthy populations and in patients. Understanding the sources of this variability is critical for improving the precision of motor-related diagnoses. Previously, we developed a biologically inspired model capable of reproducing MEP waveforms. In this study, we apply model fitting to an open MEP dataset of ten healthy participants [1] and investigate the distribution of model parameters underlying the variability of I/O curves.

Methods
The model incorporates the descending motor pathways from the spinal cord to the hand muscles, with synthetic D- and I-waves serving as inputs. The spinal cord component consists of 100 conductance-based leaky integrate-and-fire alpha motor neurons (aMNs), which interact with a population of Renshaw cells (RCs) that function as a common inhibitory pool. The aMNs are connected to 100 motor units in the hand muscle component. Each motor unit generates a motor unit action potential (MUAP) in response to spikes from its corresponding aMN. The simulated MEP is computed as the sum of these time-shifted MUAPs.
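A stripped-down sketch of the forward model (Renshaw inhibition and the conductance-based neuron details are omitted; latencies, kernels, and thresholds are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-4, 0.06                      # 0.1 ms steps, 60 ms window
    n_steps, n_mn = int(T / dt), 100
    t = np.arange(n_steps) * dt

    # Descending drive: synthetic D- and I-waves convolved with a synaptic kernel.
    drive = np.zeros(n_steps)
    for lat, amp in [(0.002, 1.0), (0.0035, 0.8), (0.005, 0.6)]:  # wave latencies (s)
        drive[int(lat / dt)] += amp
    kern = np.exp(-t[:200] / 0.004)         # simple AMPA-like decay
    drive = np.convolve(drive, kern)[:n_steps]

    # Simplified LIF alpha motor neuron pool with distributed excitabilities.
    v = np.zeros(n_mn)
    thresh = rng.normal(1.0, 0.15, n_mn)
    spikes = [[] for _ in range(n_mn)]
    tau_m = 0.01
    for i in range(n_steps):
        v += dt / tau_m * (-v + 3.0 * drive[i])
        idx = np.flatnonzero(v > thresh)
        for j in idx:
            spikes[j].append(i)
        v[idx] = 0.0                        # reset (Renshaw pool omitted here)

    # MEP = sum of time-shifted motor unit action potentials (biphasic MUAP).
    muap = np.sin(2 * np.pi * t[:150] / 0.015) * np.exp(-t[:150] / 0.005)
    mep = np.zeros(n_steps + 150)
    for sp in spikes:
        for i in sp:
            mep[i:i + 150] += muap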
Results
The resting motor threshold (RMT) across individuals in the dataset was 41.3 ± 6.0% of the maximum stimulator output (MSO). Peak latencies showed no significant variation with MEP peak-to-peak amplitude. Fitting the model to individual MEP waveforms provided insights into the neuronal interactions underlying MEP generation. The D- and I-waves, after convolution with synaptic kernels (AMPA and NMDA), produced sustained inputs to the aMNs. Renshaw cells played a critical role in suppressing excessive spikes, particularly at high TMS intensities, preventing excessive oscillations in the MEP waveform.
Discussion
We employed a computationally efficient and biologically plausible model to explain the variability in individual TMS-induced MEPs. The fitting procedure relied on synthesizing common D- and I-waves for healthy participants, which may introduce additional errors in parameter estimation. Future work will validate this approach using patient data, where individual D- and I-waves are available, to improve accuracy and robustness in parameter fitting.




Acknowledgements

The publication was supported by a Lumina-Quaeruntur fellowship (LQ100302301) by the Czech Academy of Sciences (awarded to HS) and ERDF-Project Brain Dynamics, No. CZ.02.01.01/00/22_008/0004643.
References
[1]https://doi.org/10.1016/j.brs.2022.06.013
Speakers
Helmut Schmidt, Scientific researcher, Institute of Computer Science, Czech Academy of Sciences
Jaroslav Hlinka, Senior researcher, Institute of Computer Science of the Czech Academy of Sciences
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P047: Graph Analysis of EEG Functional Connectivity during Lie Detection
Sunday July 6, 2025 17:20 - 19:20 CEST
P047 Graph Analysis of EEG Functional Connectivity during Lie Detection

Yun-jeong Cho1, Hoon-hee Kim*2

1 Department of Data Engineering, Pukyong National University, Busan, South Korea
2Department of Computer Engineering and Artificial Intelligence, Pukyong National University, Busan, South Korea

*Email: h2kim@pknu.ac.kr

Introduction

Lie detection is an important research topic in various fields, including psychology, forensic science, and neuroscience, as lying involves complex cognitive processes. By measuring the brain's functional connectivity using EEG data and calculating graph-theoretical metrics (e.g., the average clustering coefficient), it is possible to quantitatively assess changes in brain network dynamics between lie and truth conditions [1]. In this study, we aimed to analyze overall brain connectivity differences between lie and truth conditions by computing inter-channel coherence and network metrics within a specific frequency band during the answer phase.
Methods
Twelve subjects were divided into two groups: those who consistently lied and those who consistently told the truth. After two subjects were excluded from the lie group, each group comprised five subjects. EEG data were recorded for 15 seconds while subjects answered a specific question, with only the first 3 seconds after answer onset analyzed. Inter-channel coherence [2] was computed in the high-frequency range, focusing on the beta band, which is activated during lying. A functional connectivity (FC) matrix was constructed by applying a threshold, and key metrics, such as the average clustering coefficient and global efficiency, were calculated. Statistical validation was performed using t-tests and Mann-Whitney U tests.
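For illustration, the sketch below reproduces this pipeline on synthetic data with scipy and networkx; the sampling rate, channel count, coherence threshold (0.5), and the 13-30 Hz beta band are assumptions, not the study's settings.

```python
import numpy as np
import networkx as nx
from scipy.signal import coherence

fs = 250.0                                   # sampling rate (Hz); hypothetical
eeg = np.random.randn(32, int(3 * fs))       # 32 channels, 3 s analysis window

# inter-channel coherence averaged over the beta band
n = eeg.shape[0]
fc = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        f, coh = coherence(eeg[i], eeg[j], fs=fs, nperseg=128)
        band = (f >= 13) & (f <= 30)
        fc[i, j] = fc[j, i] = coh[band].mean()

# threshold to a binary FC matrix, then compute graph metrics
adj = (fc > 0.5).astype(int)
np.fill_diagonal(adj, 0)
g = nx.from_numpy_array(adj)
print(nx.average_clustering(g))
print(nx.global_efficiency(g))
```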
Results
Overall, significant differences in brain network metrics were observed between the lie and truth conditions (Fig. 1). In particular, the average clustering coefficient was significantly higher in the lie group than in the truth group. Statistical analyses confirmed that these differences were significant, with a larger than expected effect size, suggesting that overall brain connectivity is altered when individuals lie. These findings support the notion that the complex cognitive processes involved in lying may lead to changes in the brain's network organization.
Discussion
This study compared the overall brain network changes between lie and truth conditions using the average clustering coefficient computed for each subject. The results showed that the lie condition exhibited increased global brain connectivity, suggesting an additional cognitive load during lying. However, using subject-level averages limits the ability to directly assess local connectivity changes in specific brain regions, and caution is warranted in interpretation due to the small sample size. Future research should include a larger number of subjects and incorporate various network metrics, such as inter-channel analyses, to more precisely evaluate brain connectivity changes.



Figure 1. Fig 1. Topographic maps of the average clustering coefficient comparing lie (left) and truth (right) groups during the answer phase. Increased clustering (dark red) in the lie condition indicates significantly greater overall brain connectivity compared to the truth condition.
Acknowledgements
This study was supported by the National Police Agency and the Ministry of Science, ICT & Future Planning (2024-SCPO-B-0130), the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00242528), and the National Program for Excellence in SW, supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation) in 2025 (2024-0-00018).
References
1. Gao J, Gu L, Min X et al. (2022). Brain Fingerprinting and Lie Detection: A Study of Dynamic Functional Connectivity Patterns of Deception Using EEG Phase Synchrony Analysis. IEEE Journal of Biomedical and Health Informatics, 26(2), 600-613. https://doi.org/10.1109/jbhi.2021.3095415
2. Bowyer S. (2016). Coherence a measure of the brain networks: past and present. Neuropsychiatric Electrophysiology, 2(1). https://doi.org/10.1186/s40810-015-0015-7
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P048: From Density to Void: Why Brain Networks Fail to Reveal Complex Higher-Order Structures
Sunday July 6, 2025 17:20 - 19:20 CEST
P048 From Density to Void: Why Brain Networks Fail to Reveal Complex Higher-Order Structures

Moo K. Chung*1, Anass B. El-Yaagoubi2, Anqi Qiu3, Hernando Ombao2


1Department of Biostatistics and Medical informatics, University of Wisconsin, Madison, USA
2Statistics Program, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
3Department of Health Technology and Informatics, Hong Kong, China


*Email: mkchung@wisc.edu

Introduction

In brain network analysis using resting-state fMRI, there is growing interest in modeling higher-order interactions—beyond simple pairwise connectivity—using persistent homology [1]. Despite the promise of these advanced tools, robust and consistently observed time-evolving higher-order interactions remain elusive. In this study, we examine why conventional analyses often fail to reveal complex higher-order structures, such as interactions involving four, five, or more nodes, and explore whether higher-order interactions truly exist in functional brain networks.

Methods

We apply persistent homology to analyze correlation networks over a range of thresholds h. A simplicial complex is constructed from the connectivity matrix c(i,j) where nodes (0-simplices) represent individual time series and edges (1-simplices) are included if c(i,j) > h. For triangles (2-simplices), a simplex is formed if all three pairwise connections among a triplet of nodes exceed the threshold h. Higher-order simplices are defined analogously. We then examine the consistency of these higher-order topological features across time and subjects by quantifying the probability of overlap in the persistent features.
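A minimal sketch of this construction (a clique complex on a thresholded correlation matrix, with synthetic data and an arbitrary threshold h) might look as follows.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
ts = rng.standard_normal((50, 200))       # 50 regions x 200 time points; synthetic
c = np.corrcoef(ts)                       # connectivity matrix c(i, j)
h = 0.3                                   # threshold; arbitrary for illustration

adj = c > h
np.fill_diagonal(adj, False)

def is_simplex(nodes):
    # a k-simplex is included iff every pair of its nodes exceeds the threshold
    return all(adj[i, j] for i, j in combinations(nodes, 2))

n = c.shape[0]
edges      = [s for s in combinations(range(n), 2) if is_simplex(s)]
triangles  = [s for s in combinations(range(n), 3) if is_simplex(s)]
tetrahedra = [s for s in combinations(range(n), 4) if is_simplex(s)]
print(len(edges), len(triangles), len(tetrahedra))
```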


Results

Our preliminary analysis based on rs-fMRI of 400 subjects reveals that correlation networks tend to yield either nearly complete graphs or highly fragmented structures, neither of which exhibits robust higher-order topological features. As the number of nodes involved in an interaction increases, the probability that multiple brain regions activate simultaneously decays exponentially, as observed in both empirical data and theoretical models. These findings indicate that resting-state fMRI predominantly reflects pairwise interactions, with only infrequent occurrences of three-node interactions. Nonetheless, even these predominant pairwise interactions are highly intricate, giving rise to complex network dynamics characterized by lower-dimensional topological profiles such as 0D (connected components) and 1D (cycles) features [2].

Discussion
Our results indicate that conventional connectivity analyses are limited in detecting robust higher-order interactions, as they often yield networks that are either overly dense or fragmented, masking subtle connectivity patterns. Alternative metrics, such as mutual information or entropy, may better capture the nonlinear, multiscale dependencies among brain regions [3]. Notably, higher-order interactions are not exclusively defined by multi-node connectivity; even pairwise interactions can become highly complex when organized into cycles or spiral patterns over time. Future work should integrate these alternative measures with persistent homology to reveal hidden connectivity patterns, ultimately enhancing our understanding of functional brain organization.





Figure 1. Left: Graph representation of pairwise interactions between nodes in a brain network. Right: Higher-order interactions depicted with colored simplices—yellow for 3-node (triangle) interactions and blue for 4-node (tetrahedron) interactions.
Acknowledgements
NIH grants EB028753, MH133614 and NSF grant MDS-201077

References
[1] El-Yaagoubi, A. B., Chung, M. K., Ombao, H. (2023). Topological data analysis for multivariate time series data. Entropy, 25(11), 1509.
[2] Chung, M. K., Ramos, C. G., De Paiva, F. B., Mathis, J., Prabhakaran, V., Nair, V. A., Meyerand, M. E., Hermann, B. P., Binder, J. R., & Struck, A. F. (2023). Unified topological inference for brain networks in temporal lobe epilepsy using the Wasserstein distance. NeuroImage, 284, 120436.
[3] Li, Q., Steeg, G. V., Yu, S., Malo, J. (2022). Functional connectome of the human brain with total correlation. Entropy, 24(12), 1725.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P049: Planning and hierarchical behaviors in homeostatic optimal control
Sunday July 6, 2025 17:20 - 19:20 CEST
P049 Planning and hierarchical behaviors in homeostatic optimal control

Simone Ciceri*1,2, Atilla-Botond Kelemen1,3, Henning Sprekeler1,3,4


1Modelling of Cognitive Processes, Technical University of Berlin, Berlin, Germany
2Charité–Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany
3Bernstein Center for Computational Neuroscience, Berlin, Germany
4Science of Intelligence, Research Cluster of Excellence, Berlin, Germany

*Email:simone.ciceri@tu-berlin.de

Introduction
Animal survival depends on the ability to maintain the stability of a set of internal variables, such as nutrient levels or water balance. This internal regulation, known as homeostasis, often requires the acquisition of resources via interactions with the external environment [1]. We reasoned that competition among multiple homeostatic needs combined with a rich environment may be sufficient to explain a wide range of complex behaviors. To test this hypothesis, we developed a control-theoretic problem setting for an agent that aims to preserve homeostasis of multiple internal variables while foraging in environments with distributed resources.

Methods
We model a synthetic agent that actively forages to minimize deviations of its internal variables from their respective set points, which reflect its individual demands. These variables gradually decay over time but can be replenished by collecting resources from the environment. The resources are distributed around the environment to generate competition among different needs. We study the foraging behavior that results from minimizing a cost function that combines homeostatic errors and motion costs. In simple 1D environments, we obtain optimal behavioral policies using optimal control methods. In 2D settings, we parametrize the policies with artificial neural networks that are optimized using evolutionary algorithms.
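As a sketch of the kind of objective involved, the snippet below assumes a quadratic form combining homeostatic error and motion cost with linear decay dynamics; the exact cost and dynamics used in the study may differ.

```python
import numpy as np

# Assumed quadratic objective: squared deviations of internal variables from
# their set points, plus a motion cost on the control signal.
def total_cost(x, u, set_points, w_motion=0.1, dt=0.01):
    # x: (T, n_vars) internal-state trajectory; u: (T, 2) velocity commands
    homeostatic = np.sum((x - set_points) ** 2) * dt
    motion = w_motion * np.sum(u ** 2) * dt
    return homeostatic + motion

# internal variables decay over time and are replenished by collected resources
def step(x, intake, decay=0.05, dt=0.01):
    return x + (-decay * x + intake) * dt

x = np.array([1.0, 1.0])
xs, us = [], []
for _ in range(1000):
    u = np.zeros(2)                   # placeholder policy; the optimal one is solved for
    x = step(x, intake=np.zeros(2))   # no resources collected in this toy rollout
    xs.append(x); us.append(u)
print(total_cost(np.array(xs), np.array(us), set_points=np.array([1.0, 1.0])))
```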

Results
We show that internal homeostasis can generate a rich repertoire of behaviors that depend on both the structure of the environment and internal demands. First, when resources are sparse the agent displays planning strategies, such as stocking up on one variable before foraging for others. Second, agent behaviors can be decomposed into a small set of simpler policies, each of which satisfies one internal need. The agent hierarchically selects from this set of behaviors based on its internal state. Finally, optimal strategies can be highly sensitive to the agent's demands. In the same environment, we can observe sudden transitions between different behaviors when changing the set point at which the internal variables need to be maintained.

Discussion
Our model demonstrates the possible emergence of complex behavior from the simple goal of internal stability. Optimal foraging strategies are shaped by both environmental factors and internal demands, potentially accounting for the large variability often observed among individuals of the same species, even within the same environment. Our model also emphasizes how strongly the dynamics of the internal state—which are generally not accessible in behavioral experiments—are mirrored in the agent's behavior. The relevance of these findings is not confined to behavioral modeling and analysis: it is likely that the neural activity that drives animal behaviors will be similarly sensitive to the internal state of the animal.





Acknowledgements
-
References
[1] Woods, S. C., & Ramsay, D. S. (2007). Homeostasis: Beyond Curt Richter. Appetite, 49(2), 388-398. https://doi.org/10.1016/j.appet.2006.09.015


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P050: Linking biological context to models with ModelDB
Sunday July 6, 2025 17:20 - 19:20 CEST
P050 Linking biological context to models with ModelDB

Inessa Cohen1, Xincheng Cai2, Mengmeng Du2, Yiting Kong2, Hongyi Yu2, Robert A. McDougal*1,2,3,4
1Program in Computational Biology and Biomedical Informatics, Yale University, New Haven, CT, USA
2Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
3Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, CT, USA
4Wu Tsai Institute, Yale University, New Haven, CT, USA


*Email: robert.mcdougal@yale.edu
Introduction

ModelDB (https://modeldb.science) was founded almost 30 years ago to address the challenges of reproducibility in computational neuroscience, promote code reuse, and facilitate model discovery. It has grown to hold the source code for ~1,900 published studies. Recent enhancements, presented here, have focused on expanding its model collection and improving its biological context. However, discoverability and interpretability depend on having reliable metadata for entire models and their components. To address this, we sought to use machine learning (ML) to classify ion channel subtypes based on source code, identify key predictors, and compare the results to those from a large language model (LLM).
Methods
We applied manual and automatic techniques to increase the biological context displayed when exploring ModelDB, as well as to increase the visibility of existing data. Network model properties and some file-level ion channel types were manually annotated. Biology-focused explanations of model files were generated automatically by an LLM. Features were extracted using a rule-based approach from NEURON [1] MOD files (a common format for ion channel and receptor model components) after deduplication that ignored white space and comments. Five-fold cross-validation was used to assess ML predictions. Subsets of model code from many files and a controlled vocabulary were provided to an LLM to generate whole-model metadata, which was assessed manually.
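For illustration, a hedged sklearn sketch of the cross-validated classification step is shown below; the feature columns and labels are invented stand-ins for the rule-based MOD-file features, and a random forest is used as one plausible classifier choice.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical file-level features from a rule-based MOD-file pass:
# [number of STATE variables, has NONSPECIFIC_CURRENT, uses k ion, uses ca ion]
X = np.array([
    [2, 0, 1, 0], [3, 0, 1, 0], [2, 0, 1, 0],   # potassium-channel examples
    [1, 0, 0, 1], [2, 0, 0, 1], [1, 1, 0, 1],   # calcium-channel examples
])
y = ["K", "K", "K", "Ca", "Ca", "Ca"]            # illustrative labels only

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3))          # the study used five-fold CV on real data
```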
Results
We have updated the ModelDB website to support more types of models and to pair browsing of models and files with biological and computational context. The ML classifier identified a number of features (state count, nonspecific currents, use of common ions) as key for predicting ion channel type. It worked well for identifying broad channel types but struggled with more granular subtype identification, for which our training set had few examples. Calcium-activated potassium channels were one of the best performing subtypes. ML results were compared with those from an LLM and from rule-based approaches. LLM performance on whole-model metadata prediction from source code was highly dependent on the broad category of metadata.
Discussion
ModelDB has long prioritized connecting models to biology, from its days as part of the SenseLab project, where its sister site NeuronDB [2] once gathered compartment-level channel expression data. Many model submitters now choose to contribute an “experimental motivation” when submitting new models. Biology and model code are both often unclear on what should count as “the same,” posing challenges for both manual and automated metadata assignment. Nevertheless, it is our hope that pairing code with enriched biological context will make computational models more accessible, interpretable, and reusable.



Acknowledgements
We thank Rui Li for curating ModelDB model network metadata.
References
1. Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179-1209. https://doi.org/10.1162/neco.1997.9.6.1179
2. Mirsky, J. S., Nadkarni, P. M., Healy, M. D., Miller, P. L., & Shepherd, G. M. (1998). Database tools for integrating and searching membrane property data correlated with neuronal morphology. Journal of Neuroscience Methods, 82(1), 105-121. https://doi.org/10.1016/S0165-0270(98)00049-1
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P051: Dynamical systems principles underlie the ubiquity of neural data manifolds
Sunday July 6, 2025 17:20 - 19:20 CEST
P051 Dynamical systems principles underlie the ubiquity of neural data manifolds

Isabel M. Cornacchia*1, Arthur Pellegrino*1,2, Angus Chadwick1

1 Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, UK
2Gatsby Computational Neuroscience Unit, School of Life Sciences, University College London, UK


*Email: isabel.cornacchia@ed.ac.uk, a.pellegrino@ucl.ac.uk

Introduction

The manifold hypothesis posits that low-dimensional geometry is prevalent in high-dimensional data. In neuroscience, such data emerge from complex interactions between neurons, most naturally described as dynamical systems. While these models offer mechanistic descriptions of the processes generating the data, the geometric perspective remains largely empirical, relying on dimensionality reduction methods to extract manifolds from data. The link between the dynamic and geometric views on neural systems therefore remains an open question. Here, we argue that modelling neural manifolds in a differential geometric framework naturally provides this link, offering insights into the structure of neural activity across tasks and brain regions.

Methods
In this work, we argue that many manifolds observed in high-dimensional neural systems emerge naturally from the structure of their underlying dynamics. We provide a mathematical framework to characterise the conditions for a dynamical system to be manifold-generating. Using the framework, we verify in datasets that such conditions are often met in neural systems. Next, to investigate the relationship between the dynamics and geometry of neural population activity, we apply this framework to jointly infer both the manifold and the dynamics on it directly from large-scale neural recordings.
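As a simplified caricature of this idea, the sketch below infers a linear "manifold" via PCA and then fits linear latent dynamics by least squares; note the authors infer manifold and dynamics jointly, and on curved rather than linear manifolds.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((500, 80))   # T x N population activity; synthetic stand-in

# 1) a linear "manifold" via PCA (the joint, curved version is the paper's contribution)
Yc = Y - Y.mean(0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
Z = Yc @ Vt[:3].T                    # 3-d latent trajectories

# 2) linear dynamics on the manifold: solve Z[1:] = Z[:-1] @ A.T by least squares
At, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
print(np.linalg.eigvals(At.T))       # eigenvalues summarize the inferred latent flow
```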
Results
In recordings of macaque motor and premotor cortex during a reach task [1], we uncover a manifold with behaviourally relevant geometry: neural trajectories on the inferred manifold closely resemble the hand movement of the animal, without any need to explicitly decode the behaviour. Furthermore, from 2-photon imaging of mouse visual cortex during a visual discrimination task [2], we show that neurons tracked over one month of learning have a stable curved manifold shape, despite the neural dynamics changing. In these two example datasets, we show that considering the curvature of neural manifolds and the dynamics on them allows us to extract more behaviourally relevant neural representations and to probe their changes over learning (Fig. 1).
Discussion
Overall, our framework offers a formal mathematical link between the geometric and dynamical perspectives on population activity, and provides a generative model to uncover task manifolds from experimental data. We use this framework to highlight how behavioural and stimulus variables are naturally encoded on curved manifolds, and how this encoding evolves over learning. This lays the mathematical groundwork for systematically modelling neural manifolds in the language of differential geometry, which can be reused across tasks and brain regions. Overall, bridging geometry and dynamics is a key step towards a unified view of neural population activity which can be used to generate and test hypotheses about neural computations in the brain.



Figure 1. a. The framework (MDDS) jointly fits the manifold and dynamics to data. b. Reach task. c. Inferred manifold and trajectories within it. d. Visual task. e. Neural representation of the angle over time. f. Variance explained by a model trained on pre-learning and (top): tested on pre-learning (bottom): tested on post-learning while refitting components, either separately or in combination.
Acknowledgements

References
1.https://doi.org/10.1016/j.neuron.2018.09.030
2.https://doi.org/10.1038/s41593-021-00914-5


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P052: Geometry and dynamics in sinusoidally perturbed cortical networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P052 Geometry and dynamics in sinusoidally perturbed cortical networks

Martina Cortada*1,2, Joana Covelo1, Maria V. Sanchez-Vives1,3

1Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Carrer de Rosselló, 149-153, 08036 Barcelona, Spain
2Facultat de Física, Universitat de Barcelona (UB), Carrer de Martí i Franquès, 1-11, 08028 Barcelona, Spain
3Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys, 23, 08010 Barcelona, Spain

*Email: cortada@recerca.clinic.cat


Introduction
Cerebral cortex networks exemplify complex coupled systems in which collective behaviors give rise to emergent properties. This study explores how sinusoidal electric field modulation impacts cortical networks exhibiting self-sustained slow oscillations (SOs), characterized by alternating neuronal silence (Down states) and activity (Up states) at around 1 Hz [1,2]. SOs, described as the cortical default activity pattern [3], are crucial for memory consolidation, plasticity, and homeostasis [4].
Here, we aimed to understand SOs and how to control them. Specifically, how do the amplitude and frequency of sinusoidal electric fields shape emergent network states and the transitions between them?
Methods
We varied the frequencies and amplitudes of sinusoidal fields applied to cortical networks exhibiting spontaneous SOs. These networks form a coupled system in which intrinsic oscillations interact with an external periodic force. To characterize their response, we define a suitably reduced phase space in which trajectories emerge from the interaction between the perturbation and the network's activity. These trajectories are constructed by segmenting the network response into single-cycle epochs corresponding to the perturbation, mapping each oscillatory response into a structured, low-dimensional representation. The system's behavior is then analyzed through the evolution of these trajectories within this phase space, using geometric and topological approaches.
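The sketch below illustrates the trajectory-closure measure on a synthetic signal: cycles of the perturbation are segmented, embedded in a simple value-versus-derivative phase plane, and scored by the start-to-end Euclidean distance (all parameter values are hypothetical).

```python
import numpy as np

fs = 1000.0
f_stim = 1.2                       # perturbation frequency (Hz); hypothetical
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)  # stand-in network signal

# segment the response into single cycles of the perturbation
period = int(fs / f_stim)
cycles = [x[i:i + period] for i in range(0, x.size - period, period)]

# embed each cycle in a reduced phase space (here simply value vs. derivative)
closures = []
for c in cycles:
    traj = np.column_stack([c, np.gradient(c)])
    closures.append(np.linalg.norm(traj[0] - traj[-1]))  # start-to-end distance

print(np.mean(closures))   # small values suggest locking; large/irregular, desynchronization
```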

Results
When sinusoidally perturbed, these networks exhibit distinct qualitative behaviors shaped by the interplay between intrinsic oscillations and external driving forces. By examining the trajectories representing this interplay, we found that the Euclidean distance between their start and end points distinguishes different dynamical regimes, including phase and frequency locking, quasi-periodicity, and desynchronization.
Beyond trajectory closure, the intricate patterns of these curves across stimulation conditions indicate the existence of multiple stable or metastable regimes, suggesting that external forcing can drive transitions between distinct attractor-like states in cortical dynamics.
Discussion
Through this analysis, we have explored how perturbations shape network responses across the parameter space. Our findings suggest that cortical networks encode these effects through the geometric structure of their dynamical trajectories, revealing patterns of entrainment and stability under electric field modulation. This framework deepens our understanding of coupled neural oscillators and how they can be controlled, which has important implications for neuromodulation strategies in clinical contexts.




Acknowledgements
Funded by INFRASLOW PID2023-152918OB-I00, funded by MICIU / AEI / 10.13039/501100011033 / FEDER. Co-funded by the European Union (ERC, NEMESIS, project number 101071900) and the Departament de Recerca i Universitats de la Generalitat de Catalunya (AGAUR 2021-SGR-01165), supported by FEDER.

References
[1] M.V. Sanchez-Vives. Current Opinion in Physiology, vol. 15, 2020, pp. 217-223.
[2] M. Torao-Angosto et al. Frontiers in Systems Neuroscience, vol. 15, 2021.
[3] M.V. Sanchez-Vives and M. Mattia. Archives Italiennes de Biologie, vol. 158, no. 1, 2020, pp. 59-65. doi:10.12871/000398292020112.
[4] J.M. Krueger et al. Sleep Medicine Reviews, vol. 28, 2016, pp. 46-54.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P053: Modular structure-function coupling supports high-order interactions in the human brain
Sunday July 6, 2025 17:20 - 19:20 CEST
P053 Modular structure-function coupling supports high-order interactions in the human brain

Jesus M Cortes1,2,3, Borja Camino-Pontes1,4, Antonio Jimenez-Marin1,4, Iñigo Tellaetxe-Elorriaga1,4, Izaro Fernandez-Iriondo1,4, Asier Erramuzpe1,2, Ibai Diez1,2,5, Paolo Bonifazi1,2, Marilyn Gatica6,7, Fernando Rosas8,9,10,11, Daniele Marinazzo12, Sebastiano Stramaglia13,14
*Email: jesus.m.cortes@gmail.com
1Computational Neuroimaging Lab, BioBizkaia Health Research Institute, Barakaldo, Spain
2IKERBASQUE: The Basque Foundation for Science, Bilbao, Spain
3Department of Cell Biology and Histology, Faculty of Medicine and Nursing, University of the Basque Country, Leioa, Spain
4Biomedical Research Doctorate Program, University of the Basque Country, Leioa, Spain
5Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
6NPLab, Network Science Institute, Northeastern University London, London, United Kingdom.
7Precision Imaging, School of Medicine, University of Nottingham, United Kingdom.
8Department of Informatics, University of Sussex, Brighton, United Kingdom.
9Sussex Centre for Consciousness Science and Sussex AI, University of Sussex, Brighton, United Kingdom.
10Center for Psychedelic Research and Centre for Complexity Science, Department of Brain Sciences, Imperial College London, London, UK.
11Center for Eudaimonia and Human Flourishing, University of Oxford, Oxford, United Kingdom.
12Department of Data Analysis, Ghent University, Ghent, Belgium.
13Università degli Studi di Bari Aldo Moro, Bari, Italy.
14INFN, Sezione di Bari, Italy.
Introduction

The brain exhibits a modular organization across structural (SC) and functional connectivity (FC), spanning multiple scales from microcircuits to large-scale networks. While SC and FC share similarities, FC fluctuates over shorter time scales. Structure-function coupling (SFC) examines statistical dependencies between SC and FC [1], often at the link-wise level. However, modular coupling offers a multi-scale approach to understanding SC-FC interactions [2-3]. This study integrates functional MRI and diffusion-weighted imaging to investigate modular SFC and the role of high-order interactions (HOI) in functional organization.




Methods
We analyzed SC and FC from multimodal neuroimaging data, using graph-based modular decomposition to assess brain network structure. To quantify HOI, we computed O-information [4], assessing redundancy and synergy among brain regions. HOI gradients were also derived to explore the organization of these interactions [5]. We then examined the coupling between modular SC and both redundancy and synergy, identifying statistical associations that reveal how structural networks relate to functional integration and segregation.
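For reference, O-information can be estimated under a Gaussian assumption as Ω(X) = (n−2)H(X) + Σ_j [H(X_j) − H(X_−j)] [4]; a minimal numpy sketch follows (a plain Gaussian estimator on synthetic data, not the study's pipeline).

```python
import numpy as np

def gaussian_entropy(cov):
    # differential entropy of a k-variate Gaussian: 0.5 * ln((2*pi*e)^k * det(cov))
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(x):
    # x: (n_vars, n_samples); positive -> redundancy-dominated, negative -> synergy
    n = x.shape[0]
    cov = np.cov(x)
    total = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = np.delete(np.delete(cov, i, axis=0), i, axis=1)
        total += gaussian_entropy(cov[i:i + 1, i:i + 1]) - gaussian_entropy(rest)
    return total

rng = np.random.default_rng(0)
print(o_information(rng.standard_normal((5, 2000))))   # ~0 for independent signals
```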


Results & Discussion
Our findings indicate that SC is linked to both redundant and synergistic functional interactions at the modular level. SC showed both positive and negative correlations with redundancy, suggesting that stronger structural connections within a module can either amplify or reduce functional redundancy. In contrast, synergy consistently exhibited a positive correlation with SC, indicating that increased SC density promotes synergistic interactions. These results refine our understanding of structure-function relationships, highlighting how SC modulates HOI in the brain’s modular architecture.





Acknowledgements
JMC acknowledges financial support from Ikerbasque: The Basque Foundation for Science, and from Spanish Ministry of Science (PID2023-148012OB-I00), Spanish Ministry of Health (PI22/01118), Basque Ministry of Health (2023111002 & 2022111031).
References
[1]https://doi.org/10.1038/s41583-024-00846-6
[2]https://doi.org/10.1038/srep10532
[3]https://doi.org/10.1002/hbm.24312
[4]https://doi.org/10.1103/PhysRevE.100.032305
[5]https://doi.org/10.1103/PhysRevResearch.5.013025
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P054: Integrating Arbor and TVB for multi-scale modeling: a novel co-simulation framework applied to seizure generation and propagation
Sunday July 6, 2025 17:20 - 19:20 CEST
P054 Integrating Arbor and TVB for multi-scale modeling: a novel co-simulation framework applied to seizure generation and propagation

Thorsten Hater*1, Juliette Courson2, Han Lu1, Sandra Diaz Pier1, Thanos Manos2


1Jülich Supercomputing Centre, Forschungszentrum Jülich
2ETIS Lab, ENSEA, CNRS, UMR8051, CY Cergy-Paris University, Cergy, France
3Department of Computer Science, University of Warwick, Coventry, UK
*Email: t.hater@fz-juelich.de
Introduction
Computational neuroscience has traditionally focused on isolated scales, limiting our understanding of brain function across multiple levels. Microscopic models capture biophysical details of neurons, while macroscopic models describe large-scale network dynamics. However, integrating these levels into a unified framework remains a significant challenge.
Methods
We present a novel co-simulation framework integrating Arbor and The Virtual Brain (TVB). Arbor, a next-generation neural simulator, enables biophysically detailed simulations of single neurons and networks [1], while TVB models whole-brain dynamics based on anatomical connectivity [2]. Our framework employs an MPI intercommunicator for real-time bidirectional interaction, converting discrete spikes from Arbor into continuous activity in TVB, and vice versa. This approach allows for the replacement of TVB nodes with detailed neuron populations, enabling multi-scale modeling of brain dynamics.
Results
To demonstrate the framework's capabilities, we conducted a case study on seizure generation at the neuronal level and its whole-brain propagation [3,4]. The Arbor-TVB co-simulation successfully captured the emergence of seizure activity in single neurons and its large-scale spread across the brain network, highlighting the feasibility of integrating micro- and macro-scale dynamics.
Discussion
The Arbor-TVB framework provides a comprehensive computational tool for studying neural disorders and optimizing treatment approaches. By capturing interactions across spatial scales, this method enhances our ability to investigate how local biophysical mechanisms influence global brain states. This multi-scale approach advances research in computational neuroscience, offering new possibilities for therapeutic testing and precision medicine interventions for neurological disorders.
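To illustrate the two conversions at the interface described in Methods, the sketch below bins spikes into a population rate and draws spikes from a rate via an inhomogeneous Poisson process; in the actual framework these buffers are exchanged over the MPI intercommunicator, and all constants here are hypothetical.

```python
import numpy as np

def spikes_to_rate(spike_times, t0, t1, n_cells, bin_ms=1.0):
    # discrete spikes (ms) -> continuous population rate (Hz) for a TVB node
    bins = np.arange(t0, t1 + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins)
    return counts / n_cells / (bin_ms * 1e-3)

def rate_to_spikes(rate_hz, t0, bin_ms, n_cells, rng):
    # continuous TVB activity -> spike trains, via an inhomogeneous Poisson draw
    spikes = []
    for k, r in enumerate(rate_hz):
        n = rng.poisson(r * n_cells * bin_ms * 1e-3)
        spikes.extend(t0 + (k + rng.random(n)) * bin_ms)
    return np.sort(spikes)

rng = np.random.default_rng(0)
st = rate_to_spikes(np.full(200, 10.0), 0.0, 1.0, 100, rng)
print(spikes_to_rate(st, 0.0, 200.0, 100).mean())   # recovers roughly 10 Hz
```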





Acknowledgements
.
References
[1]doi:10.1109/empdp.2019.8671560
[2]doi:10.3389/fninf.2013.00010
[3]doi:10.1523/jneurosci.1091-13.2013
[4]doi:10.1007/s10827-022-00811-1
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P055: Risk sensitivity modulates impulsive choice and risk aversion behaviors
Sunday July 6, 2025 17:20 - 19:20 CEST
P055 Risk sensitivity modulates impulsive choice and risk aversion behaviors

Rhiannon L. Cowan*1, Tyler S. Davis1, Bornali Kundu2, Ben Shofty1, Shervin Rahimpour1, John D. Rolston3, Elliot H. Smith1

1Department of Neurosurgery, University of Utah, Salt Lake City, United States
2Department of Neurosurgery, University of Missouri – Columbia, Missouri, United States
3Department of Neurosurgery, Brigham & Women’s Hospital, Boston, United States

*Email: rhiannon.cowan@utah.edu


Introduction
Impulsivity is a multifaceted psychological construct that may impede optimal decision-making. Impulsive choice (IC) is the tendency to favor smaller, immediate, or more certain rewards over larger, delayed, or uncertain rewards [1]. A strategy such as risk aversion allows individuals to avoid potential loss of reward and gain instant gratification [2,3]. Risk is defined as the variance associated with an outcome [4]; risk sensitivity (RS) may therefore be examined via positive and negative prediction error (PE) signals, a canonical signal of reinforcement learning [5-7]. We posit that more impulsive individuals will exhibit risk-aversive tendencies, observable as suboptimal performance and neural encoding of negative PEs.

Methods
71 neurosurgical epilepsy patients underwent implantation of electrodes into the cortex and deep brain structures. The Balloon Analog Risk Task (BART) is a useful paradigm to measure impulsivity and reward behaviors by conceptualizing the probability of potential reward [8]. Subject IC level was calculated as the difference between passive and active trial inflation time (IT) distributions. Outcome-aligned broadband high frequency (HFA; 70-150Hz) activity was modeled as a linear combination of temporal difference (TD) variables [5,9,10]. We compute the neural correlates of reward in a trial-by-trial manner from TD models with optimal learning rates [11] and RSTD models, which account for positive and negative PEs [12].
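As a sketch of the RSTD idea (asymmetric learning rates for positive versus negative prediction errors, after ref. [12]), consider the following toy update; the learning rates and task are illustrative, not the fitted model.

```python
import numpy as np

def rstd_update(v, reward, alpha_pos=0.1, alpha_neg=0.2):
    # risk-sensitive TD update: separate learning rates for the two PE signs
    pe = reward - v                       # prediction error
    alpha = alpha_pos if pe >= 0 else alpha_neg
    return v + alpha * pe, pe

v = 0.0
rng = np.random.default_rng(0)
for _ in range(100):
    r = rng.choice([0.0, 1.0])            # risky binary outcome
    v, pe = rstd_update(v, r)
print(v)   # alpha_neg > alpha_pos drags the value estimate down: risk aversion
```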
Results
More impulsive (MI) choosers were more accurate than less impulsive (LI) choosers (Z=2.04, p=.041), notably for yellow balloon trials (Z=4.09, p<.0001), yet LI choosers gained more points overall (Z=-3.57, p=.00036), primarily from yellow balloons (Z=-3.58, p=.00036; Fig. 1). We observed no differences in optimal learning rates for reward or risk models between groups (p's>.05), but increased RS was correlated with impulsivity (t(69)=-2.17, p=.03). We observed greater encoding of positive PEs (25.06%) than negative PEs (11.46%; χ2=159, p<.001). However, a group-level dichotomy revealed that MI choosers encoded significantly more negative RPEs (MI=11.42%, LI=9.45%; χ2=5, p=.025), whereas LI choosers encoded more positive PEs (χ2=4, p=.039).

Discussion
We utilize a dataset of 7000 intracranial electrodes to model RS and the neural underpinnings of IC. During BART, we found that LI choosers took more risks, leading to more optimal performance, while MI choosers' accuracy-point tradeoff suggests a risk-aversion strategy that aligns with the IC definition. Neurally, MI choosers encoded more negative PEs, and LI choosers encoded more positive PEs, which, in tandem with the differential behavioral strategies exhibited, suggests that RS drives reward-seeking and may be modulated by impulsivity. This supports previous studies associating positive PEs with risk-seeking behavior and negative PEs with risk-aversion behavior [13]. These findings have implications for decision-making, RS, and IC.





Figure 1. Figure 1. A. BART schematic B&C. IC scatter & histogram using Z-Value difference between active & passive ITs (apZVals) D. Accuracy by color E. Points by color F. LI & MI point distributions G-I. Regression plots: performance vs IC J. Glass brain of electrodes K&L. LI & MI regions encoding negative PE & positive PE M&N. LI & MI risk PE signals by trial category O. Risk sensitivity vs impulsivity.
Acknowledgements
This research was supported by funding: R01MH128187
References
1. https://doi.org/10.1097/01.wnr.0000132920.12990.b9
2. https://doi.org/10.3389/fpsyg.2015.0051
3. https://doi.org/10.1016/j.bbr.2018.10.008
4. https://doi.org/10.1523/JNEUROSCI.5498-10.2012
5. https://doi.org/10.1016/j.neuron.2006.06.024
6. https://doi.org/10.1038/s41586-019-1924-6
7. https://doi.org/10.31887/DCNS.2016.18.1/wschultz
8. https://doi.org/10.1037//1076-898X.8.2.75
9. https://doi.org/10.1523/JNEUROSCI.2041-09.2009
10. https://doi.org/10.1523/JNEUROSCI.2770-10.2010
11. https://doi.org/10.1109/TNN.1998.712192
12. https://doi.org/10.1023/A:1017940631555
13. https://doi.org/10.1371/journal.pcbi.1009213

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P056: A numerical simulation of neural fields on curved geometries
Sunday July 6, 2025 17:20 - 19:20 CEST
P056 A numerical simulation of neural fields on curved geometries

Neekar Mohammed, David J. Chappell, Jonathan J. Crofts*
Department of Physics and Mathematics, Nottingham Trent University, Nottingham, UK

*Email: jonathan.crofts@ntu.ac.uk


Introduction
Brainwaves are crucial for information processing, storage, and sharing [1]. While a plethora of computational studies exist, the mechanisms behind their propagation remain unclear. Current models often simplify the cortex as a flat surface, ignoring its complex geometry [2]. In this study, we incorporate realistic brain geometry and connectivity into simulations to investigate how brain morphology influences wave propagation. Our goal is to leverage this approach to elucidate the relationship between the increasing convolution of the mammalian brain over evolution and its impact on cognition.

Methods
To achieve efficient modelling of large-scale cortical structures, we have extended isogeometric analysis (IGA) [3], a powerful tool for physics-based engineering simulations, to the complex nonlinear integro-differential equation models found in neural field models. IGA utilises non-uniform rational B-splines (NURBS), the standard for geometry representation in computer-aided design, to approximate solutions. Specifically, we will employ isogeometric collocation (IGA-C) methods, leveraging the high accuracy of NURBS with the computational efficiency of collocation. While IGA-C has proven effective for linear integral equations in mechanics and acoustics, its application to nonlinear NFMs represents a significant advancement.
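For orientation, a flat 1-D caricature of the kind of neural field equation being solved (an Amari-type model du/dt = -u + w∗f(u) on a ring, integrated with forward Euler rather than IGA-C) is sketched below; the kernel and parameter values are assumptions.

```python
import numpy as np

n = 256
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = x[1] - x[0]

def w(d):                                  # Mexican-hat connectivity kernel (assumed form)
    return 2.5 * np.exp(-d**2 / 0.18) - 1.0 * np.exp(-d**2 / 1.0)

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)           # periodic (ring) distance
W = w(d) * dx                              # discretized integral operator

f = lambda u: 1.0 / (1.0 + np.exp(-10 * (u - 0.3)))   # sigmoidal firing rate

u = 0.5 * np.exp(-x**2 / 0.1)              # localized initial bump
dt = 0.01
for _ in range(2000):
    u += dt * (-u + W @ f(u))              # forward-Euler step of du/dt = -u + w*f(u)
print(u.max(), u.argmax())                 # a sustained localized bump, if parameters allow
```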
Results
To enable more realistic brain simulations, we have developed a novel IGA-C method that directly utilises point cloud data and bypasses triangular mesh generation, allowing for the solution of partial integro-differential equation models of neural activity on complex cortical-like domains. Here, we demonstrate the method's capabilities by studying both localised and traveling wave activity patterns in a two-dimensional neural field model on a torus [4]. The model offers a significant computational advantage over standard mesh-dependent methods and, more importantly, provides a crucial framework for future research into the role of cortical geometry in shaping neural activity patterns via its ability to incorporate complex geometries.
Discussion
This work presents a novel numerical procedure for integrating neural field models on arbitrary two-dimensional surfaces, enabling the study of physiologically realistic systems. This includes, for example, accurate cortical geometries and connectivity functions that capture regional heterogeneity. Future research will focus on elucidating the influence of curvature on the nucleation and propagation of travelling wave solutions on cortical geometries derived from imaging studies.




Acknowledgements
NM, DJC and JJC were supported through the Leverhulme Trust research project grant RPG-2024-114
References
1.https://doi.org/10.1038/nrn.2018.20
2.https://doi.org/10.1007/s00422-005-0574-y
3.https://doi.org/10.1016/j.cma.2004.10.008
4.https://doi.org/10.1007/s10827-018-0697-5
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P057: Biologically Interpretable Machine Learning Approaches for Analyzing Neural Data
Sunday July 6, 2025 17:20 - 19:20 CEST
P057 Biologically Interpretable Machine Learning Approaches for Analyzing Neural Data

Madelyn Esther C. Cruz*1,2, Daniel B. Forger1,2,3

1Department of Mathematics, University of Michigan, Ann Arbor, MI, USA
2Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
3Michigan Center for Interdisciplinary and Applied Mathematics, University of Michigan, Ann Arbor, MI, USA

*Email: mccruz@umich.edu
Introduction

Deep neural networks (DNNs) often achieve impressive classification performance, but they operate as "black boxes", making them challenging to interpret [1]. They may also struggle to capture the dynamics of time-series data, such as electroencephalograms (EEGs), because of their indirect handling of temporal information. To address these challenges, we explore the use of Biological Neural Networks (BNNs), machine learning models inspired by the brain's physiology, on neuronal data. By leveraging biophysical neuron models, BNNs offer better interpretability by closely modeling neural dynamics, providing insights into how biological systems generate complex behavior.
Methods
This study applies backpropagation to networks of biophysically accurate mathematical neuron models to develop a BNN model. Specifically, we define a BNN architecture using modified versions of the Hodgkin–Huxley model [2] and integrate this within traditional neural network algorithms. These BNNs are then used to classify both EEG and non-EEG signals, generate EEG signals to predict brain states, and analyze EEG neurophysiology through model-derived parameters. We also compare the performance of our BNN architecture to those of traditional neural networks.
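A toy sketch of the core idea, backpropagating through forward-Euler Hodgkin-Huxley dynamics with PyTorch so that gradients reach the input weights, is given below; the objective, constants, and single-neuron setting are simplifications, not the authors' BNN architecture.

```python
import torch

# Standard HH gating kinetics (voltage in mV)
def alpha_n(v): return 0.01 * (v + 55) / (1 - torch.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * torch.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - torch.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * torch.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * torch.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + torch.exp(-(v + 35) / 10))

w = torch.tensor([0.5, 0.5], requires_grad=True)   # trainable input weights
inputs = torch.tensor([8.0, 4.0])                  # two constant input currents (uA/cm^2)

v = torch.tensor(-65.0); n = torch.tensor(0.317)
m = torch.tensor(0.052); h = torch.tensor(0.596)
dt, vs = 0.01, []
for _ in range(5000):                              # 50 ms of simulated time
    i_ext = (w * inputs).sum()
    i_na = 120.0 * m**3 * h * (v - 50.0)           # sodium current (E_Na = 50 mV)
    i_k = 36.0 * n**4 * (v + 77.0)                 # potassium current (E_K = -77 mV)
    i_l = 0.3 * (v + 54.387)                       # leak current
    v = v + dt * (i_ext - i_na - i_k - i_l)
    n = n + dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    m = m + dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h = h + dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    vs.append(v)

loss = (torch.stack(vs).mean() + 60.0) ** 2        # toy objective on the mean voltage
loss.backward()
print(w.grad)                                      # gradients flow through the dynamics
```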
Results
Our BNNs demonstrate strong performance in classifying handwritten digits from the MNIST Digits Dataset, learning faster than traditional neural networks. The same BNN architecture also excels on time-series neuronal datasets, effectively distinguishing EEG recordings and power spectral densities associated with alertness vs. fatigue, varying consciousness levels, and different workloads. Additionally, we trained our BNNs to exhibit different frequencies observed in EEG recordings and found that the variability of synaptic weights and applied currents increased with the target frequency range.
Discussion
Analyzing gradients from backpropagation in BNNs reveals similarities between their learning mechanisms and Hebbian learning in the brain, in terms of how synaptic weights change the loss function and how changing the weights at specific time intervals impacts learning. In particular, synaptic weight updates occur only when presynaptic or postsynaptic neurons fire [3]. This results in fewer parameter changes during training compared to DNNs while still capturing temporal dynamics, leading to improved learning efficiency and interpretability. Overall, applying backpropagation to accurate ordinary differential equation models enhances neuronal data classification and interpretability while providing insights into brain learning mechanisms.



Acknowledgements
We acknowledge the following funding: ARO MURI W911NF-22-1-0223 to DBF.
References
[1] http://doi.org/10.1038/nature14539
[2] https://doi.org/10.1007/s00422-008-0263-8
[3] https://doi.org/10.1016/j.neucom.2014.11.022


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P058: Homeostatic memory engrams in mesoscopic structural connectomes
Sunday July 6, 2025 17:20 - 19:20 CEST
P058 Homeostatic memory engrams in mesoscopic structural connectomes

Fabian Czappa*1, Marvin Kaster1, Marcus Kaiser2, Markus Butz-Ostendorf1,3, Felix Wolf1

1Laboratory for Parallel Programming, Department of Computer Science, Technical University of Darmstadt, Hochschulstraße 10, 64285 Darmstadt, Hesse, Germany
2Translational Neuroimaging, Faculty of Medicine & Health Sciences, University of Nottingham, NG7 2RD, Nottingham, United Kingdom
3Translational Medicine and Clinical Pharmacology, Boehringer Ingelheim Pharma GmbH & Co. KG, Birkendorfer Straße 65, 88397 Biberach/Riss, Baden-Wuerttemberg, Germany

*Email: fabian.czappa@tu-darmstadt.de
Introduction

Memory engrams are defined as physical traces of memory [1]. However, their actual representation in the brain is still unknown. To shed light on the underlying mechanisms, we simulate the formation of memories in healthy human subjects based on connectomes extracted from their DT-MRI brain scans. To prepare the networks for learning, we first bring them into a state of homeostatic equilibrium, leaving their topology largely intact [2]. Once this homeostatization is complete, we perform a memory experiment by stimulating groups of neurons and observing memory formation as the network changes its structure to maintain equilibrium [3]. After our "thought experiment", we can precisely locate the memory engram in the connectome.
Methods
We use the Model of Structural Plasticity (MSP) [4], which grows and retracts synaptic elements based on a homeostatic rule. When a neuron searches for a partner, it chooses one based on the number of free synaptic elements and a distance-dependent probability kernel. We augment the original kernel at longer distances, giving preference to the vicinity of established synapses. After homeostatizing the structural connectome with the augmented MSP, we select a group of concept cells (CC) from the middle temporal lobe and two groups of neurons C1 and C2 scattered outside this region. We then perform a Hebbian learning experiment, associating CC with C1. We perform our experiments using data from n=7 healthy human subjects [5].
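The partner-selection step can be sketched as follows; the Gaussian kernel width, the form of the augmentation (a flat boost for already-connected partners rather than their full vicinity), and all constants are assumptions for illustration.

```python
import numpy as np

def partner_probabilities(pos, free_elements, synapses, i, sigma=750.0, boost=2.0):
    # pos: (n, 3) neuron positions (um); free_elements: (n,) vacant synaptic elements
    # synapses: (n, n) current synapse counts; i: index of the searching neuron
    d = np.linalg.norm(pos - pos[i], axis=1)
    p = free_elements * np.exp(-d**2 / (2 * sigma**2))   # distance-dependent kernel
    p *= 1.0 + boost * (synapses[i] > 0)                 # augmented preference near
    p[i] = 0.0                                           # established partners; no autapses
    return p / p.sum()

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5000, (200, 3))
free = rng.integers(1, 4, 200).astype(float)
syn = np.zeros((200, 200))
print(partner_probabilities(pos, free, syn, 0)[:5])
```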
Results
Homeostatizing the connectome brings the node-degree distribution from a power-law to a normal distribution, yet we keep many distinguishing features of the network. The (geometric) axon-length histogram, the small-worldness, and the assortativity – among others – are comparable between the scanned connectome and the homeostatized one. Furthermore, we see that we form a memory engram after picking neurons for CC, C1, and C2 and stimulating CC and C1 together. Testing with n=7 high-resolution connectomes, we see that the memory engram is located in specific brain areas such as the inferior parietal lobule (7 times), the superior temporal lobe (7 times), but only sometimes in the fusiform gyrus (4 times); see Figure 1 for details.
Discussion
For the first time, it is now possible to conduct brain simulations based on individual brain scans without parameter fitting. Using MSP-generated avatar connectomes of healthy subjects that were topologically similar to the original tractograms, our method ensured the functioning of model neurons in a physiological regime, which was the necessary precondition for the learning experiments. The proposed approach is the starting point of various testable and personalized brain simulations, from designing novel stimulation protocols for transcranial stimulations (TMS, tDCS) to innovative AD models exploring the causal relationship between homeostatic imbalance, network decay, and cognitive decline.




Figure 1. We evaluate our model on n=7 high-resolution structural connectomes of healthy adults. Shown is the number of connectomes in which the stimulation created an engram within the C1/C2 group within each area. Our criterion is that the firing frequency of the readout neuron exceeds three times the standard deviation of its usual firing frequency.
Acknowledgements
The authors thank the German Federal Ministry of Education and Research and the Hessian Ministry of Science and Research, Art and Culture for supporting this work as part of the NHR funding. Moreover, the authors acknowledge the computing time provided to them on the HPC Lichtenberg II at TU Darmstadt, funded by the German Federal Ministry of Education and Research and the State of Hesse.


References
[1]https://doi.org/10.1016/s0361-9230(99)00182-3
[2] https://doi.org/10.1016/j.neuroimage.2009.10.003
[3]https://doi.org/10.3389/fninf.2024.1323203
[4]https://doi.org/10.1371/journal.pcbi.1003259
[5] https://doi.org/10.1002/hbm.25464


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P059: Modeling Cholinergic Heterogeneity in Whole-Brain Cortical Dynamics: Bridging Local Circuits to Global State Transitions
Sunday July 6, 2025 17:20 - 19:20 CEST
P059 Modeling Cholinergic Heterogeneity in Whole-Brain Cortical Dynamics: Bridging Local Circuits to Global State Transitions

Leonardo Dalla Porta*1, Jan Fousek2, Alain Destexhe3, Maria V. Sanchez-Vives1,4

1Institute of Biomedical Research August Pi i Sunyer (IDIBAPS), Barcelona, Spain
2Central European Institute of Technology (CEITEC), Masaryk University, Brno, Czech Republic
3Institute of Neuroscience (NeuroPSI), Paris-Saclay University, Paris, France
4Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain


*Email: dallaporta@recerca.clinic.cat

Introduction
The wake-sleep cycle consists of fundamentally distinct brain states, including slow-wave sleep (SWS) and wakefulness. During SWS, the cerebral cortex exhibits large, low-frequency fluctuations that propagate as traveling waves [1]. In contrast, wakefulness is characterized by the suppression of low-frequency activity and the emergence of asynchronous, irregular dynamics. While neurotransmitters such as acetylcholine (ACh) are known to regulate the wake-sleep cycle, the mechanisms by which local neuronal interactions give rise to large-scale brain activity patterns are still an open question [2].
Methods
Here, we integrated local circuit properties [2] with global brain dynamics in a whole-brain model [3], constrained by human tractography and cholinergic gene expression. Using a mean-field model, cortical regions incorporated intrinsic excitatory and inhibitory neuronal properties. Connectivity among different brain regions was determined by structural tractography from the human connectome. Cholinergic heterogeneity was introduced using the Allen Human Brain Atlas [4], which quantifies transcriptional activity for over 20,000 genes. M1 and M2 muscarinic receptors, which are targets of ACh, were incorporated by adjusting local node properties, thus creating a detailed virtual brain landscape.
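One simple way to realize such heterogeneity, sketched below, is to scale a local excitability or adaptation parameter per region by normalized receptor expression; the gene stand-ins, the linear form, and the direction of the effect are assumptions, not the fitted model.

```python
import numpy as np

n_regions = 68
rng = np.random.default_rng(0)
m1 = rng.random(n_regions)              # stand-in for M1 (CHRM1) expression per region
m2 = rng.random(n_regions)              # stand-in for M2 (CHRM2) expression per region

def normalize(g):
    return (g - g.min()) / (g.max() - g.min())

b_base = 5.0                            # baseline spike-frequency adaptation strength
alpha = 0.8                             # heterogeneity level (0 = homogeneous)
# higher cholinergic tone is taken here to reduce adaptation, pushing nodes toward
# awake-like asynchronous dynamics
b_region = b_base * (1 - alpha * normalize(m1 + m2))
print(b_region.min(), b_region.max())
```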
Results
Our model successfully replicated spontaneous slow oscillation patterns and their wave propagation properties, as well as awake-like dynamics. Heterogeneity influenced cortical properties, modulating excitability, synchrony, and the relationship between functional and structural connectivity. Additionally, we quantified global brain complexity in response to stimulation using the Perturbational Complexity Index (PCI) [5] to differentiate brain states and assess the impact of cholinergic heterogeneity on evoked activity. We observed a significant increase in complexity during awake-like states, which depended on the level of heterogeneity.
Discussion
Building on prior insights into cholinergic modulation in local circuits [2], we developed a whole-brain model constrained by muscarinic receptor distributions, bridging intrinsic neuronal properties to large-scale brain activity. Overall, our findings underscore the impact of cholinergic heterogeneity on global brain dynamics and transitions across brain states, shaping the spatiotemporal complexity of neural patterns and functional interactions across cortical areas. Moreover, our approach also offers a pathway to studying the role of various neuromodulators involved in brain state regulation.



Acknowledgements
EU H2020 No. 945539 (Human Brain Project SGA3); INFRASLOW PID2023-152918OB-I00 funded by MICIU / AEI / 10.13039/501100011033/FEDER, UE; ERC SyG grant NEMESIS 101071900
References
1.https://doi.org/10.1523/jneurosci.1318-04.2004
2.https://doi.org/10.1371/journal.pcbi.1011246
3.https://doi.org/10.3389/fncom.2022.1058957
4.https://doi.org/10.1038/nature11405
5.https://doi.org/10.1126/scitranslmed.3006294
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P060: Neural Dynamics and Non-Linear Integration: Computational Insights for Next-Generation Visual Prosthetics
Sunday July 6, 2025 17:20 - 19:20 CEST
P060 Neural Dynamics and Non-Linear Integration: Computational Insights for Next-Generation Visual Prosthetics

Tanguy Damart*1, Jan Antolík1

1Faculty of Mathematics and Physics, Charles University, Prague, The Czech Republic

*Email: tanguy.damart@protonmail.com
Introduction

Eliciting percepts akin to natural vision using brain-computer interfaces is the holy grail of visual prosthetics. However, progress has been slowed by our limited understanding of how external perturbations, such as electrical stimulation via multi-electrode arrays (MEAs), perturb recurrent cortical dynamics and engage the inherent visual representations embedded in the cortical circuitry. Furthermore, investigating these questions directly remains difficult, as we rarely have the opportunity to probe the human cortex in vivo. Given this limitation, and aided by the rapid growth in computing capabilities, modeling and simulation tools have naturally come to complement experimental studies.
Methods
We present here a model of intracortical microstimulation (ICMS) applied to a model of columnar primary visual cortex (V1) [1]. The V1 model, built from point neuron models, contains functional retinotopy and orientation maps which are both essential for studying the interaction between external drives such as ICMS and structured spontaneous dynamics. The ICMS is modeled through a phenomenological representation of a MEA that, when activated, causes ectopic spikes in the surrounding cells. The model reproduces two key features of ICMS: sparse and distributed recruitment of neurons, and ectopic spike induction in activated neurons.
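A phenomenological sketch of the recruitment rule, sparse and distance-dependent activation around each electrode, is given below; the functional form and constants are assumptions rather than the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 2000.0, (5000, 2))        # neuron positions in a 2 x 2 mm patch (um)
electrodes = np.array([[500.0, 500.0], [1500.0, 1500.0]])
amp = 20.0                                     # stimulation amplitude (uA); hypothetical

# each electrode elicits ectopic spikes in a sparse, distance-dependent subset of cells
activated = np.zeros(len(pos), dtype=bool)
for e in electrodes:
    d = np.linalg.norm(pos - e, axis=1)
    p = np.minimum(1.0, amp * 50.0 / (d**2 + 1.0))   # decays with distance; assumed form
    activated |= rng.random(len(pos)) < p
print(activated.sum(), "neurons emit an ectopic spike")
```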
Results
We demonstrate that our model reproduces the stereotypical dynamics in V1 seen as a response to ICMS: a transient excitation followed by a lasting inhibition. Comparing the population activity induced by ICMS to the one induced as a response to drifting gratings, we show that ICMS targeting specific orientation columns moderately biases the population activity toward a representation of this orientation. Activating multiple electrodes leads to a slight increase in that orientation bias and produces non-linear activation that could not be predicted by simply adding single-electrode effects. Finally, training a decoder model on responses of the model to natural images, we are also able to show what activity induced by ICMS looks like.
Discussion
Current visual prosthetics rely on phosphene-based encoding through intracortical microstimulation, but this approach underutilizes the complex dynamics of the visual cortex. By investigating how ICMS-induced activity in V1 relates to natural visual activity, we show that current ICMS methods are unlikely to produce anything other than phosphenes and that the non-linear spatio-temporal integrative properties of V1 could be leveraged to enhance visual prosthetic outcomes beyond the resolution limitations of current multi-electrode arrays. The computational framework we developed also enables systematic exploration of stimulation parameters without invasive procedures, such as the development of closed-loop stimulation protocols.



Acknowledgements
The publication was supported by ERDF-Project Brain dynamics, No. CZ.02.01.01/00/22_008/0004643.
References
1. https://doi.org/10.1371/journal.pcbi.1012342
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P061: A Multi-Scale Virtual Mouse Brain for Investigating Cerebellar-Related Ataxic Alterations
Sunday July 6, 2025 17:20 - 19:20 CEST
P061 A Multi-Scale Virtual Mouse Brain for Investigating Cerebellar-Related Ataxic Alterations


Marialaura De Grazia1∗, Elen Bergamo1, Dimitri Rodarie1, Alberto A. Vergani1, Egidio D’Angelo1,2, Claudia Casellato1

1Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
2Digital Neuroscience Center IRCCS Mondino Foundation, Pavia, Italy
∗Email: marialaura.degrazia01@universitadipavia.it

Introduction
Ataxias are neurodegenerative disorders commonly associated with cerebellar dysfunction, often resulting from impaired Purkinje cell (PC) function and progressive loss. In this project, we employed a spiking neural network (SNN) of a mouse olivocerebellar microcircuit, in which we incorporated key PC ataxia-related alterations. We investigated the effect of reduced dendritic arborization, cell shrinkage, and cell loss. These modifications lead to abnormal dynamics within the deep cerebellar nuclei (DCN), which project to the cerebral cortex. Our aim is to create a multiscale framework that integrates cerebellar SNNs with a virtual mouse brain model to investigate the effects of ataxic alterations on whole-brain dynamics (Fig. 1A).

Methods
We built a virtual mouse brain network [2] using the Allen Mouse Connectome [3] to link neural mass models (a Wong-Wang two-population model per node) on The Virtual Brain (TVB) platform (Fig. 1B). Network parameters were tuned employing resting-state fMRI from 20 mice [4]. We are testing a TVB co-simulation framework [5] in which each cerebellar node is replaced with a cerebellar SNN. For cerebellar network reconstruction and simulation, we used the Brain Scaffold Builder (BSB) [1], which integrates the NEST simulator (Fig. 1C). After validating healthy dynamics, we introduced ataxia-related alterations (reduced PC dendritic complexity, shrinkage, and density) and tested various stimulation protocols (e.g., Poisson inputs to mossy fibers from 4 to 100 Hz).
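For reference, the node dynamics can be sketched as follows: one isolated Wong-Wang excitatory-inhibitory node integrated with forward Euler. The parameters are the published reference values of the reduced model (Deco et al., 2014), not the mouse-tuned values used in this study.

```python
import numpy as np

def H(x, a, b, d):
    """Sigmoidal transfer function of the reduced Wong-Wang model (Hz)."""
    y = a * x - b
    return y / (1.0 - np.exp(-d * y))

# Reference parameter values (Deco et al., 2014), illustrative only.
a_E, b_E, d_E = 310.0, 125.0, 0.16
a_I, b_I, d_I = 615.0, 177.0, 0.087
tau_E, tau_I, gamma = 0.1, 0.01, 0.641          # s
W_E, W_I, I0 = 1.0, 0.7, 0.382                  # background input (nA)
J_nmda, w_plus, J_i = 0.15, 1.4, 1.0            # couplings (nA)

dt, T = 1e-4, 2.0                                # s
S_E, S_I = 0.1, 0.1                              # synaptic gating variables
rates = []
for _ in range(int(T / dt)):
    I_E = W_E * I0 + w_plus * J_nmda * S_E - J_i * S_I
    I_I = W_I * I0 + J_nmda * S_E - S_I
    r_E, r_I = H(I_E, a_E, b_E, d_E), H(I_I, a_I, b_I, d_I)
    S_E += dt * (-S_E / tau_E + (1.0 - S_E) * gamma * r_E)
    S_I += dt * (-S_I / tau_I + r_I)
    rates.append(r_E)

print(f"steady-state excitatory rate ~ {np.mean(rates[-1000:]):.1f} Hz")
```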


Results
The results indicate that as PC density, dendritic complexity index (DCI), and size decrease, the DCN become increasingly disinhibited due to reduced inhibitory input from PCs. The mildest network dysfunction occurs with DCI reduction alone, while more pronounced changes emerge when PCs also shrink. However, the most substantial disruptions in cerebellar dynamics arise with progressive PC density reduction (Fig. 1D). Additionally, TVB global coupling and Wong-Wang model parameters were optimized for each resting-state network to maximize the match between experimental and simulated functional connectivity matrices. TVB simulations are in progress.


Discussion
Next steps will consist of further investigating the dynamics of the ataxic cerebellar SNN, with a particular focus on exploring the electrophysiological changes within the PC model. Moreover, we are testing a TVB-NEST co-simulation framework and tuning the proxy nodes, the interface nodes between the two simulators that enable the bidirectional conversion between spike-based and rate-coded information. This multiscale model will enhance our ability to predict and analyze alterations in large-scale brain activity and functional networks under ataxic conditions. Furthermore, it may serve as a computational tool for evaluating neuromodulation protocols (e.g., Transcranial Magnetic Stimulation) for treating cerebellar ataxias.





Figure 1. Figure 1: A. Multiscale framework for cerebellar SNN-neural mass interaction. B. TVB integrates Mouse Connectome with Wong-Wang models. C. Cerebellar network built by BSB maps SNN placement and connectivity. D. Simulating ataxia: reduced PC DCI affects granule cells to PC (via pf: parallel fibers) connectivity, PC loss impacts PC-DCNp connectivity. Testing: mossy fibers input vs. DCNp firing rate.
Acknowledgements

PRIN project 20228B2HN5 “cerebellar NEuromodulation in ATaxia: digital cerebellar twin to predict the MOVEment rescue (NEAT-MOVE)” (CUP master: F53D23005950006310)
References

1. https://doi.org/10.1038/s42003-022-04213-y
2. https://doi.org/10.1523/ENEURO.0111-17.2017
3. https://doi.org/10.1038/nature13186
4. https://doi.org/10.1038/s41467-021-26131-z
5. https://doi.org/10.1016/j.neuroimage.2022.118973


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P062: A complete computational model to explore human sound localization
Sunday July 6, 2025 17:20 - 19:20 CEST
P062 A complete computational model to explore human sound localization

Francesco De Santis*1, Paolo Marzolo1, Alessandra Pedrocchi1, Alberto Antonietti1
1Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
*Email: francesco.desantis@polimi.it
Introduction

Animals' ability to localize sounds in space is one of the most studied aspects of hearing. Sound source position is derived from interaural time difference (ITD), interaural level difference (ILD), and spectral cues. Despite decades of auditory neuroscience research, critical questions remain about the neural processes supporting human sound localization. Understanding them is particularly pressing for cochlear implant users, whose devices often fail to provide precise spatial perception. Our aim is to address these questions through the implementation of a comprehensive spiking neural network.

Methods
The model (depicted in Fig. 1) is composed of a peripheral section, from the sound to the spiking output of the cochlea, and a neural section, from the auditory nerve fibers to the superior olivary complex nuclei, developed using the Brian2Hears [1] and NEST [2] neural simulators, respectively. The main inputs to the network are sounds used in in-vivo experiments in mammals, such as pure tones at different frequencies, clicks, and white noise. To evaluate how source position impacted the overall model activity, we provided stimuli of 1 s duration from different spatial positions in the frontal azimuth plane, analyzing the corresponding spike distribution and overall firing rate of all the in-silico populations involved. Special attention was given to the activity of the lateral and medial superior olives (LSO and MSO), two nuclei of the superior olivary complex considered to be the main players in the processing of ILDs and ITDs.
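To make the two binaural cues concrete, the toy sketch below synthesizes a delayed, attenuated stereo tone and recovers the ITD by interaural cross-correlation and the ILD from RMS levels; the stimulus and cue values are hypothetical, and this sits entirely upstream of the actual spiking model.

```python
import numpy as np

fs = 44100                          # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)      # 1 s stimulus, as in the Methods
tone = np.sin(2 * np.pi * 500 * t)

# Hypothetical cues for a lateralized source:
itd_true = 350e-6                   # s, interaural time difference
ild_true_db = 6.0                   # dB, interaural level difference

shift = int(round(itd_true * fs))
left = tone
right = np.roll(tone, shift) * 10 ** (-ild_true_db / 20)

# ITD estimate: peak lag of the interaural cross-correlation,
# restricted to a physiological +-1 ms range.
max_lag = int(1e-3 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left, np.roll(right, -l)) for l in lags]
itd_est = lags[int(np.argmax(xcorr))] / fs

# ILD estimate: RMS level ratio in dB.
ild_est = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))

print(f"ITD {itd_est*1e6:.0f} us (true {itd_true*1e6:.0f}), "
      f"ILD {ild_est:.1f} dB (true {ild_true_db:.1f})")
```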
Results
The breadth of our model made multiple validation targets possible, allowing us to compare in-silico activity with different results obtained experimentally in vivo or in vitro. First, all neural populations showed phase-locked spiking activity, with a refinement in higher-level populations that is fundamental for correct ITD processing [3]. The analysis of the overall population firing rates of the LSO and MSO also showed physiological plausibility, with an ipsilateral-increasing and a contralateral-increasing sigmoid-like dependence, respectively, on azimuth location [3,4]. Finally, reproducing specific experimental setups focused on MSO processing of ITDs yielded results consistent with the effects of inhibitory input blockage [5] and of input delay manipulation on overall MSO activity [6].
Discussion
The implemented computational model addresses some of the theories concerning the processing of sound and the computation of its location at the brainstem level in humans. We believe that our model could be a promising validation base for studying the effect of cochlear implant-generated artificial inputs for sound localization, shedding light on the different responses of the involved auditory neurons with respect to a real sound stimulation.




Figure 1. End-to-end spiking neural network
Acknowledgements
The work of AA, AP, and FDS in this research is supported by EBRAINS-Italy (European BrainReseArchINfrastructureS-Italy), granted by the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union –NextGenerationEU(Project IR0000011, CUP B51E22000150006, EBRAINS-Italy).
References

1. https://doi.org/10.3389/fninf.2011.00009
2. https://doi.org/10.4249/scholarpedia.1430
3. https://doi.org/10.1002/cphy.c180036
4. https://doi.org/10.1152/physrev.00026.2009
5. https://doi.org/10.1038/ncomms4790
6. https://doi.org/10.1523/JNEUROSCI.1660-08.2008


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P063: Dichotomous Dynamics of Hippocampal and Lateral Septum Oscillations: State-Dependent Topology and Directional Causality During Sleep and Wakefulness
Sunday July 6, 2025 17:20 - 19:20 CEST
P063 Dichotomous Dynamics of Hippocampal and Lateral Septum Oscillations: State-Dependent Topology and Directional Causality During Sleep and Wakefulness

Amir Khani1,Nima Dehghani1,2*

1N3HUB Initiative, Massachusetts Institute of Technology, Cambridge, U.S.A.
2McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, U.S.A.


*Email: nima.dehghani@mit.edu

Introduction

Sharp-wave ripples (SWRs) in the hippocampus (HPC) and high-frequency oscillations (HFOs) in the lateral septum (LS) play critical roles in memory consolidation and information routing to subcortical areas. However, the precise spatiotemporal dynamics and causal relationships between these oscillations remain poorly understood. Using multiple analytical approaches, we explored the coordination of HPC-SWR and LS-HFO oscillations during Non-Rapid Eye Movement (NREM) sleep and wakefulness, focusing on their topological features, causal relationships, and dimensional properties.
Methods
We analyzed publicly available LFP recordings from hippocampal subfields and the lateral septum in freely behaving rats [1]. To identify oscillations, we detected ripples following the methods described in [1]. To assess temporal coordination, we employed conditional probability analysis to quantify ripple co-occurrence between regions. To characterize oscillation structure, we applied Topological Data Analysis (TDA) using time-delay embedding (dimension = 3, delay = 2). To determine directional influences, we implemented Convergent Cross Mapping (CCM) for causality assessment [2]. To evaluate the dimensionality of neural activity, we utilized Principal Component Analysis (PCA) across individual channels, regions, and brain states [3].
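Two of these steps are compactly expressed in code. Below is a minimal sketch of the time-delay embedding used before persistent homology (dimension = 3, delay = 2) and of a conditional co-occurrence estimate on toy event times; the +-50 ms window and the synthetic ripple times are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def delay_embed(x, dim=3, delay=2):
    """Time-delay embedding (dimension = 3, delay = 2, as in Methods),
    producing the point cloud that persistent homology is run on."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def co_occurrence(a_times, b_times, window=0.05):
    """P(B | A): fraction of A events with a B event within +-window s."""
    hits = sum(np.any(np.abs(b_times - t) <= window) for t in a_times)
    return hits / len(a_times)

# Toy ripple times (s): most LS ripples follow an HPC ripple by ~20 ms.
hpc = np.sort(rng.uniform(0, 600, 200))
ls = np.sort(np.concatenate([hpc[:150] + 0.02, rng.uniform(0, 600, 100)]))

print("P(LS|HPC) =", round(co_occurrence(hpc, ls), 2))
print("P(HPC|LS) =", round(co_occurrence(ls, hpc), 2))
print("embedding shape:", delay_embed(np.sin(np.linspace(0, 20, 500))).shape)
```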
Results
HPC ripples consistently preceded LS ripples, with the conditional probability of LS ripples given HPC-SWR, P(LS|HPC), higher than the probability of HPC-SWR given LS ripples, P(HPC|LS), especially during NREM sleep (Fig.1E). TDA revealed distinct topological structures: LS HFOs showed state-dependent complexity differences between sleep and awake, while HPC ripples maintained similar features across states (Fig.1D). Bidirectional causality analysis showed LS-HFOs influenced HPC-SWRs more than the reverse across both states, with a stronger relationship during NREM sleep (Fig.1C). Dimensionality analysis, examining SWR events across epochs/channels and applying PCA, highlighted the variability and complexity of SWRs in HPC compared to more uniform LS HFOs (Fig.1A,F).
Discussion
Our findings reveal a complex, bidirectional relationship between HPC and LS during ripple events, with stronger coupling during NREM sleep. The higher intrinsic dimensionality of HPC activity during SWRs reflects its role in complex memory processes, while the lower-dimensional LS activity suggests a streamlined relay function [1]. These results align with prior evidence showing LS neuron activation by hippocampal SWRs [1] and highlight state-dependent coordination between HPC and LS. State-dependent coordination changes suggest that during NREM sleep, the coordination supports memory consolidation, while during wakefulness, it facilitates spatial navigation and behavior.



Figure 1. (A) PCA dimensionality of HPC/LS ripples during NREM sleep (left) and wakefulness (right). (B) Raw LFP traces of HPC-SWR (top) and LS-HFO (bottom). (C) Bidirectional CCM analysis: NREM (top) and wakefulness (bottom). (D) Topological features during NREM: H1 count (left) and Shannon entropy (right). (E) Ripple co-occurrence probability: NREM (left) and wakefulness (right). (F) Channel-wise PCA dimensionality.
Acknowledgements
N.D. is supported by NIH Grant R24MH117295. The authors wish to thank NIH for its sponsorship of the DANDI archive (DANDI: Distributed Archives for Neurophysiology Data Integration), which provided the open-access data used in this study.
References
[1] Tingley, D., & Buzsáki, G. (2020). Routing of hippocampal ripples to subcortical structures via the lateral septum. Neuron, 105(1), 138-149.e5.
[2] Sugihara, G., et al. (2012). Detecting causality in complex ecosystems. Science, 338(6106), 496-500.
[3] Dehghani, N., et al. (2010). Magnetoencephalography demonstrates multiple asynchronous generators during human sleep spindles. Journal of Neurophysiology, 104(1), 179-188.
[4] Tingley, D., & Buzsáki, G. (2018). Transformation of a spatial map across the hippocampal-lateral septal circuit. Neuron, 98(6), 1229-1242.e5.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P064: Anatomically and Functionally Constrained Bio-Inspired Recurrent Neural Networks Outperform Traditional RNN Models
Sunday July 6, 2025 17:20 - 19:20 CEST
P064 Anatomically and Functionally Constrained Bio-Inspired Recurrent Neural Networks Outperform Traditional RNN Models

Mo Shakiba1,2, Rana Rokni1,2, Mohammad Mohammadi1,2,Nima Dehghani2,3*

1Neuromatch Academy, Neuromatch, Inc.
2N3HUB Initiative, Massachusetts Institute of Technology, Cambridge, U.S.A.
3McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, U.S.A.

*Email: nima.dehghani@mit.edu


Introduction
Understanding how neural circuits drive sensory processing and decision-making is a central neuroscience challenge. Traditional Recurrent Neural Networks (RNNs) capture temporal dynamics but fail to represent the structured synaptic architecture seen in biological systems [1]. Recent spatially embedded RNNs (seRNNs) add spatial constraints for better biological relevance [2], yet they do not fully exploit detailed anatomical and functional data to enhance task performance and neural alignment.
Methods
We introduce a bio-inspired RNN that integrates detailed anatomical connectivity and two-photon calcium imaging data from the MICrONS dataset (https://www.microns-explorer.org/cortical-mm3), which offers nanometer-scale reconstructions and functional recordings from mouse visual cortex. Using neuronal positions, synaptic connections, functional correlations, and Spike Time Tiling Coefficients (STTC) [3]—a robust metric that eliminates firing rate biases—we constrain our model with biologically informed weight initialization, communicability calculations, and a regularizer that penalizes long-distance connections while boosting communicability to promote realistic network properties.
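Because STTC anchors the functional constraints, a self-contained implementation of the Cutts & Eglen (2014) coefficient is sketched below on synthetic spike trains; the 5 ms coincidence window is an illustrative choice, not the study's setting.

```python
import numpy as np

def sttc(a, b, dt, t_start, t_stop):
    """Spike Time Tiling Coefficient (Cutts & Eglen, 2014) for two
    sorted spike-time arrays a, b with coincidence window dt (s)."""
    def tiled_fraction(spikes):
        # Fraction of the recording covered by +-dt tiles around spikes.
        covered, last = 0.0, t_start
        for s in spikes:
            lo, hi = max(s - dt, last), min(s + dt, t_stop)
            if hi > lo:
                covered += hi - lo
                last = hi
        return covered / (t_stop - t_start)

    def prop_within(x, y):
        # Proportion of spikes in x lying within +-dt of any spike in y.
        idx = np.searchsorted(y, x)
        near_lo = np.abs(x - y[np.clip(idx - 1, 0, len(y) - 1)]) <= dt
        near_hi = np.abs(y[np.clip(idx, 0, len(y) - 1)] - x) <= dt
        return np.mean(near_lo | near_hi)

    TA, TB = tiled_fraction(a), tiled_fraction(b)
    PA, PB = prop_within(a, b), prop_within(b, a)
    return 0.5 * ((PA - TB) / (1 - PA * TB) + (PB - TA) / (1 - PB * TA))

rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0, 100, 400))
b = np.sort(np.concatenate([a[:200] + rng.normal(0, 0.002, 200),
                            rng.uniform(0, 100, 200)]))
print(f"STTC = {sttc(a, b, dt=0.005, t_start=0.0, t_stop=100.0):.2f}")
```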
Results

Trained on three distinct decision-making tasks—a 1-step inference task, a Go/No-Go task, and a perceptual decision-making task— our bio-inspired RNN demonstrated significant performance improvements over baseline models across 30 simulations per model (900 total simulations across all model variants). Variants combined W* (biologically initialized weights) or W (standard initialization), D* (actual neuron distances) or D (random distances), and C (communicability calculation). Specifically, the anatomically and functionally constrained model (W*D*C) achieved the highest average accuracy across all tasks: 89.4% on the 1-step inference task, 96.9% on the Go/No-Go task, and 86.7% on the perceptual decision-making task.
Moreover, the biologically constrained model demonstrated superior performance across other evaluation metrics, including validation accuracy, training and validation loss, and network properties such as modularity and small-worldness. Specifically, the average modularity of the W*D*C and WD*C models was highest across all tasks, with values of 0.583 (1-Step Inference), 0.558 (Go/No-Go), and 0.594 (Perceptual Decision Making). Similarly, the average small-worldness was also the highest across two tasks, with values of 3.513 (Go/No-Go) and 4.325 (Perceptual Decision Making) (Fig. 1c-e).
Discussion

Our findings demonstrate that incorporating biological constraints into RNNs significantly boosts both task performance and the emergence of realistic network properties, mirroring actual neural architectures. Future work should extend this approach to visual processing tasks, explore other architectures such as LSTMs and GNNs, and integrate additional biological constraints.




Figure 1. (a) Weight initialization matrix (top left) from MICrONS data, combining functional correlation (bottom left) and STTC (bottom right) with log-normal noise. Anatomical distance matrix (top right) shows neuron positioning. (b) Top 10% models (900 simulations): Effects of λ and W on accuracy, loss, modularity, and small-worldness. (c-e) Task performance across model variants shows W*D*C outperforming baseline RNNs.
Acknowledgements
N.D. is supported by NIH Grant R24MH117295. The authors thank NIH for sponsoring DANDI archive, which provided the open-access data used in this study. M.S., R.R., and M.M. thank Neuromatch Academy for its support and resources for young scholars and this study. They also thank the DataJoint team for their help and guidance.
References
[1] Perich, M. G., & Rajan, K. (2020). Rethinking brain-wide interactions through multi-region 'network of networks' models. Current Opinion in Neurobiology, 65, 146-151.
[2] Achterberg, J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence, 5(12), 1369-1381.
[3] Cutts, C. S., & Eglen, S. J. (2014). Detecting pairwise correlations in spike trains: An objective comparison of methods and application to the study of retinal waves. The Journal of Neuroscience, 34(43), 14288-14303.




Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P065: Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics
Sunday July 6, 2025 17:20 - 19:20 CEST
P065 Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics

Michael Deistler*1,2, Kyra L. Kadhim2,3, Matthijs Pals1,2, Jonas Beck2,3, Ziwei Huang2,3, Manuel Gloeckler1,2, Janne K. Lappalainen1,2, Cornelius Schröder1,2, Philipp Berens2,3, Pedro J. Gonçalves1,2,4,5, Jakob H. Macke*1,2,6

1Machine Learning in Science, University of Tübingen, Germany
2Tübingen AI Center, Tübingen, Germany
3Hertie Institute for AI in Brain Health, University of Tübingen, Tübingen, Germany
4VIB-Neuroelectronics Research Flanders (NERF)
5imec, Belgium
6Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany

*Email: michael.deistler@uni-tuebingen.de, jakob.macke@uni-tuebingen.de
Introduction

Biophysical neuron models provide mechanistic insight into empirically observed phenomena. However, optimizing the parameters of biophysical simulations is notoriously difficult, preventing the fitting of these models to physiologically meaningful tasks or datasets. Indeed, current fitting methods for biophysical models are typically limited to a few dozen parameters [1]. At the same time, backpropagation of error (backprop) has enabled deep neural networks to scale to millions of parameters and large datasets. Unfortunately, no current toolbox for biophysical simulation can perform backprop [2], limiting any study of whether backprop could also be used to construct and train large-scale biophysical neuron models.


Methods
We built a new simulation toolbox, Jaxley, which overcomes previous limitations in constructing and fitting biophysical models. Jaxley implements numerical solvers required for biophysical simulations in the machine learning library JAX. Thanks to this, Jaxley can simulate biophysical neuron models and it can compute the gradient of such simulations with backpropagation of error (Fig. 1a). This makes it possible to optimize thousands of parameters of biophysical models with gradient descent. In addition, Jaxley can parallelize simulations on GPUs, which speeds up simulation by at least two orders of magnitude (Fig. 1b).
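The core idea, writing the integrator in an autodiff framework so that gradients flow through the entire simulation, can be shown in a few lines of plain JAX. This is a minimal sketch of the principle rather than Jaxley's API: a leaky membrane is rolled out with jax.lax.scan and its leak rate is recovered by gradient descent on the simulated trace.

```python
import jax
import jax.numpy as jnp

dt, n_steps, I_ext = 0.1, 300, 1.5          # ms, steps, nA (toy units)

def simulate(g):
    def step(v, _):
        v = v + dt * (-g * v + I_ext)        # forward-Euler membrane update
        return v, v
    _, trace = jax.lax.scan(step, 0.0, None, length=n_steps)
    return trace

target = simulate(0.3)                       # synthetic "recording"

def loss(theta):
    # Optimize the log-parameter so g stays positive and steps are stable.
    return jnp.mean((simulate(jnp.exp(theta)) - target) ** 2)

theta = jnp.log(0.1)
grad_fn = jax.jit(jax.grad(loss))            # backprop through the solver
for _ in range(300):
    theta -= 2e-3 * grad_fn(theta)
print(f"recovered g = {float(jnp.exp(theta)):.3f} (true 0.300)")
```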


Results
We applied Jaxley to a range of datasets and models. First, on a series of single-neuron tasks, it outperformed gradient-free optimization methods (Fig. 1c). Next, we built a simplified biophysical model of the retina (Fig. 1d). We optimized synaptic and channel conductances on dendritic calcium recordings and found that the trained model exhibits compartmentalized responses (matching experimental recordings [3]). Third, we built a recurrent neural network model with biophysically-detailed neurons and trained this network on working memory tasks. Finally, we trained a network of morphologically detailed neurons to solve MNIST with 100k biophysical parameters (Fig. 1e).


Discussion
Optimizing parameters of biophysically detailed models is challenging, and previous (gradient-free) methods have been limited to a few dozen parameters. We developed Jaxley, which overcomes these limitations. Jaxley implements numerical solvers required for biophysical simulations [4], it can easily parallelize simulations on GPUs, and it can perform backprop. Together, these features make it possible to construct and optimize large neural systems with thousands of parameters. We designed Jaxley to be easy to use and we provide extensive documentation, which will make it easy for the community to adopt the toolbox. Jaxley bridges systems neuroscience and biophysics and will enable new insights and opportunities for multiscale neuroscience.





Figure 1. (a) Jaxley can compute gradients with backprop. (b) Jaxley is as accurate as the NEURON simulator and can achieve speed-ups with GPU parallelization. (c) Jaxley can identify single-neuron models, sometimes much more efficiently than a genetic algorithm. (d) Biophysical model of mouse retina predicts dendritic calcium response. (e) Biophysical network solves MNIST computer vision task.
Acknowledgements
This work was supported by the German Research Foundation (DFG) through Germany’s Excellence Strategy (EXC 2064 – PN 390727645) and the CRC 1233 "Robust Vision", the German Federal Ministry of Education and Research (FKZ: 01IS18039A), the 'Certification and Foundations of Safe Machine Learning Systems in Healthcare' project, and the European Union (ref. 101089288, ref. 101039115).

References
[1] Van Geit, W., De Schutter, E., & Achard, P. (2008). Automated neuron model optimization techniques: a review. Biological Cybernetics, 99, 241-251.
[2] Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179-1209.
[3] Ran, Y., Huang, Z., Baden, T., Schubert, T., Baayen, H., Berens, P., ... & Euler, T. (2020). Type-specific dendritic integration in mouse retinal ganglion cells. Nature Communications, 11(1), 2101.
[4] Hines, M. (1984). Efficient computation of branched nerve equations. International Journal of Bio-Medical Computing, 15(1), 69-76.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P066: In-silico study on the dynamical and topological relationships between three-dimensional cultures and relative slices
Sunday July 6, 2025 17:20 - 19:20 CEST
P066 In-silico study on the dynamical and topological relationships between three-dimensional cultures and relative slices

Leonardo Della Mea1*, Angelo Piga2,3, Jordi Soriano3

1DIBRIS, University of Genoa, Genoa, Italy
2Department of Economics and Management, University of Pisa, Pisa, Italy
3Institute of Complex Systems, University of Barcelona, Barcelona, Spain

*Email: leonardo.dellamea@edu.unige.it

Introduction

In-vitro three-dimensional neuronal cultures represent a pioneering technological advancement in exploring brain function and dysfunction in a more realistic environment [1], [2]. Recording the activity of the entire network remains a challenge, since researchers resort to methods developed for two-dimensional cultures, such as multi-electrode arrays or calcium fluorescence imaging. The question quickly arises of whether the read-out layer, through which the network is recorded, reliably captures the topological and dynamical properties of the whole culture. In this study we used in-silico modelling of developing 3D neuronal networks to assess the reliability of a single layer of the culture in capturing dynamical and topological features of the entire parent network.

Methods
The networks were constructed by randomly placing neurons within a rectangular prism. Each cell's dendritic and axonal domains were built upon a main trunk, consisting of concatenated segments, and a group of arborizations, depicted by spherical regions. The overlap of different cells' dendritic and axonal arbours results in synaptogenesis. The network is then simulated as a pulse-coupled neuronal network embedding the Izhikevich model [3]. The bottom layer of neurons was drawn out to emulate MEA recordings. Its dynamical properties (focused on the features of the network burst, NB) and topological traits (based on small-worldness, SW [4], and modularity, Q [5]) were compared to those of the entire parent cultures.
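A minimal stand-in for this construction (illustrative values throughout, not the study's code): neurons placed in a rectangular prism, distance-dependent connection probability, pulse-coupled Izhikevich dynamics, and a bottom "slice" read out as in the MEA emulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# N neurons in a 2 x 2 x 0.5 mm prism; connection probability decays
# with distance (length scale and weights are assumed).
N = 500
pos = rng.uniform([0, 0, 0], [2.0, 2.0, 0.5], size=(N, 3))       # mm
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
W = (rng.random((N, N)) < np.exp(-dist / 0.3)) * 2.0             # j -> i weights
np.fill_diagonal(W, 0.0)
bottom = pos[:, 2] < 0.1        # read-out "slice" emulating the MEA layer

# Regular-spiking Izhikevich parameters (Izhikevich, 2003).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v = np.full(N, -65.0)
u = b * v
rate_all, rate_slice = [], []
for _ in range(1000):                            # 1 ms time steps
    fired = v >= 30.0
    I = 4.0 * rng.normal(size=N) + W @ fired     # noise + pulse coupling
    v[fired], u[fired] = c, u[fired] + d
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # two 0.5 ms half-steps
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    rate_all.append(fired.mean())
    rate_slice.append(fired[bottom].mean())
print(f"firing prob./ms: whole {np.mean(rate_all):.3f}, "
      f"slice {np.mean(rate_slice):.3f}")
```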

Results and discussion
From a dynamical and topological perspective, statistically significant differences were observed for all the parameters measured. For the network's slice, the mean NB sizes and dynamical variability are regularly overestimated, whereas the NB duration is underestimated. Due to slicing, a variable fraction of the neurons in the layer is exposed to the propagating front of the burst; thus, the dynamical differences observed in slices may stem from the fact that NB events are very unlikely to systematically engage equal fractions of the sub-network, justifying the higher dynamical variability. In addition, the reduced size of the network makes the slices liable to wrongly capture the mean event sizes and durations; indeed, both measures depend on the network size. Modularity exhibited a monotonic decline in both 3D and slice systems, although it was marginally overestimated in the slice. The 3D network shows a bell-shaped trend of SW values across maturation, peaking in the middle of the developmental phase. In contrast, the slice's values differed, consistently underestimating it. Sampling the edges of a network whose architecture is grounded on a distance-dependent probability of connection results in sub-networks where this feature is exacerbated. Consequently, in slices, communities are more starkly outlined; in turn, Q increases and SW decreases, due to the reduction of shortcuts.








Acknowledgements
The author wishes to thank Prof. Jordi Soriano and Angelo Piga for their kind advice on the experimental procedure and useful discussions. The author declares no use of Artificial Intelligence in this study.
References
[1] https://doi.org/10.1016/j.isci.2020.101434
[2] https://doi.org/10.1002/term.2508
[3] https://doi.org/10.1109/TNN.2003.820440
[4] https://doi.org/10.1016/j.neuroimage.2009.10.003
[5] https://doi.org/10.1103/PhysRevE.70.066111
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P067: Optimization-Based Insights into Network Configurations of the Songbird Premotor Cortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P067 Optimization-Based Insights into Network Configurations of the Songbird Premotor Cortex

Fatima M. Dia*1, Maher Nouiehed2, Arij Daou1

1Department of Biomedical Engineering, American University of Beirut, Beirut, Lebanon
2Department of Industrial Engineering and Management, American University of Beirut, Beirut, Lebanon

*Email: fmd14@mail.aub.edu

Introduction
Neural circuits in the brain maintain a delicate equilibrium between excitation and inhibition, yet how this balance operates remains unclear [1]. Moreover, neural circuits often exhibit sequences of activity that rely on excitation and inhibition, but the contribution of local networks to their generation is not well understood. This study investigates neural sequence generation within the High Vocal Center (HVC) of the adult zebra finch forebrain. The HVC plays a critical role in the execution of temporally precise courtship songs and comprises three neural populations with distinct electrophysiological responses: glutamatergic basal ganglia-projecting (HVCX) and forebrain-projecting (HVCRA) cortical neurons, and GABAergic interneurons (HVCINT) [2]. While the connections between these neuronal classes are known [1,3], how they orchestrate this temporally precise neural sequence remains largely unknown.
Methods
To address this question, we applied optimization techniques and mathematical modeling to describe the relationships among HVCRA, HVCX, and HVCINT neurons and their bursting patterns. Our approach focused on uncovering the underlying cytoarchitecture of the HVC neural network by utilizing biologically realistic constraints. These constraints included the pharmacological nature of synaptic connections, anatomical and intrinsic properties, neuronal population ratios, precise burst timing, and spiking frequency during song motifs[2,4]. The study incorporated both closed and open network configurations to assess their ability to reproduce observed bursting sequences.
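To convey the flavor of such a search, the toy sketch below enumerates sign-constrained connectivity over a three-unit motif and keeps the sparsest matrix whose threshold-unit cascade reproduces a target activation order; the study's actual constraint set (burst timing, population ratios, spiking frequencies) is far richer, so this only illustrates the recipe.

```python
import itertools
import numpy as np

names = ["RA", "INT", "X"]
sign = np.array([+1, -1, +1])        # Dale's sign per presynaptic unit
target_order = [0, 1, 2]             # desired cascade: RA, then INT, then X

def activation_order(W, drive=0):
    """Propagate activity through threshold units (threshold 1.0);
    simultaneous activations are ordered by index (toy convention)."""
    active, order = {drive}, [drive]
    x = np.zeros(3)
    x[drive] = 1.0
    for _ in range(3):
        inp = W.T @ x                 # input to j is sum_i W[i, j] * x_i
        new = [j for j in range(3) if j not in active and inp[j] >= 1.0]
        if not new:
            break
        j = new[0]
        active.add(j)
        order.append(j)
        x[j] = 1.0
    return order

best = None
for mask in itertools.product([0, 1], repeat=9):  # all binary connectivities
    W = np.array(mask, float).reshape(3, 3) * sign[:, None]
    np.fill_diagonal(W, 0.0)
    if activation_order(W) == target_order and (
            best is None or np.count_nonzero(W) < np.count_nonzero(best)):
        best = W
print("sparsest motif reproducing the sequence:\n", best)
```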
Results
Our computational framework successfully predicted the minimalistic synaptic connections required to replicate the observed bursting patterns of the HVC network. The model identified specific network topologies that satisfied experimental constraints while maintaining functional output. Additionally, our findings indicated that certain network configurations necessitate additional nodes to form a fully connected network capable of sustaining stable sequential bursting. These predictions align with previous experimental data and provide novel insights into potential connectivity motifs that could underlie the temporal precision of song production.
Discussion
This study bridges experimental data with computational predictions, offering a framework for understanding how local excitatory and inhibitory interactions within HVC generate precise neural sequences. By identifying minimal network configurations, our model provides a hypothesis regarding the synaptic architecture required for sequence generation. Future work should incorporate in vivo validation of the predicted connectivity patterns using electrophysiological and optogenetic approaches. Our findings contribute to a broader understanding of how premotor circuits coordinate motor behaviors and may have implications for studying sequence generation in other brain regions beyond the songbird HVC.



Acknowledgements
This work was supported by the University Research Board (URB) and the Medical Practice Plan (MPP) grants at the American University of Beirut.
References
1. https://doi.org/10.1152/jn.00162.2013
2. https://doi.org/10.1038/nature00974
3. https://doi.org/10.1038/nature09514
4. https://doi.org/10.1152/jn.00952.2006


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P068: Modelling the dynamics of in vitro cultured neural networks by using biophysical neuronal models
Sunday July 6, 2025 17:20 - 19:20 CEST
P068 Modelling the dynamics of in vitro cultured neural networks by using biophysical neuronal models

Marco Fabiani1, Ludovico Iannello3, Fabrizio Tonelli4, Eleonora Crocco4,
Federico Cremisi4, Lucio Calcagnile2, Riccardo Mannella1, Angelo Di Garbo1,2
1Department of Physics, University of Pisa
2Institute of Biophysics - IBF, CNR of Pisa
3Institute of Information Science and Technologies ”Alessandro Faedo” - ISTI,
CNR of Pisa
4Scuola Normale Superiore - SNS, Pisa
Email: m.fabiani5@studenti.unipi.it, angelo.digarbo@ibf.cnr.it
Introduction
In this contribution we study the dynamical behaviours arising in a biophysically inspired neuronal network of excitatory and inhibitory neurons. The corresponding model was set up using electrophysiological data recorded from cultured neuronal networks. The recordings of the local field potential generated by the neurons were carried out using a multielectrode array (MEA) apparatus [2]. In particular, we investigated the dynamics emerging in a cultured population of MAPK/ERK-inhibition and BMP-inhibition (MiBi) neurons of the entorhinal cortex [1].
Methods
The MEA recordings were obtained from a grid of 64 × 64 electrodes covering an area of 3.8 mm × 3.8 mm of the neuronal culture. The corresponding local field potentials were acquired with a sampling frequency of 20 kHz. The spiking times of the cultured neurons were obtained by applying specific algorithms to the local field potential signals. Then, an artificial biophysically inspired neural network was built by employing Hodgkin-Huxley-type models for the single neurons. Finally, the parameters describing the computational neural network were chosen by requiring that the simulation results were qualitatively in agreement with the corresponding experimental data.
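As a reference for the single-cell building block, the sketch below integrates a classic Hodgkin-Huxley point neuron with the standard squid-axon parameters; the network model uses Hodgkin-Huxley-type cells of this kind with its own fitted parameters.

```python
import numpy as np

# Classic Hodgkin-Huxley point neuron (standard squid-axon parameters).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm2, mS/cm2
ENa, EK, EL = 50.0, -77.0, -54.4                # mV

def alpha_beta(v):
    """Voltage-dependent gating rates (1/ms)."""
    an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    bn = 0.125 * np.exp(-(v + 65) / 80)
    am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    bm = 4.0 * np.exp(-(v + 65) / 18)
    ah = 0.07 * np.exp(-(v + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
    return an, bn, am, bm, ah, bh

dt, T, I_ext = 0.01, 100.0, 10.0                # ms, ms, uA/cm2
v, n, m, h = -65.0, 0.317, 0.053, 0.596         # resting state
n_spk, above = 0, False
for _ in range(int(T / dt)):
    an, bn, am, bm, ah, bh = alpha_beta(v)
    n += dt * (an * (1 - n) - bn * n)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    I_ion = gNa * m**3 * h * (v - ENa) + gK * n**4 * (v - EK) + gL * (v - EL)
    v += dt * (I_ext - I_ion) / C
    if v > 0 and not above:                     # upward zero-crossing = spike
        n_spk += 1
    above = v > 0
print(f"{n_spk} spikes in {T:.0f} ms at I = {I_ext} uA/cm2")
```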
Results
In agreement with the results described in [2, 3], we found that the MiBi cultured neuronal network is capable of generating bursting activity. Moreover, the analysis of these data shows that the bursting activity is triggered at specific points of the cultured network (centers of activity). In addition, the propagation over the neural culture was characterized by the center-of-activity trajectories (CAT). Furthermore, these cultures exhibit neuronal avalanches with power-law decay. We have shown that the computational model is capable of reproducing the bursting dynamics observed in the cultured neural networks by choosing suitable parameter values in an all-to-all coupled network. By setting up a more detailed network model, obtained by modifying the connectivity matrix and the density of neurons, we showed that such a neuronal network is capable of reproducing many of the experimental data and, qualitatively, their specific features.
Discussion
Although the mathematical model has some intrinsic limitations, the corresponding numerical results helped us to shed light on some basic mechanisms responsible for the generation of bursting in the network, and this can be used to infer that such processes should also be present in the MiBi cultured neuronal network. It would be interesting to check whether improving the quality of the neuronal model would be sufficient to reproduce other experimental features that are not captured by the adopted model. This includes, for instance, using more realistic single-neuron models, synaptic connectivity, and synaptic plasticity.



Acknowledgements
The research was in part supported by the Matteo Caleo Foundation, by Scuola Normale Superiore (FC), by the PRIN AICult grant #2022M95RC7 from the Italian Ministry of University and Research (MUR) (FC), and by the Tuscany Health Ecosystem - THE grant from MUR (FC, GA, ADG).
References
[1] Tonelli, F., et al. (2025). Dual inhibition of MAPK/ERK and BMP signaling induces entorhinal-like identity in mouse ESC-derived pallial progenitors. Stem Cell Reports. https://doi.org/10.1016/j.stemcr.2024.12.002
[2] Iannello, L., et al. (2024). Analysis of MEA recordings in cultured neural networks. pp. 1-5. https://doi.org/10.1109/COMPENG60905.2024.10741515
[3] Iannello, L., et al. (2025). Criticality in neural cultures: Insights into memory and connectivity in entorhinal-hippocampal networks. Chaos, Solitons and Fractals, 194, 116184.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P069: Statistics of spiking neural networks based on counting processes
Sunday July 6, 2025 17:20 - 19:20 CEST
P069 Statistics of spiking neural networks based on counting processes

Anne-Marie Greggs1*, Alexander Dimitrov1

1Department of Mathematics and Statistics, Washington State University, Vancouver, WA

*Email: anne-marie.greggs@wsu.edu
Introduction

Estimating neuronal network activity as point processes is challenging due to the singular nature of events and the high dimensionality of the signals [1]. This project analyzes spiking neural networks (SNNs) using counting process statistics, which are equivalent integral representations of point processes [2]. A small SNN of Leaky Integrate-and-Fire (LIF) neurons is simulated, and spiking events are counted as a vector counting process N(t). The Poisson counting process has known dynamic statistics over time: both mean(t) and variance(t) are proportional to time (equal to r_i·t for each independent source with rate r_i). By standardizing the data, mean dynamics and heteroscedasticity can be removed, allowing comparison to a baseline Poisson counting process.
Methods
Using Brian2 [3], an SNN with LIF neurons and Poisson inputs is simulated. Independent and correlated Poisson processes are modeled, generating spike trains for analysis. The counting process, a stochastic process producing the number of events within a time period, is analyzed using the vector counting process. Mean and covariance of spiking events are estimated for both SNN and Poisson processes, facilitating comparison of statistical properties after standardization by subtracting the mean and scaling by the standard deviation to account for temporal dependencies.
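The standardization step is easy to state in code: for a Poisson process of rate r, N(t) has mean and variance r·t, so the standardized counts Z(t) = (N(t) - r·t)/sqrt(r·t) should have unit standard deviation at every t. A minimal sketch with assumed toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

r, T, dt, n_trials = 20.0, 2.0, 0.001, 100        # Hz, s, s, samples
t = np.arange(dt, T, dt)

# Bernoulli-per-bin approximation of a Poisson process of rate r.
spikes = rng.random((n_trials, len(t))) < r * dt
N = np.cumsum(spikes, axis=1)                     # counting process N(t)
Z = (N - r * t) / np.sqrt(r * t)                  # standardized counts

# Discard early times: small initial counts bias the rescaling
# (cf. the 200 ms cutoff mentioned in the Results).
late = t > 0.2
print("empirical std of Z(t), averaged over late times:",
      round(Z[:, late].std(axis=0).mean(), 2))    # ~1 for Poisson data
```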
Results
Fig. 1 shows the simulated spiking dynamics of two neurons over time. The standardized counts indicate variability aligned with Poisson statistical properties. While the mean counts show a consistent trend, the variance reflects the stochastic nature of neural activity. The centered plot's standard deviation equals the square root of the rate-time product, sqrt(r·t); the standardized plot's standard deviation equals 1, serving as a comparison template. Standardization starts at 200 milliseconds to avoid biases when rescaling the initially small counts.
The covariance matrix quantifies relationships between neurons at certain time and activity levels. Comparing the SNN to modeled Poisson processes reveals notable differences in covariance structures, with the SNN demonstrating greater inter-unit correlation.

Discussion
This study establishes a framework for analyzing the statistical properties of neural network activity, enabling researchers to gain insights into the dynamics of spiking networks. Understanding these aspects is crucial for examining how neural networks respond to stimuli and adapt to changing environments.
The findings highlight the importance of inter-unit dependencies in neural data, with the proposed estimators effectively capturing these dynamics. Future research should broaden parameter exploration and apply the estimators to complex models and real-world data, including comparisons between inhomogeneous Poisson processes with time-varying rates, temporal dependencies, and non-Poisson processes of SNNs.




Figure 1. Two 3D plots compare the activity of Neuron 1 and Neuron 2 over time. The left plot shows counts centered based on the theoretical expectations, while the right plot shows counts standardized by both the theoretical expectations and theoretical standard deviations, with multiple lines representing each of 100 samples.
Acknowledgements
N/A
References
[1] Brown, E. N., Kass, R. E., & Mitra, P. P. (2004). Multiple neural spike train data analysis: state-of-the-art and future challenges. Nature Neuroscience, 7(5), 456-461. doi: 10.1038/nn1254
[2] Cox, D. R., & Isham, V. (1980). Point Processes. Chapman and Hall.
[3] Stimberg, M., Brette, R., & Goodman, D. F. M. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8:e47314. doi: 10.7554/eLife.47314



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P070: Neurospheroids as building blocks for 3D brain-like structures
Sunday July 6, 2025 17:20 - 19:20 CEST
P070 Neurospheroids as building blocks for 3D brain-like structures

Ilaria Donati della Lunga*,1, Francesca Callegari1, Fabio Poggio1, Letizia Cerutti1, 2, Mattia Pesce2, Giovanni Lobello1, Alessandro Simi3, Mariateresa Tedesco1, Paolo Massobrio1,4, Martina Brofiga1,2,5
 
1Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genova, Genova, Italy
2Neurofacility, Istituto Italiano di Tecnologia (IIT), Genova, Italy
3 Central RNA Laboratory, Istituto Italiano di Tecnologia (IIT), Genova, Italy
4National Institute for Nuclear Physics (INFN), Genova, Italy
5ScreenNeuroPharm, Sanremo, Italy


* Email: ilaria.donatidellalunga@edu.unige.it
Introduction. Conventional in vitro neuronal networks have provided insights into brain function and disease mechanisms [1] but often neglect important in vivo properties such as three-dimensionality (3D) and heterogeneity, i.e., the coexistence of different neuronal types. To address these limitations, we aimed to develop 3D cortical (C) and hippocampal (H) cell aggregates known as neurospheroids (NSs): their coupling allowed us to generate homogeneous (CC, HH) or heterogeneous (CH) assembloids (ASs). This study aims to prove that these models enhance the reproducibility, viability, and biological significance of neuronal cultures, while exhibiting in vivo-like electrophysiological patterns, characterized by brain waves.



Methods. We employed a multi-modal approach: structural and mechanical properties were assessed via immunostaining and atomic force microscopy; functional activity was evaluated by calcium imaging and electrophysiological recordings with Micro-Electrode Arrays. To detect neuronal activity and synchronization, fluorescence traces were analyzed using Schmitt-trigger and SPIKE-synchronization algorithms. From the electrophysiological activity, we identified the brain waves typically observed in vivo. Spectral analysis was performed using the wavelet transform to assess oscillatory patterns, while the functional E/I ratio metric [2] was used to validate the physiological relevance of the models.
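Of these analysis steps, the Schmitt-trigger event detection is simple to sketch: hysteresis between a high and a low threshold prevents noise from re-triggering an event. The thresholds and the synthetic fluorescence trace below are illustrative, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def schmitt_events(trace, high, low):
    """Schmitt-trigger detection: an event starts when the trace crosses
    `high` and ends when it falls back below `low`; the two-threshold
    hysteresis suppresses re-triggering on noise."""
    events, active, start = [], False, 0
    for i, x in enumerate(trace):
        if not active and x > high:
            active, start = True, i
        elif active and x < low:
            events.append((start, i))
            active = False
    return events

# Toy fluorescence trace: two calcium transients plus baseline noise.
t = np.arange(0, 60, 0.1)                       # s
f = 0.05 * rng.normal(size=len(t))
for onset in (10.0, 35.0):
    f += (t > onset) * np.exp(-(t - onset) / 3.0)   # fast rise, slow decay

events = schmitt_events(f, high=0.5, low=0.2)
print([(round(t[s], 1), round(t[e], 1)) for s, e in events])  # onsets/offsets (s)
```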

Results. Morphological analysis revealed a faster geometric expansion and higher cell proliferation in H than in C. Stiffness values matched in vivo conditions [3], and immunostaining confirmed physiological composition and organization [4]. We developed homogeneous (CC, HH) and heterogeneous (CH) ASs by coupling pairs of NSs, ensuring physical connections while preserving structural segregation. Calcium dynamics revealed functional intra- and inter-module communication in ASs. Moreover, spectral analysis showed the generation of typical brain waves in our 3D models, with CH displaying different dynamics at DIV 18, marking a transition phase. The excitation/inhibition ratio matched physiological conditions [2].

Discussion. Our findings showed that the developed NSs and ASs enhance physiological relevance by replicating key aspects of brain organization and function. The integration of cortical and hippocampal regions within ASs enables the study of modular and heterogeneous network dynamics. Functional analyses confirm the emergence of complex oscillatory patterns, reflecting in vivo-like network behavior. The ability of ASs to maintain structural segregation while ensuring functional connection makes them a valuable tool for investigating fundamental neurobiological mechanisms, modelling neurodegenerative diseases, and testing therapeutic interventions.





Acknowledgements
This work was supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS(PE0000006) - A Multiscale integrated approach to the study of nervous system in health and disease (DN. 1553 11.10.2022).
References
1.https://doi.org/10.1152/jn.00575.2016
2.https://doi.org/10.1038/s41598-020-65500-4
3.https://doi.org/10.1007/s11831-019-09352-w

4.https://doi.org/10.1016/j.cmet.2011.08.016
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P071: The Role of Descending Inferior Colliculus Projections to the Cochlear Nucleus in the Hyperactivity Underlying Tinnitus
Sunday July 6, 2025 17:20 - 19:20 CEST
P071 The Role of Descending Inferior Colliculus Projections to the Cochlear Nucleus in the Hyperactivity Underlying Tinnitus

Katherine Doxey*1, Timothy Balmer2, Sharon Crook1



1School of Mathematical and Statistical Sciences, Arizona State University, Tempe, United States
2School of Life Sciences, Arizona State University, Tempe, United States


*Email: kedoxey@asu.edu


Introduction

Tinnitus is the perception of a sound without the presence of auditory stimuli. Over 2.5 million veterans are currently receiving disability benefits for a tinnitus diagnosis [1] and the likelihood of veterans screening positive for posttraumatic stress disorder (PTSD) increases with severity of tinnitus [2]. We focus on tinnitus from high-frequency hearing loss that is associated with exposure to loud noise and is perpetuated by neuronal hyperactivity in the dorsal cochlear nucleus (DCN). In this study, we test the hypothesis that descending projections from the inferior colliculus (IC) cause hyperexcitability of frequencies that are no longer encoded by the bottom-up sensory signals after damage to the cochlea [3].

Methods
We implement a network model of central auditory processing that consists of 200 fusiform cells that receive tonotopic excitatory input from 200 spiral ganglion neurons (SGN) and lateral inhibitory input from 200 interneurons. We implement the descending IC projections with 200 cells that provide excitatory input to the fusiform cells. Auditory input is modeled as a depolarization of SGN neurons and hearing loss is modeled as reduced depolarization of SGN neurons at the highest frequency range. Each cell is an Izhikevich model neuron with regular spiking dynamics [4]. We characterize the dynamics of the network by applying a pure tone stimulus and simulating either normal hearing or hearing loss.
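A static sketch of the architecture's logic (separate from the Izhikevich simulation itself): tuned excitatory SGN drive, inhibition pooled over neighbouring channels, and hearing loss as reduced drive at the high-frequency end. All gains are assumed for illustration; the point is the disinhibition that appears near the edge of the loss, the lateral-inhibition edge effect of [3].

```python
import numpy as np

n = 200                                 # tonotopic channels, as in the model
ch = np.arange(n)

def net_drive(tone, hearing_loss):
    sgn = np.exp(-0.5 * ((ch - tone) / 5.0) ** 2)            # tuned SGN input
    if hearing_loss:
        sgn *= 1.0 / (1.0 + np.exp(ch - 160.0))              # high-frequency loss
    inhibition = np.convolve(sgn, np.ones(15) / 15, "same")  # lateral pool
    return 2.0 * sgn - 1.2 * inhibition                      # assumed gains

# Broadband probe: summed drive over tones spanning the tonotopic axis.
normal = sum(net_drive(t, False) for t in range(0, n, 5))
damaged = sum(net_drive(t, True) for t in range(0, n, 5))
print("largest increase in net drive at channel",
      int(np.argmax(damaged - normal)), "(adjacent to the loss edge)")
```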
Results
Without descending IC projections, we confirm that loss of auditory nerve input at the high frequency range produces aberrant excitation at adjacent frequencies of the tonotopic map, i.e. tinnitus. With descending IC projections, we demonstrate that the signal to noise ratio increases as well as the hyperexcitability of the adjacent frequencies.
Discussion
A significant barrier to the treatment of tinnitus is the lack of knowledge on the source of the hyperexcitability; understanding the interactions between the DCN and IC in the central auditory pathway is essential to the development of physiology-based treatment to target the appropriate circuit elements. Our model shows that the descending IC projections result in hyperexcitability of high frequencies that are not encoded after hearing loss. To better understand these mechanisms, future work will involve extending the DCN model network to include narrowband and wideband inhibitors that contribute to processing pure tone, broadband, and notch noise stimuli.




Acknowledgements
This research is supported by DARPA YFA.
References
1. Annual Benefits Report 2021 - Veterans Benefits Administration Reports. https://www.benefits.va.gov/REPORTS/abr/
2. Prewitt, A., Harker, G., Gilbert, T. A., et al. (2021). Mental Health Symptoms Among Veteran VA Users by Tinnitus Severity: A Population-based Survey. Military Medicine, 186(Suppl 1), 167-175.
3. Gerken, G. M. (1996). Central tinnitus and lateral inhibition: An auditory brainstem model. Hearing Research, 97(1), 75-83.
4. Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569-1572.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P072: Partial models for learning in nonstationary environments
Sunday July 6, 2025 17:20 - 19:20 CEST
P072 Partial models for learning in nonstationary environments

Christopher R. Dunne*1 and Dhruva V. Raman1

1Department of Informatics, University of Sussex, Falmer, United Kingdom

*Email: C.Dunne@sussex.ac.uk


Introduction
The computational goal of learning is often presented as optimality on a given task, subject to constraints. However, the notion of optimality makes strong assumptions: by definition, changes to the task structure will render an optimal agent suboptimal. This is problematic in ethological settings where an agent lacks the time or data to accurately model a task's latent states.
We present a novel design principle (Hetlearn) for learning in nonstationary environments, inspired by the Drosophila mushroom body. It makes specific predictions on what should be inferred from the environment, as compared to Bayesian inference. We show that Hetlearn outperforms an optimal Bayesian agent and better matches human and macaque behavioural data.
Methods
We consider the task of learning from reward prediction errors (RPEs) in which an animal updates a valence based on RPEs (Fig. 1 E). Critically, the degree to which the RPE changes the valence is modulated by a learning rate. To set an adaptive learning rate, Hetlearn employs parallel sublearners with heterogeneous fixed assumptions about the environment (varied fixed learning rates). Ensemble predictions employ a weighted vote with weights dependent on recent sublearner performance. This allows rapid adaptation to unpredictable environmental changes without explicitly estimating complex latent variables. We compare Hetlearn against an advanced Bayesian agent [1] and to behavioural data from humans and macaques [2, 3, 4].
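The ingredients above are specific enough to sketch: parallel delta-rule sublearners with heterogeneous fixed learning rates, and an ensemble vote weighted by recent prediction accuracy. The memory constant, vote sharpness, and toy task below are assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

alphas = np.array([0.01, 0.05, 0.2, 0.8])   # fixed heterogeneous learning rates
V = np.zeros(len(alphas))                   # per-sublearner valences
tau, beta = 20.0, 5.0                       # performance memory, vote sharpness
err_ema = np.ones(len(alphas))              # running prediction error per learner

true_value, estimates = 1.0, []
for t in range(600):
    if t % 200 == 0:                        # unsignalled environment change
        true_value = rng.uniform(-1, 1)
    reward = true_value + 0.3 * rng.normal()
    rpe = reward - V                        # reward prediction errors
    err_ema += (rpe**2 - err_ema) / tau     # track recent performance
    w = np.exp(-beta * err_ema)
    w /= w.sum()                            # performance-weighted vote
    V += alphas * rpe                       # delta-rule updates
    estimates.append(w @ V)                 # ensemble valence

print("final ensemble estimate:", round(estimates[-1], 2),
      "| true value:", round(true_value, 2))
```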
Results
Hetlearn outcompetes a Bayesian agent [1] on reward learning in nonstationary environments (Fig. 1 A-D). It is also algorithmically simpler; it builds a partial generative model and does not track complex environmental statistics. Nonetheless, it aligns with behavioural data from humans and macaques as well as with previous models [2, 4] (Fig. 1 F-G). This is notable because qualitatively different models (Bayes optimal vs suboptimal) previously provided the best respective fit to these two datasets [2, 3]. As such, Hetlearn offers a unified learning principle for seemingly disparate strategies. Finally, Hetlearn is robust to model misspecification; its parameters can vary by an order of magnitude without performance decline.
Discussion
Hetlearn outcompetes the Bayesian agent of [1] in part because it exploits a bottleneck in the learning process. An optimal learner needs to infer multiple quantities that impact a single bounded parameter: the learning rate. Conversely, Hetlearn tracks the recent performance of parallel learners with heterogeneous learning rates. In effect, it trades optimal performance in a stationary environment for generalisability across environments. This results in superior performance in unpredictably changing environments or those with limited time or data, which are the precise conditions in which animals outperform artificial neural networks. Crucially, Hetlearn generates new, testable predictions on what should be inferred from the environment in these regimes.



Figure 1. (A) Environments with varying statistics. (B, C) Learning rate tracking by Hetlearn and Bayesian agent [1]. (D) Hetlearn has lower mean squared error (MSE) across environments. (E) Reward prediction error (RPE) task. Bayesian agent explicitly tracks complex latent states (volatility and stochasticity) that Hetlearn tracks only implicitly. (F, G) Hetlearn matches human [2] and macaque [3, 4] data.
Acknowledgements
This research was supported by the Leverhulme Doctoral Scholarship programme be.AI – Biomimetic Embodied Artificial Intelligence at the University of Sussex.
References
[1] https://doi.org/10.1038/s41467-021-26731-9
[2] https://doi.org/10.1038/nn1954
[3] https://doi.org/10.1038/nn.3918
[4] https://doi.org/10.1016/j.neuron.2017.03.044
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P073: Simulations using realistic 3D reconstructions of astrocyte endfeet reveal how cell shape alters diffusion in Alzheimer’s disease
Sunday July 6, 2025 17:20 - 19:20 CEST
P073 Simulations using realistic 3D reconstructions of astrocyte endfeet reveal how cell shape alters diffusion in Alzheimer’s disease

Florian Dupeuble*1, Chris Salmon2,5, Hugues Berry1, Keith Murai2, Kaleem Siddiqi3,4, Alexandra L. Schober2, J. Benjamin Kacerovsky2, Tabish A. Syed2,5, Rachel Fagen2, Tatiana Tibuleac2, Amy Zhou2, Audrey Denizot1

1AIStroSight, INRIA, Université Claude Bernard Lyon 1, Villeurbanne, France
2Research Institute of the McGill University Health Centre, McGill University, Montréal, Canada
3School of Computer Science, McGill University, Montréal, Canada
4MILA - Québec AI Institute, Montreal, Canada
5Centre for Intelligent Machines, School of Computer Science, McGill University, Montreal, Canada





*Email: florian.dupeuble@inria.fr
Introduction

Astrocytes are glial cells involved in numerous brain functions, such as blood flow regulation, toxic waste clearance, or nutrient uptake [1]. They display specialized protrusions, called endfeet, that cover the majority of blood vessels and are suspected to mediate neurovascular coupling.
In Alzheimer’s Disease (AD), astrocytes undergo morphological changes [2]. However, whether endfoot morphology is altered and the functional implications of such ultrastructural changes remain poorly understood to date.

Methods
To study the impact of endfoot shape on astrocyte function, we developed a model of diffusion within high-resolution 3D reconstructions of astrocyte endfeet from WT and AD mice, derived from electron microscopy. 3D manifold tetrahedral endfoot meshes were obtained using Blender and TetWild software. Simulations of calcium diffusion were performed using FEniCS, a finite element methods Python library.
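The kind of solve involved can be sketched in legacy FEniCS with implicit Euler time stepping; here a unit cube and a Gaussian initial bolus stand in for the reconstructed tetrahedral endfoot meshes and calcium-specific parameters of the actual study (coefficients and geometry are placeholders).

```python
from fenics import (UnitCubeMesh, FunctionSpace, TrialFunction, TestFunction,
                    Function, Expression, interpolate, solve, dx, grad, dot)

# Placeholder geometry: a unit cube instead of a reconstructed endfoot mesh.
mesh = UnitCubeMesh(8, 8, 8)
V = FunctionSpace(mesh, "P", 1)

D, dt = 0.2, 0.01                       # diffusion coefficient, time step (toy)
u0 = interpolate(Expression("exp(-100*(pow(x[0]-0.5,2)+pow(x[1]-0.5,2)"
                            "+pow(x[2]-0.5,2)))", degree=2), V)  # Ca2+ bolus

# Implicit Euler weak form; zero-flux (Neumann) boundaries by default,
# matching a closed compartment.
u, v = TrialFunction(V), TestFunction(V)
a = u * v * dx + dt * D * dot(grad(u), grad(v)) * dx
u_n, u_sol = u0, Function(V)
for step in range(50):
    L = u_n * v * dx
    solve(a == L, u_sol)
    u_n.assign(u_sol)
print("max concentration after 50 steps:", u_sol.vector().max())
```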
Results
We observe strong differences between the diffusional properties of AD and WT endfeet. While WT endfeet rapidly display a homogeneous calcium concentration, calcium in AD endfeet appears highly compartmentalized. Simulations accounting for the complex morphology of the endoplasmic reticulum (ER) suggest that it contributes to increased calcium concentration heterogeneity in endfeet, particularly in AD.
Discussion
Our preliminary results suggest that the morphological changes undergone by endfeet in AD impact local diffusion, leading to calcium compartmentalization, which could strongly affect local calcium signaling. Future work will be critical to decipher the functional link between endfoot shape, local calcium signaling, and the neurovascular uncoupling observed in AD [3]. This work provides new insights into the basic mechanisms governing endfoot dysfunction in AD.




Acknowledgements

References
1.https://doi.org/10.1146/annurev-neuro-091922-031205
2.https://doi.org/10.1016/j.coph.2015.09.011
3.https://doi.org/10.1093/brain/awac174

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P074: Hybridizing Machine Reinforcement Learning with Neuromimetic Navigation Systems
Sunday July 6, 2025 17:20 - 19:20 CEST
P074 Hybridizing Machine Reinforcement Learning with Neuromimetic Navigation Systems

Christopher Earl3, Moshe Tannenbaum4, Haroon Anwar1, Hananel Hazan4, Samuel Neymotin1,2
1Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
2Department of Psychiatry, NYU School of Medicine, New York, NY, USA.
3Department of Computer Science, University of Massachusetts, Amherst, MA, USA.
4Allen Discovery Center, Tufts University, Boston, MA, USA.
Introduction

Animal brains are capable of remembering explored locations and complex pathways to efficiently reach goals. Many neuroscience studies have investigated the cellular and circuit basis of spatial representations in the hippocampus (HPC) and entorhinal cortex (EC); however, the mechanisms that enable this complex navigation remain a mystery. In computer science, Q-Learning (QL) is a Reinforcement Learning (RL) algorithm that facilitates associations between contexts, actions, and long-term consequences. In this study, we develop a bio-inspired neuronal network model which integrates cell types from the mammalian EC and HPC, hybridized with QL, to simulate how the brain could learn to navigate new environments.
Methods
We used the BindsNET platform [1] to model Grid Cells (GC), Place Cells (PC), and motor control cells to drive agent actions. Our model is a Spiking Neuronal Network (SNN) with leaky integrate-and-fire (LIF) neurons, and organized to mimic GCs and PCs found in the EC and HPC (Fig 1). Reward-Modulated Spike Time Dependent Plasticity (RM-STDP) [2,3] applied to synapses activating motor control cells facilitates learning. The RM-STDP mechanism receives rewards from a Q-Table, helping the agent associate actions with long-term consequences. The agent is tasked with navigating a maze and learning a path to a goal (Fig 2). Feedback is given only at the goal, requiring the agent to associate actions with long-term outcomes to solve the maze.
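The Q-Table component is standard tabular Q-learning; below is a minimal sketch in which a toy corridor stands in for the maze and reward is delivered only at the goal, mirroring the sparse feedback. The state encoding, rates, and corridor task are assumptions (in the full model the state would be the GC/PC population code).

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 4           # 5-cell corridor, 4 movement actions
Q = np.zeros((n_states, n_actions))
alpha, gamma_, eps = 0.2, 0.9, 0.2   # learning rate, discount, exploration

def q_update(s, a, r, s_next):
    # Standard Q-learning target: r + gamma * max_a' Q(s', a').
    Q[s, a] += alpha * (r + gamma_ * Q[s_next].max() - Q[s, a])

def choose(s):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    best = np.flatnonzero(Q[s] == Q[s].max())
    return int(rng.choice(best))     # greedy with random tie-breaking

s = 0
for _ in range(2000):
    a = choose(s)
    if a == 1:                       # step toward the goal
        s_next = min(s + 1, n_states - 1)
    elif a == 0:                     # step away
        s_next = max(s - 1, 0)
    else:                            # bump into a wall
        s_next = s
    r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
    q_update(s, a, r, s_next)
    s = 0 if r > 0 else s_next       # restart after reaching the goal
print("learned state values:", np.round(Q.max(axis=1), 2))
```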
Results
Trained models successfully and consistently navigated randomly generated mazes. GC populations encoded distinct physical locations into unique neural encodings, enabling the agent to distinguish between them. This lets the agent remember previously visited areas and associate them with actions. Combined with QL, long-term consequences of actions could also be retained, allowing the model to learn long paths to the goal with sparse environmental feedback.

Certain cells in the reservoir population fired only when the agent was in a specific location of the maze, suggesting these cells naturally developed PC-like characteristics. When GCs were re-oriented in a new maze, the PCs would remap, similar to behavior observed in biology [4].
Discussion
We designed an SNN model that mimics the mammalian brain’s spatial representation system, and integrated it with QL to solve a maze task. Our model forms the basis of a functional navigation system by effectively associating actions with long-term consequences. While the QL component is not biologically-plausible, we believe higher order brain areas could provide similar computational capabilities. In future work, we aim to implement QL as a SNN. Results also suggest an explanation for the emergence of PC in the HPC due to upstream GC activity in the EC. Moreover, GC spatial representations are likely generalizable outside of a maze. Future research could utilize our model’s GC-PC architecture to navigate more complex environments.



Figure 1. Fig 1: Diagram of the bio-inspired SNN and its relationship to QL. The bio-inspired SNN generates an action, feedback from the environment is fed into a data structure called a ‘Q-Table’, and updates from this table modulate RM-STDP synapses in the SNN. Fig 2: Example 5x5 maze environment. The green dot represents the start, the blue dot the goal, the yellow dot the agent position, and red dots the optimal path.
Acknowledgements
Research supported by ARL Cooperative Agreement W911NF-22-2-0139 and ARL/ORAU Fellowships
References
[1] BindsNET: A machine learning-oriented spiking neural networks library in Python. Front Neuroinform 2018 12:89
[2] Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front Comput Neurosci 2022 16:1017284
[3] Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022 17(5):e0265808
[4] Remapping revisited: how the hippocampus represents different spaces. Nat Rev Neurosci 2024 25(6):428-448


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P075: Sequential dynamical invariants in winnerless competition neural networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P075 Sequential dynamical invariants in winnerless competition neural networks

Irene Elices*1, Pablo Varona1
1 Grupo de Neurocomputación Biológica, Dept. de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, 28049, Madrid, Spain.

*Email: irene.elices@uam.es
Introduction

Generating neural sequences is fundamental for behavior and cognition, which require robustness of sequential order and flexibility in its time intervals to adapt effectively. Studying cyclic sequences provides key insights into constraints limiting flexibility and shaping sequence intervals. Previously, we identified such constraints as robust cycle-by-cycle relationships between specific time intervals, i.e., dynamical invariants, in bursting central pattern generators [1]. However, their presence in computational models remains largely unexplored. Here, we examine dynamical invariants in a winnerless competition network model that generates chaotic activity while sustaining robust sequences among active neurons.



Methods
We analyzed sequence interval relationships in a Lotka-Volterra neural network that displays chaotic heteroclinic dynamics from its asymmetric connectivity [2,3]. Variables in these generalized Lotka-Volterra differential equations represent the instantaneous spike rate of the neurons. For analysis, we selected the most active neuron as a cycle reference, detecting sequential events in other neurons using activation thresholds. Cycle-by-cycle intervals were defined as the time intervals between activation and subsequent deactivation events, including those between distinct neurons. Analysis included variability measures, correlation analysis, and PCA to uncover robust relationships between interval timings.
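A minimal sketch of this kind of model: a generalized Lotka-Volterra rate network with asymmetric inhibitory connectivity, integrated with SciPy. The connectivity, growth rates, and detection threshold are illustrative placeholders, not the study's parameters.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
N = 6
rho = 1.0 + rng.uniform(0.0, 1.5, (N, N))   # asymmetric competition matrix
np.fill_diagonal(rho, 1.0)                  # self-interaction normalized to 1
sigma = np.ones(N)                          # intrinsic growth rates

def glv(t, a):
    # da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j)
    return a * (sigma - rho @ a)

a0 = rng.uniform(0.05, 0.1, N)
sol = solve_ivp(glv, (0.0, 500.0), a0, max_step=0.1)

# Cycle-by-cycle analysis starts from threshold crossings of each rate:
threshold = 0.3
activations = [np.where(np.diff((sol.y[i] > threshold).astype(int)) == 1)[0]
               for i in range(N)]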

Results
Despite the chaotic dynamics, which can be related to exploration tasks in motor and cognitive activity [2-4], we observed robust dynamical invariants between specific time intervals that, together with the activation phase locks of active neurons, provide coordination between cells. The dynamical invariants represent constraints on the variability present in the chaotic activity and can underlie an emergent control mechanism. This is the first time that sequential dynamical invariants are reported in heteroclinic dynamics.

Discussion
The presence of dynamical invariants remains largely unexplored in computational models, with only a few studies addressing simplified circuits, such as minimal CPG circuit building blocks [5]. The main challenge in studying dynamical invariants in computational models is the lack of variability in individual model neurons and in network dynamics. However, a winnerless competition network model generates chaotic spatiotemporal activation patterns, thus overcoming the mentioned variability challenge. Our work analyzes for the first time the presence of dynamical invariants among the activation intervals. Results suggest that these robust cycle-by-cycle relationships are part of the sequence coordination mechanisms of the heteroclinic dynamics.




Acknowledgements
Work funded by PID2021-122347NB-I00, PID2024-155923NB-I00, and CPP2023-010818 (MCIN/AEI and ERDF- “A way of making Europe”).
References
[1] https://doi.org/10.1038/s41598-019-44953-2
[2] https://doi.org/10.1063/1.1498155
[3] https://doi.org/10.1103/PhysRevE.71.061909
[4] https://doi.org/10.1007/s11571-023-09987-3
[5] https://doi.org/10.1016/j.neucom.2024.127378
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P076: A NEST-based framework for the parallel simulation of networks of compartmental models with customizable subcellular dynamics
Sunday July 6, 2025 17:20 - 19:20 CEST
P076 A NEST-based framework for the parallel simulation of networks of compartmental models with customizable subcellular dynamics

Leander Ewert*1, Christophe Blaszyck2, Jakob Jordan5, Charl Linssen1,3, Pooja Babu1,3, Abigail Morrison1,2, Willem A.M. Wybo4

1Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
2Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
3Simulation and Data Laboratory Neuroscience, Jülich Supercomputer Centre, Institute for Advanced Simulation, Jülich Research Centre, 52425 Jülich, Germany
4Peter Grünberg Institut (PGI-15), Jülich Research Centre, 52425 Jülich, Germany
5Department of Physiology, University of Bern, Bern, Switzerland

*Email: l.ewert@fz-juelich.de

Introduction

The brain is a massively parallel computer. In the human brain, 86 billion neurons convert synaptic inputs into action potential (AP) output. Moreover, even at the subcellular level, computations proceed in a massively parallel fashion. Approximately 7,000 synapses per neuron are supported by complex signaling networks within dendritic compartments. These signaling networks can themselves be understood as nanoscale computers that convert synaptic input, backpropagating APs, and local voltage and concentration signals into weight dynamics that support learning and memory. It is thus only natural to use the parallelization and vectorization capabilities of modern supercomputers to simulate the brain in a massively parallel fashion.
Methods
The NEural Simulation Tool (NEST) [1] is the reference simulator for the massively parallel simulation of spiking network models, as it has been optimized to efficiently communicate spikes across MPI processes [2]. Moreover, these capabilities introduce little overhead for the user, as the distribution of neurons across MPI processes is handled by NEST itself. However, so far NEST has had limited options for simulating subcellular processes as part of the network, essentially forcing users to develop custom C++ code. We have extended the scope of the NESTML modelling language [3] to support multi-compartment models, with dendrites featuring user-specified dynamical processes (Fig 1A-C).
Results
These user-specified dynamics are compiled into efficient NEST models through a C++ code generator, in such a way that the vectorization capabilities of modern CPUs are optimally leveraged. This enables a deeper level of parallelization, in addition to the network parallelization across MPI processes, allowing individual CPUs to integrate up to eight compartments in parallel and decreasing single-neuron runtimes accordingly. The compartmental engine furthermore leverages the Hines algorithm [4] to achieve stable and efficient integration of the system as a whole. Together, this results in single-neuron speedups of up to a factor of four to five compared to the field-standard NEURON simulator [5] (Fig 1D).
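For illustration, a minimal Python sketch of the Hines solve on a tree-structured compartmental system (the production implementation is generated C++): compartments are numbered so that every parent precedes its children, which makes one elimination sweep and one back-substitution sufficient, i.e. an O(n) direct solve.

import numpy as np

def hines_solve(d, u, parent, b):
    """Solve A x = b where A[i,i] = d[i] and A[i,p] = A[p,i] = u[i],
    with p = parent[i] < i and parent[0] == -1 (u[0] is unused)."""
    d, b, n = d.copy(), b.copy(), len(d)
    for i in range(n - 1, 0, -1):        # eliminate children into parents
        p = parent[i]
        f = u[i] / d[i]
        d[p] -= f * u[i]
        b[p] -= f * b[i]
    x = np.empty(n)
    x[0] = b[0] / d[0]                   # root compartment
    for i in range(1, n):                # back-substitute down the tree
        x[i] = (b[i] - u[i] * x[parent[i]]) / d[i]
    return x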
Discussion
Thus, we enable the simulation of large-scale networks where individual neurons have user-specified dynamical processes, representing (i) voltage-dependent ion channels, (ii) synaptic receptors that may be subject to a-priori arbitrary plasticity processes, or (iii) slow processes describing molecular signaling or ion concentration dynamics. Conducting such simulations has historically been challenging, since simulators specific to this purpose were lacking. With the present work, we facilitate the creation and efficient distributed simulation of such networks, thus supporting the investigation of the role of dendritic processes in network-level computations involving learning and memory.




Figure 1. Figure 1. (A) NESTML-defined subcellular mechanisms (left) are compiled into an efficient NEST model. User-defined dendritic layouts (middle) are then embedded in NEST network simulations (right). (B) NESTML code defining dendritic calcium dynamics induces BAC firing [6] in a two-compartment model [7] (C). (D) Speedup of NEST compared to NEURON (bottom) for two dendritic layouts (left vs right).
Acknowledgements
The authors gratefully acknowledge funding from the HelmHoltz POF IV, Program 2 Topic 3.

References
[1] https://doi.org/10.4249/scholarpedia.1430
[2] https://doi.org/10.3389/fninf.2014.00078
[3] https://doi.org/10.3389/fninf.2018.00050
[4] https://doi.org/10.1017/CBO9780511541612
[5] https://doi.org/10.1016/0020-7101(84)90008-4
[6] https://doi.org/10.1038/18686
[7] https://doi.org/10.48550/arXiv.2311.0607



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P077: An Intrinsic Dimension Estimator for Neural Manifolds
Sunday July 6, 2025 17:20 - 19:20 CEST
P077 An Intrinsic Dimension Estimator for Neural Manifolds

Jacopo Fadanni*1, Rosalba Pacelli2, Alberto Zucchetta2, Pietro Rotondo3, Michele Allegra1,4

1Physics and Astronomy Department, University of Padova, Padova, Italy
2Istituto Nazionale di Fisica Nucleare, Sezione di Padova, Padova, Italy
3Department of Mathematical, Physical and Computer Sciences, University of Parma, Parma, Italy
4Padova Neuroscience Center, University of Padova, Padova, Italy

*Email: jacopo.fadanni@unipd.it


Introduction
Recent technical breakthroughs have enabled a rapid surge in the number of neurons that can be simultaneously recorded [1,2], calling for the development of robust methods to investigate neural activity at the population level.
In this context, it is becoming increasingly important to characterize the neural activity manifold, the set of configurations visited by the network within the Euclidean space defined by the instantaneous firing rates of all neurons [3]. A key parameter of the manifold geometry is its intrinsic dimension (ID), the number of coordinates needed to describe the manifold. While several studies suggested that the ID may be typically low, contrasting findings have disputed this statement, leading to a wide debate [1,2,3,4].
Methods
In this study we present a variant of the Full Correlation Integral (FCI), an ID estimator that was shown to be particularly robust under undersampling and high dimensionality, improving over the classical Correlation Dimension estimator [5]. Our variant overcomes the limitations of the standard FCI in the presence of curvature effects by estimating the true ID locally, as the peak in the distribution of local estimates. Crucially, local estimates are restricted to approximately flat neighborhoods, as determined by a suitable local parameter, which allows us to avoid overestimation. Our procedure yields a robust estimator for the typically challenging situations encountered with neural manifolds.
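A minimal sketch of the "peak of local estimates" idea, with the caveat that it substitutes the classical Levina-Bickel maximum-likelihood local estimator for the authors' FCI variant (whose fit is more involved); neighborhood size and binning are illustrative.

import numpy as np
from scipy.spatial import cKDTree

def id_peak_of_local_estimates(X, k=20, bins=50):
    tree = cKDTree(X)
    dists, _ = tree.query(X, k=k + 1)     # column 0 is the point itself
    dists = dists[:, 1:]
    # MLE local dimension from ratios of neighbor distances (Levina-Bickel)
    local = (k - 1) / np.sum(np.log(dists[:, -1:] / dists[:, :-1]), axis=1)
    hist, edges = np.histogram(local, bins=bins)
    peak = np.argmax(hist)                # mode of the local-estimate distribution
    return 0.5 * (edges[peak] + edges[peak + 1])

# e.g. points on a curved 2D manifold embedded in 3D
t = np.random.rand(2000, 2) * 4
X = np.c_[t[:, 0], t[:, 1], np.sin(t[:, 0]) * np.cos(t[:, 1])]
print(id_peak_of_local_estimates(X))      # close to 2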
Results
We demonstrated the reliability of our estimator by testing it in two particularly challenging cases. First, we used it to characterize the neural manifolds of RNNs performing simple tasks [6], where strong curvature effects generally lead to overestimates. Second, we used it on a benchmark dataset including non-linearly embedded high-dimensional neural data, where all other methods yield underestimates [7]. In Figure 1 we show a comparison between our method and other available methods for the RNN and for the high-dimensional neural data. Linear methods overestimate the ID in the case of curved manifolds, while nonlinear methods underestimate the ID in the case of high-dimensional manifolds. In both situations, our method performed well.

Discussion
By proposing a robust estimator for the ID, our work adds a relevant tool to the open debate about the dimensionality of neural manifolds.
The intrinsic properties of the FCI estimator make it robust to undersampling and high dimensionality, avoiding so-called ‘curse of dimensionality’ effects. Our local variant makes it robust also for curved manifolds, where the ID and the embedding dimension strongly differ. Limitations of our method arise only for extremely non-uniformly sampled manifolds, where the conditions for the applicability of the FCI are unfulfilled [5].
Our method is an important step forward in current research on neural manifolds, and it is thus of interest to the computational neuroscience community at large.





Figure 1. Left: an example of network activity projected onto the first 3 PCs (ID_FCI = 2.1; ID_PA = 7; ID_MLE = 3.6; ID_TwoNN = 7.1). Right: comparison between different ID estimators in the case of high-dimensional, linearly embedded manifolds [7]. Our method performs well for all dimensionalities.
Acknowledgements
This work was supported by PRIN grant 2022HSKLK9, CUP C53D23000740006, “Unveiling the role of low dimensional activity manifolds in biological and artificial neural networks”

References
1. https://doi.org/10.1038/s41586-019-1346-5
2. https://doi.org/10.1126/science.aav7893
3. https://doi.org/10.1016/j.conb.2021.08.002
4. https://doi.org/10.1038/s41593-019-0460-x
5. https://doi.org/10.1038/s41598-019-53549-9
6. https://doi.org/10.1038/s41593-018-0310-2
7. https://doi.org/10.1371/journal.pcbi.1008591



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P078: Single neuron representational drift in CA1 can be explained by aligning stable low dimensional manifolds
Sunday July 6, 2025 17:20 - 19:20 CEST
P078 Single neuron representational drift in CA1 can be explained by aligning stable low dimensional manifolds

Elena Faillace*1, Mary Ann Go1, Juan Álvaro Gallego1, Simon Schultz1

1Centre for Neurotechnology and Department of Bioengineering, Imperial College London, UK

*Email: elena.faillace20@imperial.ac.uk

Hippocampal place cells are believed to form a cognitive map that supports spatial navigation. However, their spatial tuning has been observed to ‘remap’, i.e. the representation drifts over time, even in the same environment [1]. This raises the question of how a robust and consistent experience is maintained despite continual remapping at the single-cell level. Furthermore, it remains unclear whether this drift is coordinated across neurons, and how tuning curve profiles evolve. Here, we propose a population-level approach to identify a stable representation of environments and provide a framework to predict the activity of remapped tuning curves.


We performed two-photon calcium imaging to record the activity of hundreds of neurons in CA1 of head-fixed mice during a running task (Fig.a,b) [2]. Mice expressing GCaMP6s were habituated to a circular track for 7-9 days, followed by 3 days of recordings. All environments had the same circular structure but differed in the visual cues along the walls. During imaging, mice were exposed to two familiar environments, one novel environment, and one familiar environment with inverted order of the symbols on the walls. Neurons were longitudinally registered across sessions using CaImAn.


We used linear dimensionality reduction techniques to find session-specific manifolds that spanned the coordinated activity of CA1 cells (Fig.c). Using a combination of PCA and canonical correlation analysis (CCA) [3,4], we were able to align these session-specific manifolds (Fig.d) across days, environments, and even mice, achieving robust decoding of the animal's position along the track (Fig.h). Moreover, using this aligned manifold, we could predict the remapping of single neuron tuning curves (Fig.e,f,g), even for those excluded when computing the alignment procedure.
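A minimal sketch of the alignment step with scikit-learn, assuming the two sessions' firing-rate matrices are sampled on matched bins (e.g. position bins along the track) so that rows correspond; the latent dimensionality is illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

def align_sessions(rates_a, rates_b, d=10):
    """rates_*: (bins, neurons) firing-rate matrices from two sessions."""
    za = PCA(n_components=d).fit_transform(rates_a)   # session-specific manifold
    zb = PCA(n_components=d).fit_transform(rates_b)
    cca = CCA(n_components=d, max_iter=1000).fit(za, zb)
    za_c, zb_c = cca.transform(za, zb)                # aligned latent trajectories
    return za_c, zb_c

# A position decoder trained on za_c can then be applied to zb_c, i.e.
# across days, environments, or animals.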


This work supports the perspective that neural manifolds serve as a stable basis for neural encoding [3,4]. We present a framework in which representational drift, traditionally viewed as unstructured, can be interpreted as a coordinated adaptation at a population level, enabling the prediction of tuning curve profiles for ‘unseen’ neurons. Importantly, we did not need to categorise or select neurons based on their functional classes (e.g., place cells), thereby acknowledging their collective contribution to a preserved manifold space.



Figure 1. (a,b): schematic of the experimental set-up and Ca2+ imaging, previously presented in [2]. (c): PCA of the average firing rates from different sessions, concatenated. (d): PCA space after each recording has been projected to a common PC space (alignment). (e,f): example tuning curves before and after alignment, with their correlation and L2 norm (g). (h): same as (d), colour coded by angular position.
Acknowledgements

References
[1]https://doi.org/10.1038/nn.3329
[2]https://doi.org/10.3389/fncel.2021.618658
[3]https://doi.org/10.1038/s41593-019-0555-4
[4]https://doi.org/10.1038/s41586-023-06714-0


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P079: Characterizing optimal communication in the human brain
Sunday July 6, 2025 17:20 - 19:20 CEST
P079 Characterizing optimal communication in the human brain

Kayson Fakhar*1,2, Fatemeh Hadaeghi2, Caio Seguin3, Alessandra Griffa4, Shrey Dixit2,5, Kenza Fliou2,6, Arnaud Messé2, Gorka Zamora-López7,8, Bratislav Misic9, Claus Hilgetag2,10


1MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.
2Institute of Computational Neuroscience, University Medical Center Eppendorf-Hamburg, Hamburg University, Hamburg Center of Neuroscience, Germany.
3Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA.
4Leenaards Memory Center, Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Montpaisible 16, 1011 Lausanne, Switzerland
5Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
6Sorbonne University, Paris, France.
7Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain.
8Department of Information and Communication Technologies, Pompeu Fabra University, Barcelona, Spain.
9McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, Canada.
10Department of Health Sciences, Boston University, Boston, MA, USA.
*Email: kayson.fakhar@mrc-cbu.cam.ac.uk
Introduction

Efficient communication has been shown to be a key characteristic of the organization of the human brain (Chen et al., 2013). In fact, it biases the wiring economy of brain networks, which allocate expensive long-range shortcuts among their hubs (van den Heuvel et al., 2012). However, communication efficiency is often defined through specific signalling models (such as routing along shortest paths, broadcasting via parallel pathways, or diffusive random-walk dynamics) that omit important biological aspects of brain dynamics, including conductance delays, oscillations, and inhibitory interactions (Seguin et al., 2023). As a result, a more general framework is needed to characterize optimal signal transmission within a given brain network and to assess whether actual brain communication is truly efficient.


Methods
Here, we introduce a model-agnostic framework based on multi-site virtual lesions in large-scale neural mass models. Our approach uses a game-theoretical perspective: each brain region seeks to maximize its influence over others, subject to constraints from underlying network structure and local dynamics. This perspective yields a mathematically rigorous definition of optimal communication given any model of local dynamics on any arbitrary network structure. We used a linear, nonlinear, and oscillatory neural mass model and compared the resulting optimal influence patterns with those derived from abstract models of signalling, i.e., routing, navigation, broadcasting, and diffusion.
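A minimal sketch of one way to realize this game-theoretic influence measure: a permutation-sampled Shapley value over multi-site virtual lesions. Here influence() is a placeholder for re-simulating the neural mass model with the given regions silenced and measuring the effect of interest; the estimator itself is standard Monte Carlo Shapley sampling.

import numpy as np

def shapley_influence(regions, influence, n_perm=1000, seed=0):
    """Shapley value of each region in the 'lesion game': the average drop in
    influence() caused by adding that region to a growing lesion set."""
    rng = np.random.default_rng(seed)
    phi = {r: 0.0 for r in regions}
    for _ in range(n_perm):
        order = rng.permutation(len(regions))
        lesioned = set()
        prev = influence(frozenset(lesioned))
        for idx in order:
            r = regions[idx]
            lesioned.add(r)
            cur = influence(frozenset(lesioned))
            phi[r] += prev - cur           # marginal contribution of lesioning r
            prev = cur
    return {r: v / n_perm for r, v in phi.items()}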


Results
Our results are as follows: First, we found that the broadcasting regime has the closest resemblance to the optimal communication patterns derived from game theory. Second, although the underlying structural connection weight reliably predicts the efficiency of communication between regions, it fails to capture the true influence of weakly connected hub regions. In other words, hubs harness their rich connectivity to broadcast their signal over multiple pathways when they lack a reliable direct connection to their targets. Further comparisons with functional connectivity (fMRI-based correlations) and cortico-cortical evoked potentials reveal two additional insights: (i) functional connectivity is a poor indicator of actual information exchange; and (ii) brain communication is likely to take place close to optimal levels.


Discussion
Altogether, this work provides a rigorous, versatile framework for characterizing optimal brain communication, identifies the most influential regions in the network, and offers further evidence supporting efficient signalling in the brain.



Acknowledgements

This work is in part funded by the German Research Foundation (DFG)-SFB 936-178316478-A1; TRR169-A2; SPP 2041/GO 2888/2-2 and the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under grant TWCF-2022-30510.
References
Chen, Y., Wang, S., Hilgetag, C. C., & Zhou, C. (2013). Trade-off between multiple constraints enables simultaneous formation of modules and hubs in neural systems. PLoS Comput. Biol., 9(3), e1002937.
Seguin, C., Sporns, O., & Zalesky, A. (2023). Brain network communication: Concepts, models and applications. Nat. Rev. Neurosci., 24(9), 557–574.
van den Heuvel, M. P., Kahn, R. S., Goñi, J., & Sporns, O. (2012). High-cost, high-capacity backbone for global brain communication. Proc. Natl. Acad. Sci. U. S. A., 109(28), 11372–11377.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P080: A Representation Learning approach captures clinical effects of slow subthalamic beta activity drifts
Sunday July 6, 2025 17:20 - 19:20 CEST
P080 A Representation Learning approach captures clinical effects of slow subthalamic beta activity drifts

Salvatore Falciglia*1,2, Laura Caffi1,2,3,4, Claudio Baiata3,4, Chiara Palmisano3,4, Ioannis U. Isaias3,4, Alberto Mazzoni1,2

1The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
2Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
3University Hospital Würzburg and Julius Maximilian University of Würzburg, Würzburg, Germany
4Parkinson Institute Milan, ASST G. Pini-CTO, Milan, Italy

*Email: salvatore.falciglia@santannapisa.it

Introduction

Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a mainstay treatment for drug-resistant Parkinson's disease (PD) [1]. Adaptive DBS (aDBS) dynamically adjusts stimulation according to the beta-band (12-30 Hz) power of the STN local field potentials (LFPs) to match the patient's clinical status [2]. Today, aDBS control depends on accurate determination of pathological beta power thresholds [3]. Notably, on the timescale of days to months, STN beta power shows irregular temporal drifts affecting the long-term efficacy of aDBS treatment. Here we aim to characterize these drifts and their clinical effects in a multimodal study integrating neural and non-neural data streams.

Methods
We conducted home monitoring of patients with PD, focusing on periods of rest and gait activity. Multimodal data were collected, including STN LFPs from chronically implanted DBS electrodes, wearable inertial sensor recordings, and patient-reported diaries. A low-dimensional feature space was derived by integrating the acquired signals through Representation Learning techniques [4]. Leveraging LAURA, our transformer-based framework for predicting the long-term evolution of subthalamic beta power under aDBS therapy [5], we present a multimodal approach where neural data are paired with kinematic data and labelled according to the patient’s clinical status during the monitored activity.
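As a schematic of the multimodal integration step (not the LAURA architecture itself), a minimal PyTorch encoder that maps paired neural and kinematic feature windows to a low-dimensional latent space; all sizes and layer choices are illustrative.

import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, n_neural=64, n_kin=12, d_latent=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neural + n_kin, 32), nn.ReLU(),
            nn.Linear(32, d_latent))       # the low-dimensional feature space

    def forward(self, neural, kin):
        # neural: (batch, n_neural) LFP-derived features; kin: (batch, n_kin)
        # wearable-sensor features for the same time window
        return self.net(torch.cat([neural, kin], dim=-1))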
Results
We observed that STN beta power distributions show large irregular non-linear fluctuations over several days. Consequently, patients spend a significant portion of time in suboptimal stimulation states. A fully informative description of the STN LFPs dynamics is achieved by integrating neural, kinematics, and clinical data into a low-dimensional feature-based representation. Latent patterns of STN activity correlate with clinical outcomes as well as motor and non-motor daily activities, necessitating further explainability within the same low-dimensional space. This might support clinically effective recalibration of aDBS parameters on a daily basis.
Discussion
Our study advances the understanding of slow timescales of pathological activity in PD patients implanted with DBS. We developed a comprehensive deep learning framework that integrates neural data with longitudinal clinical information, enabling a more precise characterization of patient status. This will enable personalized control strategies for stimulation parameters (Fig. 1) and enhance the clinician-in-the-loop paradigm by improving patient status assessment and automating aspects of neuromodulation to prevent suboptimal stimulations due to beta power drifts. Ultimately, this work paves the way for novel long-term neuromodulation strategies with potential applications to neurological disorders beyond PD [6].




Figure 1. Block diagram of aDBS as a closed-loop control system. The control loop operates on two separate timescales. In the short-term, the modulation changes with fluctuations in beta power (solid box). In the long-term, the parameters of the fast aDBS algorithm are updated based on the expected drifts of daily beta distributions combined with the neurologist’s clinical assessments (dashed box).
Acknowledgements
The authors declare that financial support was received for the research. The European Union - Next-Generation EU - NRRP M6C2 - Investment 2.1: projects IMAD23ALM MAD, Fit4MedRob, and BRIEF. Fondazione Pezzoli per la Malattia di Parkinson. The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 424778381 - TRR 295.

References
1. https://doi.org/10.1007/s00221-020-05834-7
2. https://doi.org/10.1088/1741-2552/ac3267
3. https://doi.org/10.3390/bioengineering11100990
4. https://doi.org/10.1109/TPAMI.2013.50
5. https://doi.org/10.1101/2024.11.25.24317759
6. https://doi.org/10.3389/fnhum.2024.1320806




Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P081: Unsupervised Dynamical Learning in Recurrent Neural Networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P081 Unsupervised Dynamical Learning in Recurrent Neural Networks

Luca Falorsi*1,2, Maurizio Mattia2, Cristiano Capone2

1PhD program in Mathematics, Sapienza Univ. of Rome, Piazzale Aldo Moro 5, Rome, Italy
2Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Viale Regina Elena 299, Rome, Italy

*Email: luca.falorsi@gmail.com


Introduction
Humans and other animals rapidly adapt their behavior, indicating that the brain can dynamically reconfigure its internal representations in response to changing contexts. We introduce a framework, grounded in predictive coding theory [1], that integrates reservoir computing [2] with latent variable models: a recurrent neural network learns to reproduce sequences while structuring a latent state-space without direct contextual labels, unlike standard approaches that rely on explicit context vectors [3]. We achieve this by redefining the readout mechanism of an echo state network (ESN) [2] as a latent variable model that adapts via gain modulation to track and reproduce the ongoing in-context sequence.
Methods
An ESN processes sequence examples from a related set of tasks, extracting high-dimensional, nonlinear temporal features. In the first learning phase, we train an encoder network, acquiring a low-dimensional latent space from reservoir activity elicited by varying inputs. Synaptic weights W are optimized offline to map reservoir responses into the latent space. One simple and effective solution is to use principal component analysis (PCA).
When the network is presented with a novel sequence associated with a new context, the latent projections are linearly recombined using gain variables g. These gain variables represent latent features of the current context, dynamically adapting to minimize the (time-discounted) prediction error.
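A minimal sketch of the online adaptation, assuming a fixed latent projection W (e.g. from PCA of reservoir states) and a scalar readout; only the gain vector g adapts, by gradient descent on the instantaneous prediction error. Names and the learning rate are illustrative.

import numpy as np

def step_gains(W, r, y_target, g, eta=0.05):
    """W: (d, n_reservoir) latent projection; r: reservoir state (n_reservoir,);
    g: (d,) gain variables. Returns updated gains and the prediction."""
    z = W @ r                 # latent features of the current reservoir state
    y_hat = g @ z             # gain-modulated recombination of latent components
    err = y_target - y_hat
    g = g + eta * err * z     # gradient step on the squared prediction error
    return g, y_hat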
Results
We evaluate our architecture on datasets of periodic trajectories, including testing its ability to trace triangles with different orientations (Fig. 1). The encoder is trained offline using PCA on three predefined orientations and tested on previously unseen ones. Our results show that the network generalizes well across the task family, accurately reproducing unseen sequences. When presented with a novel sequence, the readout dynamically adapts in-context, adjusting gain parameters to optimally recombine the principal components based on prediction error feedback (nudging phase). After the gain parameters stabilize, feedback is gradually removed, and the network autonomously reproduces the sequence (closed-loop phase).
Discussion
The proposed framework decomposes the readout mechanism in a recurrent neural network into fixed synaptic components shared across a task family and a dynamic component that adapts in response to contextual feedback. During online adaptation, the network behaves as a gain-modulated reservoir, where gain variables adjust in response to prediction errors [4]. This aligns with biological evidence that top-down dendritic inputs modulate neuronal gain, shaping context-dependent responses [5]. Our approach offers insights into motor control, suggesting that gain modulation enables the flexible recombination of movement primitives [6]—akin to muscle synergies, which organize motor behaviors through structured activation patterns [7].



Figure 1. Figure 1: A Trajectory of network output during the dynamical adaptation phase on novel trajectories. B Principal components (PC) of the learned gain parameters g. The architecture infers the underlying latent task geometry, correctly representing the 120° rotation symmetry. C Mean square reconstruction error (MSE) for closed loop phase. Dashed lines represent standard deviation over 10 trials.
Acknowledgements
LF acknowledges support by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing and Sapienza University of Rome (AR12419078A2D6F9).
MM and CC acknowledge support from the Italian National Recovery and Resilience Plan (PNRR), M4C2, funded by the European Union–NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS-Italy”)
References
1. https://doi.org/10.1098/rstb.2008.0300
2. https://doi.org/10.1126/science.1091277
3. https://doi.org/10.1103/PhysRevLett.125.088103
4. https://doi.org/10.48550/arXiv.2404.07150
5. https://doi.org/10.1093/cercor/bhh065
6. https://doi.org/10.1038/s41593-018-0276-0
7. https://doi.org/10.1038/nn1010
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P082: Temporal Dynamics of Inter-Spike Intervals in Neural Populations
Sunday July 6, 2025 17:20 - 19:20 CEST
P082 Temporal Dynamics of Inter-Spike Intervals in Neural Populations

Luca Falorsi*1,2, Gianni V. Vinci2, Maurizio Mattia2

1PhD program in Mathematics, Sapienza Univ. of Rome, Piazzale Aldo Moro 5, Rome, Italy
2Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Viale Regina Elena 299, Rome, Italy

*Email: luca.falorsi@gmail.com


Introduction

The study of inter-spike interval (ISI) distributions in neuronal populations plays a crucial role in linking theoretical models with experimental data [1, 2]. As an experimentally accessible measure, ISI distributions provide critical insights into how neurons code and process information [3–5]. However, characterizing these distributions in populations of spiking neurons far from equilibrium remains an open issue. In this work, we develop a population density framework [6–8] to study the joint dynamics of the time from the last spike (τ) and the membrane potential (v) in homogeneous networks of integrate-and-fire neurons.


Methods

We model the network dynamics using a population density approach, where a joint probability distribution describes the fraction of neurons with membrane potential (v) and elapsed time (τ) since their last spike. This distribution evolves according to a two-dimensional Fokker-Planck partial differential equation (PDE), allowing us to systematically analyze how single-neuron ISI distributions change over time, including nonstationary conditions driven by external inputs or network interactions. To further characterize ISI statistics, we derive a hierarchy of one-dimensional PDEs describing the evolution of ISI moments and analytically study first-order perturbations from the stationary state, providing first-order corrections to renewal theory.
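Under the standard diffusion approximation, the transport equation described here can be written compactly as follows, with A(v,t) the drift of the membrane potential and σ²(t) the input variance (notation assumed here, not taken from the abstract):

\frac{\partial p}{\partial t} + \frac{\partial p}{\partial \tau}
  = -\frac{\partial}{\partial v}\left[A(v,t)\,p\right]
  + \frac{\sigma^2(t)}{2}\,\frac{\partial^2 p}{\partial v^2},
\qquad
p(v,\tau{=}0,t) = r(t)\,\delta(v - v_{\mathrm{reset}}),

where the reinjection at τ = 0 is driven by the population firing rate r(t), i.e. the probability flux through the spike threshold.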


Results

As a first step, we analytically solve the relaxation dynamics towards the steady state for an uncoupled population of neurons, obtaining an explicit expression for the time-dependent ISI distribution. We then show, through numerical simulations, that the introduced equation correctly captures the time evolution of the ISI distribution, even when the population deviates significantly from its stationary state, such as in the presence of limit cycles or time-varying external stimuli (Fig. 1). Additionally, by self-consistently incorporating the sampled empirical firing rate, the resulting stochastic Fokker-Planck equation describes finite-size fluctuations. Spiking network simulations show excellent agreement with the numerical integration of the PDE.


Discussion

We connect our novel population density approach to the Spike Response Model (SRM) [10], demonstrating that marginalizing over v recovers the Refractory Density Method (RDM) [11]. However, the marginal equation remains unclosed, and both SRM and RDM rely on a quasi-renewal approximation based on the stationary ISI distribution.
In conclusion, we developed an analytic framework to characterize ISI distributions in nonstationary regimes. Our approach, validated through simulations, bridges theoretical models with experimental observations. Furthermore, this work paves the way for analytically studying synaptic plasticity mechanisms that depend on the timing of the last spike, such as spike-timing-dependent plasticity.




Figure 1. ISI dynamics in an excitatory limit cycle (same parameters as [9]), comparing spiking neural network simulations (SNN) with the Fokker-Planck equation (FP) and its stochastic version (SFP). Time is measured in units of the membrane time constant τ_m = 20 ms. A: Phase-dependent ISI distribution. B: Trajectory of the firing rate and the first moment of the ISI. C: Time-averaged ISI distribution.
Acknowledgements
LF acknowledges support by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing and Sapienza University of Rome (AR12419078A2D6F9).


MM and GV acknowledge support from the Italian National Recovery and Resilience Plan (PNRR), M4C2, funded by the European Union–NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS-Italy”)


References
1. https://doi.org/10.1016/s0006-3495(64)86768-0
2. https://doi.org/10.2307/3214232
3. https://doi.org/10.1523/JNEUROSCI.13-01-00334.1993
4. https://doi.org/10.1103/PhysRevLett.67.656
5. https://doi.org/10.1523/JNEUROSCI.18-10-03870.1998
6. https://doi.org/10.1162/089976699300016179
7. https://doi.org/10.1162/089976600300015673
8. https://doi.org/10.1103/PhysRevE.66.051917
9. https://doi.org/10.1103/PhysRevLett.130.097402
10. https://doi.org/10.1103/PhysRevE.51.738
11. https://doi.org/10.1016/j.conb.2019.08.003
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P083: Network Dynamics and Emergence of Synchronisation in A Population of KNDy Neurons
Sunday July 6, 2025 17:20 - 19:20 CEST
P083 Network Dynamics and Emergence of Synchronisation in A Population of KNDy Neurons

Saeed Farjami*1,2, Margaritis Voliotis1,2, Krasimira Tsaneva-Atanasova1,2,3

1Department of Mathematics and Statistics, University of Exeter, Exeter, United Kingdom
2Living Systems Institute, University of Exeter, Exeter, United Kingdom
3EPSRC Hub for Quantitative Modelling in Healthcare, University of Exeter, Exeter, United Kingdom

*Email: s.farjami@exeter.ac.uk


Introduction
Regulation of the reproductive axis critically depends on the gonadotropin-releasing hormone (GnRH) pulse generator. A neuron population in the hypothalamic arcuate nucleus co-expressing kisspeptin, neurokinin B and dynorphin (KNDy) plays a key role in generating and maintaining pulsatile GnRH release [1]. While previous research has characterised the electrophysiological properties and firing patterns of single KNDy neurons [2], the mechanisms governing their network dynamics, particularly the processes underlying synchronisation and burst generation, remain incompletely understood. Recent studies [3,4] have explored how network interactions contribute to the emergence of synchronised activity, but many aspects of the regulatory mechanisms remain elusive.

Methods
We have recently developed a biophysically realistic Hodgkin-Huxley-type model of a single KNDy neuron that incorporates comprehensive electrophysiological properties and calcium dynamics [2]. In this study, we refine this model to better capture experimentally observed features such as the current-frequency response. Building on this, we construct a computational model of a biologically realistic KNDy neuron network, incorporating both fast glutamate-mediated synaptic coupling and slower neuromodulatory interactions via neurokinin B (NKB) and dynorphin (Fig. 1). This fast-slow timescale coupling allows us to investigate the complex interplay between fast and slow synaptic dynamics in regulating network behaviour.
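A minimal sketch of the fast-slow coupling idea: each presynaptic spike increments a fast glutamatergic gating variable and much slower NKB and dynorphin gating variables; time constants and increments are illustrative placeholders, not the model's values.

import numpy as np

tau = {"glut": 0.005, "nkb": 30.0, "dyn": 60.0}   # gating time constants (s)

def step_gates(s, dt, spiked):
    """Advance gating variables by dt (s); increment them on a presynaptic spike."""
    for k in s:
        s[k] *= np.exp(-dt / tau[k])              # exponential decay
        if spiked:
            s[k] += 1.0                           # spike-triggered increment
    return s

s = {k: 0.0 for k in tau}
# Postsynaptic current ~ g_glut * s["glut"] * (V - E_glut), with slower
# modulatory terms driven by s["nkb"] (excitatory) and s["dyn"] (inhibitory).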
Results
We explore how network structure and neuronal interactions give rise to emergent bursting and synchronisation. Specifically, we assess the impact of connectivity patterns, functional heterogeneity, and glutamate signalling, as well as the distinct roles of NKB and dynorphin in shaping network dynamics. Our results reveal how different signalling pathways contribute to the initiation, maintenance, and termination of both ‘miniature’ and full synchronisation events. In particular, we show how glutamate, acting on a fast timescale, might play a crucial role in triggering synchronisation, whereas slower neuropeptide-mediated interactions via NKB and dynorphin contribute to the propagation and termination of these events.
Discussion
Our findings provide novel insights into the collective behaviour of KNDy neurons, bridging the gap between single-cell dynamics and network-level emergent dynamics. This work, building on previous studies, advances our understanding of how KNDy neuron networks generate and regulate GnRH pulsatile activity. Furthermore, our results offer testable hypotheses for experimental studies, guiding future research using state-of-the-art neurobiological techniques to validate computational predictions. In the long term, understanding KNDy network dynamics could inform the development of treatments for reproductive disorders linked to GnRH pulse generator dysfunction.




Figure 1. Figure 1: A schematic description of a network structure of KNDy neurons and their cell-cell interactions either through glutamate neurotransmitter or neurokinin B (NKB) and dynorphin neuropeptides (A) and feedback mechanisms among these agents (B) giving rise to GnRH pulses in GnRH neurons which in return dictate other hormonal pulsatility.
Acknowledgements
We gratefully acknowledge the BBSRC for financial support of this study via grants BB/W005883/1 and BB/S019979/1.
References
[1] https://doi.org/10.1210/en.2010-0022
[2] https://doi.org/10.7554/eLife.96691.4
[3] https://doi.org/10.1016/j.celrep.2022.111914
[4] https://doi.org/10.1371/journal.pcbi.1011820
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P084: Gamma Oscillation in the Basal Ganglia: Interplay Between Local Inhibition and Beta Synchronization
Sunday July 6, 2025 17:20 - 19:20 CEST
P084 Gamma Oscillation in the Basal Ganglia: Interplay Between Local Inhibition and Beta Synchronization

Federico Fattorini*1,2, Mahboubeh Ahmadipour1,2, Enrico Cataldo3, Alberto Mazzoni1,2, Nicolò Meneghetti1,2


1The Biorobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
2Department of Excellence for Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy

3Department of Physics, University of Pisa, Pisa, Italy



*Email: federico.fattorini@santannapisa.it
Introduction

Basal ganglia (BG) gamma oscillations (30-100 Hz) have been proposed as valuable biomarkers for guiding adaptive deep brain stimulation in Parkinson’s disease (PD) [1], offering a reliable alternative to beta oscillations (10-30 Hz). However, the origins of gamma oscillations in these structures remain poorly understood. Using a validated spiking network model of the BG [2], we identified striatal and pallidal sources of gamma oscillations. We found that their generation relied on self-inhibitory feedback within these populations and was strongly influenced by interactions with pathological beta oscillations. Our findings provide new insights into the generation of BG gamma oscillations and their role in PD pathology.


Methods
The BG model (Fig. 1A) included approximately 14,000 neurons divided into 6 populations: D1 and D2 medium spiny neurons, fast-spiking neurons, the prototypic (GPe-TI) and arkypallidal populations of the external globus pallidus, and the subthalamic nucleus. We used non-linear integrate-and-fire neurons with population-specific parameters. The transition from healthy to Parkinsonian conditions was simulated with a dopamine depletion parameter that increased the input to D2. The origins of gamma oscillations were explored by selectively disconnecting model projections and isolating nuclei that exhibited gamma activity. Interactions with pathological beta oscillations were analyzed by studying phase-frequency and phase-amplitude coupling.
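A minimal sketch of one common way to compute the phase-amplitude coupling (a Tort-style modulation index, assumed here; the abstract does not name the specific estimator), with illustrative band edges:

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, n_bins=18):
    phase = np.angle(hilbert(bandpass(x, 12, 30, fs)))   # beta phase
    amp = np.abs(hilbert(bandpass(x, 60, 80, fs)))       # gamma amplitude
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()          # amplitude distribution over phase
    # KL divergence from uniform, normalized to [0, 1]
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)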
Results
We identified two distinct gamma oscillations in our model (Fig. 1B): high-frequency (≈100 Hz) gamma in GPe-TI and slower (≈70 Hz) ones in D2 medium spiny neurons. While GPe-TI gamma oscillations were prominent in healthy and pathological states, D2 oscillations emerged under dopamine-depleted conditions. Both rhythms required self-inhibition within the corresponding nuclei to be generated. However, this mechanism alone could not account for all gamma dynamics. Beta oscillations, generated by the model under pathological conditions, affected GPe-TI gamma frequency via phase-frequency coupling and amplified D2 gamma activity through phase-amplitude coupling. Both interactions were mediated by beta-induced modulation of spiking activity.

Discussion
By employing a computational model of the BG, we offered a comprehensive explanation of gamma rhythmogenesis in these structures, identifying two sources: D2 and GPe-TI. Our results were consistent with experimental findings from both rat [3] and human local field potentials [4] and aligned with the results of other computational models [5]. We also clarified how these rhythms were generated through self-inhibition within these nuclei and how they interacted with pathological beta synchronization. Our insights into the mechanism behind gamma generation in BG represent a crucial step toward advancing our understanding of PD and improving their potential as biomarkers for adaptive deep brain stimulation.





Figure 1. A) Computational model of the basal ganglia: FSN (striatal spiking interneurons), D1/D2 (medium spiny neurons with D1 and D2 dopamine receptors), GPe-TA/TI (arkypallidal/prototypic populations of the globus pallidus externa), and STN (subthalamic nucleus). B) Power spectral densities (PSDs) of GPe-TI (top) and D2 (bottom) activities under healthy and Parkinsonian (PD) conditions.
Acknowledgements
This work was supported by the Italian Ministry of Research, in the context of the project NRRP “Fit4MedRob-Fit for Medical Robotics” Grant (# PNC0000007).
References



1. https://doi.org/10.1038/s41591-024-03196-z
2. https://doi.org/10.1371/journal.pcbi.1010645
3. https://doi.org/10.1111/cns.14241
4. https://doi.org/10.1016/j.expneurol.2012.07.005
5. https://doi.org/10.1523/JNEUROSCI.0419-23.2023

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P085: Biological validation of a computational model of nitric oxide dynamics by emulating the nitric oxide diffusion experiment in the endothelium
Sunday July 6, 2025 17:20 - 19:20 CEST
P085 Biological validation of a computational model of nitric oxide dynamics by emulating the nitric oxide diffusion experiment in the endothelium

Pablo Fernández-López1, Ylermi Cabrera-León1, Patricio García Báez1,2, Scott McElroy3, Salvador Dura-Bernal3 and Carmen Paz Suárez-Araujo*1
1Instituto Universitario de Cibernética, Empresa y Sociedad, Universidad de Las Palmas de Gran Canaria, Parque Científico Tecnológico, Campus Universitario de Tafira, Las Palmas de Gran Canaria, 35017, Canary Islands, Spain.
2Departamento de Ingeniería Informática y de Sistemas, Universidad de La Laguna, Camino San Francisco de Paula, 19, Escuela Superior de Ingeniería y Tecnología, San Cristóbal de La Laguna, 38200, Canary Islands, Spain.
3State University of New York (SUNY) Downstate Health Sciences University, 450 Clarkson Avenue, Brooklyn, NY, USA 11203.



*Email: carmenpaz.suarez@ulpgc.es

Understanding how the brain works, how it is structured and how it computes is one of the goals of computational neuroscience. An essential step in this direction is to understand the cellular communication that enables the transition from nerve cells to cognition.
It is now accepted that links between neurons are established not only by synaptic connection, but also by the confluence of different cellular signals that affect global brain activity, the underlying mechanism being the diffusion of neuroactive substances into the extracellular space (ECS). One of these substances is the free radical gas nitric oxide (NO), which in turn determines a new type of information transmission: volume transmission (VT). VT is a complex form of short- and long-distance communication in which the ECS acts not only as a microenvironment separating nerve cells, but also as an information channel [1, 2]. NO is a signaling molecule that is synthesized in a number of tissues by NO synthases and has the ability to regulate its own production. It is lipid soluble, membrane permeable and has a high diffusivity in both aqueous and lipid environments.
In the absence of definitive experimental data to understand how NO functions as a neuronal signalling molecule, we have developed a computational model of NO diffusion based on non-negative and compartmental dynamical systems and transport phenomena [3].
The proposed model has been validated in a biological setting, specifically in the endothelium. In this work, the biological validation is approached by reproducing the NO diffusion experiment in the aorta performed by Malinski et al., 1993 [4]. We implemented our model with two compartments, using real measurements of the NO synthesis and diffusion processes in an endothelial cell and in the smooth muscle cells of the aorta, at a distance of 100 ± 2 µm between them. A fitting procedure to the observed NO dynamics was performed, and hypotheses related to the different processes in the NO dynamics were provided.
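A minimal sketch of a two-compartment NO model in the spirit described here (an endothelial source compartment coupled to a smooth-muscle compartment); rate constants and the synthesis pulse are illustrative placeholders, not the fitted values.

import numpy as np
from scipy.integrate import solve_ivp

k_d = 0.5      # inter-compartment diffusion rate (1/s)
lam = 0.1      # decay/consumption rate (1/s)

def synthesis(t):
    return 1.0 if t < 5.0 else 0.0       # transient endothelial NO production

def rhs(t, c):
    c1, c2 = c                            # endothelium, smooth muscle
    dc1 = synthesis(t) - k_d * (c1 - c2) - lam * c1
    dc2 = k_d * (c1 - c2) - lam * c2
    return [dc1, dc2]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], max_step=0.01)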
Our results provide evidence that the compartmental model of NO diffusion has allowed the design of a computational framework [5] to study and determine the dynamics of synthesis, diffusion and self-regulation of NO in the brain and in artificial environments. We have also shown that this model is powerful because it can incorporate the biological features and existing constraints on NO release and diffusion, as well as on the environment in which the NO diffusion processes take place.
Finally, it has been shown that our model is an important tool for designing and interpreting biological experiments on the underlying processes of NO dynamics, NO behaviour and its impact on both brain structure and function and artificial neural systems.





Acknowledgements
This work has been funded by the Consejería de Vicepresidencia 1ª y de O. P., Inf., T. y M. del Cabildo de GC under Grant Nº “23/2021”, as well as by the ‘Marie Curie Chair’ under Grant Nº “38/2023”, and ‘Marie Curie Chair’ under Grant Nº “CGC/2024/9655”.The latter was funded by the Consejeria de Vicepresidencia 1ª y de Gobierno de O. P. e Inf., Arq. y V. del Cabildo de GC.
References
[1] https://doi.org/10.1177/107385849700300113
[2] https://doi.org/10.1016/j.neuroscience.2004.06.077
[3] https://doi.org/10.1007/978-3-319-26555-1_59
[4] https://doi.org/10.1006/bbrc.1993.1914
[5] https://doi.org/10.1063/1.1291268
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P086: Synergistic short-term synaptic plasticity mechanisms for working memory
Sunday July 6, 2025 17:20 - 19:20 CEST
P086 Synergistic short-term synaptic plasticity mechanisms for working memory

Florian Fiebig*1, Nikolaos Chrysanthidis1, Anders Lansner1,2, Pawel Herman1,3

1KTH Royal Institute of Technology, Dept of Computational Science and Technology, Stockholm, Sweden
2Stockholm University, Department of Mathematics, Stockholm, Sweden
3Digital Futures, KTH Royal Institute of Technology
*Email: fiebig@kth.se




Introduction

Working memory (WM) is essential for almost every cognitive task. The neural and synaptic mechanisms supporting the rapid encoding and maintenance of memories in diverse tasks are the subject of ongoing debate. The traditional view of WM as stationary persistent firing of selective neuronal populations has given way to newer ideas about mechanisms that support a more dynamic maintenance of multiple items and may also tolerate activity disruption. Computational WM models based on different biologically plausible synaptic and neural plasticity mechanisms have been proposed but not combined systematically. Monolithic models, in which WM function is explained by one particular mechanism, are theoretically appealing but offer narrow explanations.
Methods
In this study we evaluate the interactions between three commonly used classes of plasticity: Intrinsic excitability (postsynaptic, increasing the excitability of spiking neurons), synaptic facilitation/augmentation (presynaptic, potentiating outgoing synapses of spiking neurons) and Hebbian plasticity (pre-post-synaptic, potentiating recurrent synapses driven by correlations), see Fig.1. Combinations of these mechanisms are systematically tested in a spiking neural network model on a broad suite of tasks or functional motifs deemed principally important for WM operation, such as one-shot encoding, free and cued recall, delay maintenance and updating. In our evaluation we focus on operational task performance and biological plausibility.
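For the augmentation component, a minimal sketch of the Tsodyks-Markram event-based update [1], with illustrative parameters: facilitation u grows at each presynaptic spike and decays with τ_f, while resources x are consumed by release and recover with τ_d.

import numpy as np

def tm_update(u, x, isi, U=0.2, tau_f=1.5, tau_d=0.2):
    """Advance facilitation u and resources x across an inter-spike interval
    isi (s), then apply one spike; returns new (u, x) and the synaptic efficacy."""
    u = u * np.exp(-isi / tau_f)                    # facilitation decays to 0
    x = 1.0 - (1.0 - x) * np.exp(-isi / tau_d)      # resources recover to 1
    u = u + U * (1.0 - u)                           # spike facilitates release
    efficacy = u * x                                # fraction of resources released
    x = x * (1.0 - u)                               # released resources consumed
    return u, x, efficacy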
Results
We show that previously proposed short-term plasticity mechanisms need not be competing explanations; instead, they yield interesting functional interactions on a wide set of WM tasks and enhance the biological plausibility of spiking neural network models. Our results indicate that a composite model, combining several commonly proposed plasticity mechanisms for WM function, is superior to more reductionist variants. Importantly, we attribute the observable differences to the distinct nature of the specific types of plasticity. For example, we find a previously undescribed synergistic function of Hebbian plasticity that supports the rapid updating of multi-item WM sets through rapidly learned inhibition.
Discussion
Our study suggests that commonly used forms of plasticity proposed for the buffering of WM information besides persistent activity are eminently compatible, and yield synergies that improve function and biological plausibility in a modular spiking neural network model. Combinations enable a more holistic model of WM responsive to broader task demands than what can be achieved with more reductionist models. Conversely, the targeted ablation of specific plasticity components reveals that different mechanisms are differentially important to specific aspects of WM function, advancing the search for more capable, robust and flexible models accounting for new experimental evidence of bursty and activity-silent multi-item maintenance.




Figure 1. Fig.1 - Plasticity Combinations. The Augmentation plasticity model is implemented using the well-known Tsodyks-Markram mechanism [1]. The Bayesian Confidence Propagation Neural Network (BCPNN) learning rule implements intrinsic plasticity as well as Hebbian plasticity [2]. These 3 components can be simulated separately or together, yielding 7 scenarios to simulate and study.
Acknowledgements
We would like to thank the Swedish Research Council (VR) grants: 2018-05360 and 2016-05871, Digital Futures and Swedish e-science Research Center (SeRC) for their support.
References
[1] Tsodyks, M., Pawelzik, K., & Markram, H. (1998). Neural Networks with Dynamic Synapses. Neural Computation, 10(4), 821–835.
[2] Tully, P. J., Hennig, M. H., & Lansner, A. (2014). Synaptic and nonsynaptic plasticity approximating probabilistic inference. Frontiers in Synaptic Neuroscience, 6, 8.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P087: Structured Inhibition and Excitation in HVC: A Biophysical Approach to Song Motor Sequences
Sunday July 6, 2025 17:20 - 19:20 CEST
P087 Structured Inhibition and Excitation in HVC: A Biophysical Approach to Song Motor Sequences

Fatima H. Fneich*1, Joseph Bakarji2, Arij Daou1

1Biomedical Engineering Program, American University of Beirut, Lebanon
2Department of Mechanical Engineering, American University of Beirut, Lebanon

*Email: fhf07@mail.aub.edu

Introduction
Stereotyped neural sequences occur in the brain [1], yet the neurophysiological mechanisms underlying their generation remain unclear. Birdsong is a prominent model to study such behavior, as juvenile songbirds learn from tutors and later produce stereotyped song patterns. The premotor nucleus HVC coordinates motor and auditory activity for learned vocalizations. HVC consists of three neural populations with distinct in vitro and in vivo electrophysiological responses [2,3]. Existing models explain HVC's network using intrinsic circuitry, extrinsic feedback, or both. Here, we develop a physiologically realistic neural network model incorporating the three classes of HVC neurons based on pharmacologically identified ion channels and synaptic currents.
Methods
We developed a conductance-based Hodgkin-Huxley-type model of HVC neurons and connected them via biologically realistic synaptic currents. The network was structured as a feedforward chain of microcircuits encoding sub-syllabic song segments, interacting through structured feedback inhibition [4]. Simulations were performed using MATLAB's ode45 solver, incorporating key ionic currents, including T-type Ca²⁺, Ca²⁺-dependent K⁺, A-type K⁺, and the hyperpolarization-activated inward current. Parameters were adjusted to replicate in vivo-like activity. The model reproduces sequential propagation of neural activity, highlighting the intrinsic neuronal properties and synaptic interactions essential for song production.
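A minimal Python skeleton of this kind of formulation (the study itself uses MATLAB's ode45), spelling out only a generic T-type Ca²⁺ current with instantaneous activation; all kinetics and parameter values are illustrative, not the study's.

import numpy as np
from scipy.integrate import solve_ivp

C, gL, EL, gT, ECa = 1.0, 0.1, -65.0, 0.5, 120.0

def m_inf(v):  return 1.0 / (1.0 + np.exp(-(v + 60.0) / 6.2))
def h_inf(v):  return 1.0 / (1.0 + np.exp((v + 84.0) / 4.0))
def tau_h(v):  return 20.0 + 80.0 / (1.0 + np.exp((v + 80.0) / 3.0))

def rhs(t, y, I_ext=0.3):
    v, h = y
    I_T = gT * m_inf(v) ** 2 * h * (v - ECa)   # T-type Ca2+ (instantaneous m)
    dv = (-gL * (v - EL) - I_T + I_ext) / C
    dh = (h_inf(v) - h) / tau_h(v)             # slow inactivation gate
    return [dv, dh]

sol = solve_ivp(rhs, (0.0, 500.0), [-70.0, 0.5], max_step=0.1)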
Results
The model reproduced in vivo activity patterns of HVC neuron classes. HVCRA neurons exhibited sparse, time-locked bursts, each lasting ~10 ms. HVCX neurons generated 1-4 bursts, typically following inhibitory rebound, while HVCINT neurons displayed tonic activity interspersed with bursts. Sequential propagation was maintained through structured inhibition and excitation, with synaptic conductances tuned to match dual intracellular recordings. The model accurately captured burst timing, spike shapes, and firing dynamics observed in experimental recordings, confirming its ability to simulate biologically realistic song-related neural activity.
Discussion
Our model provides a biophysically realistic representation of sequence generation in HVC, emphasizing the role of intrinsic properties and synaptic connectivity. The structured inhibition from HVCINT neurons ensured precise burst timing in HVCRA and HVCX neurons, supporting stable propagation. Key ionic currents, including T-type Ca²⁺ and A-type K⁺, regulated burst initiation and duration. These findings refine existing models by incorporating experimentally observed biophysical details. This work offers new insights into the neural basis of motor sequence learning and could inform studies of other stereotyped behaviors.





Acknowledgements
This work was supported by the University Research Board (URB) and the Medical Practice Plan (MPP) grants at the American University of Beirut.

References
1. https://doi.org/10.1038/nature09514
2. https://doi.org/10.1038/nature00974
3. https://doi.org/10.1152/jn.00162.2013
4. https://doi.org/10.7554/eLife.105526.1



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P088: Amyloid-Induced Network Resilience and Collapse in Alzheimer’s Disease: Insights from Computational Modeling
Sunday July 6, 2025 17:20 - 19:20 CEST
P088 Amyloid-Induced Network Resilience and Collapse in Alzheimer’s Disease: Insights from Computational Modeling

Ediline L. F. Nguessap*1, Fernando Fagundes Ferreira1

1Department of Physics, University of São Paulo, Ribeirao Preto, Brazil

*Email: fonela@usp.br

Introduction

Alzheimer’s disease (AD) is characterized by progressive synaptic loss, neuronal dysfunction, and network disintegration due to amyloid-beta accumulation [1,2,3,4,5]. While experimental studies identify amyloid-induced connectivity changes, the role of network resilience (the ability of the brain to maintain function despite synaptic loss) remains poorly understood. Most computational models of AD focus either on static network properties (graph-theory-based approaches) or on single-neuron dynamics [6], neglecting the interplay between progressive structural collapse and functional neuronal activity. Here, we model a small-world neuronal network and investigate its structural resilience and dynamical response to amyloid-driven synapse loss.

Methods
We construct a small-world neuronal network with synaptic weights evolving under amyloid-induced weakening. We track network resilience using key metrics: the Largest Strongly Connected Component (LSCC) as a measure of global connectivity [7,8], and Global Efficiency, Clustering Coefficient, and Shortest Path Length to quantify functional resilience. To study functional neuronal activity, we simulate a network of Izhikevich neurons with synaptic coupling, observing how firing rates and synchronization evolve before, during, and after LSCC collapse. We further refine our model by removing isolated neurons and reducing background input when the LSCC collapses, to ensure biological realism.
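A minimal sketch of the structural-resilience analysis, assuming a Watts-Strogatz substrate and random edge removal as a stand-in for amyloid-induced weakening (the actual model evolves synaptic weights continuously):

import networkx as nx
import random

random.seed(0)
G = nx.DiGraph(nx.watts_strogatz_graph(n=500, k=10, p=0.1))  # small-world substrate
edges = list(G.edges())
random.shuffle(edges)

n0 = G.number_of_nodes()
for frac in [0.0, 0.25, 0.5, 0.75, 0.9]:            # fraction of synapses lost
    H = G.copy()
    H.remove_edges_from(edges[: int(frac * len(edges))])
    lscc = max(nx.strongly_connected_components(H), key=len)
    eff = nx.global_efficiency(H.to_undirected())    # functional-resilience proxy
    print(f"loss={frac:.2f}  LSCC={len(lscc)/n0:.2f}  efficiency={eff:.3f}")
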
Results
Our simulations reveal a critical amyloid threshold (~75% synaptic loss) beyond which LSCC rapidly collapses, marking the transition from a functionally connected to a fragmented network. Small-world networks exhibit greater resilience than random ones, with LSCC persisting longer due to local clustering and efficient communication pathways. Global efficiency remains stable early on but drops sharply with LSCC collapse, while clustering initially increases (compensatory rewiring) before declining, indicating widespread disconnection. Neuronal firing desynchronizes post-collapse, aligning with cognitive dysfunction in AD, and removing isolated neurons accelerates activity decline, mimicking cortical atrophy.
Discussion
Our findings suggest that network topology plays a crucial role in Alzheimer’s resilience. As the LSCC shrinks past a critical threshold, functional decline accelerates, aligning with AD progression. Neurons remain active but lose synchronization, suggesting that cortical regions stay active in late AD stages but fail to coordinate information transfer. Biologically inspired modifications (removing isolated neurons, reducing background input) enhance realism by preventing unrealistic activity after connectivity loss. This suggests that network vulnerability could serve as an AD biomarker. Future work should explore synaptic plasticity, tau pathology, and patient data (EEG, fMRI) for further improvement.



Acknowledgements
FFF is supported by Brazilian National Council for Scientific and Technological Development (CNPq) 316664/2021-9. ELFN is supported by Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES).
References




1. https://doi.org/10.1016/j.neurobiolaging.2005.10.017
2. https://doi.org/10.1212/WNL.00000000000004012
3. https://doi.org/10.1371/journal.pone.0196402
4. https://doi.org/10.1089/ars.2023.0010
5. https://doi.org/10.3389/fnbeh.2014.00106
6. https://doi.org/10.1097/NEN.0b013e31824f1c1a
7. https://doi.org/10.1016/j.amc.2021.126372
8. https://doi.org/10.1038/s41598-019-42977-6
9. https://doi.org/10.1038/nrn2575



Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P089: Parameter identifiability in model-based inference for neurodegenerative diseases: noninvasive stimulation
Sunday July 6, 2025 17:20 - 19:20 CEST
P089 Parameter identifiability in model-based inference for neurodegenerative diseases: noninvasive stimulation

Jan Fousek*¹


¹ Central European Institute of Technology (CEITEC), Masaryk University, Brno, Czech Republic


*Email: jan.fousek@ceitec.muni.cz
Introduction

Tracking the trajectories of progression of patients with neurodegenerative diseases remains a challenging task. While employing connectome-based models can improve the performance of machine-learning-based classification [1], the identifiability of relevant parameters can be challenging when using only data features derived from spontaneous (resting state) data [2]. Here, in the context of Alzheimer disease (AD), we explore an alternative approach based on perturbations, namely using the response to single pulse transcranial magnetic stimulation recorded by EEG.

Methods
First, a whole-brain model using a normative human connectome was set up together with an EEG forward solution in order to replicate the TMS evoked potential (TEP) [3] following precuneus stimulation [4]. Next, to define the trajectory of AD in parameter space, we used a previously established trajectory of progression capturing how the evolving spatial profile of the proteinopathy is reflected in altered model parameters [5]. Using simulation-based inference, we then attempted to recover the parameters from synthetic data simulated along the AD progression trajectory, and assessed the shrinkage of the posterior distributions and the precision of the point estimates.
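The shrinkage diagnostic can be illustrated with a toy one-parameter simulator and plain rejection sampling; the study presumably used neural simulation-based inference rather than the rejection scheme sketched here, and all names and values below are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def simulator(theta):                      # toy stand-in for the whole-brain model
    return theta + rng.normal(0.0, 0.1, size=np.shape(theta))  # "TEP feature"

theta_true = 0.7
x_obs = simulator(theta_true)

theta_prior = rng.uniform(0, 2, 100_000)               # samples from the prior
x_sim = simulator(theta_prior)                         # simulated data features
posterior = theta_prior[np.abs(x_sim - x_obs) < 0.05]  # rejection ABC

shrinkage = 1 - posterior.var() / theta_prior.var()    # 1 - post. var / prior var
print(f"shrinkage={shrinkage:.3f}, |mean - truth|={abs(posterior.mean()-theta_true):.3f}")
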
Results
The model successfully reproduced the TEP patterns found in the empirical data. Along the progression trajectory, the model parameters remained identifiable, showing significant shrinkage of the posterior distribution with respect to the prior and a small distance of the mean values from the ground truth. Additionally, while we observed some correlation between the estimated parameters (hinting at a certain degree of degeneracy), it did not impact the performance of the inference.
Discussion
Here we demonstrate that the brain response to the noninvasive stimulation is informative enough to allow effective parameter inference in connectome-based models. The workflow can be easily adapted to different data-features derived from the TEPs, as well as different stimulation targets. As a natural next step, this approach will be benchmarked and validated in empirical datasets on individual subject data.




Acknowledgements
Jan Fousek receives funding from the European Union’s Horizon Europe research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101130827.
References
[1] https://doi.org/10.1002/trc2.12303
[2] https://doi.org/10.1088/2632-2153/ad6230
[3] https://doi.org/10.3389/fninf.2013.00010
[4] https://doi.org/10.1016/j.clinph.2024.09.007
[5] https://doi.org/10.1523/ENEURO.0345-23.2023
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P090: Identifying Manifold Degeneracy and Estimating Confidence for Parameters of Compartmental Neuron Models with Hodgkin-Huxley Type Conductances
Sunday July 6, 2025 17:20 - 19:20 CEST
P090 Identifying Manifold Degeneracy and Estimating Confidence for Parameters of Compartmental Neuron Models with Hodgkin-Huxley Type Conductances

Saina Namazifard1, Anwar Khaddaj2, Matthias Heinkenschloss2, Fabrizio Gabbiani*1

1Department of Neuroscience, Baylor College of Medicine, Houston, USA
2Department of Computation Applied Mathematics & Operations Research, Rice University, Houston, USA
*Email: gabbiani@bcm.edu

Introduction

Much work has been devoted to fitting the biophysical properties of neurons in compartmental models with Hodgkin-Huxley type conductances. Yet, little is known about how reliable model parameters are and whether they are degenerate. For example, when characterizing a membrane conductance through voltage-clamp (VC) experiments, one would like to know whether the data will constrain the parameters and how reliable their estimates are. Similarly, when studying the responses of a neuron with multiple conductances in current clamp (CC), how robust is the model to changes in peak conductances? Such degeneracy is linked to biological robustness [1] and is key to understanding the constraints posed by conductance distributions on dendritic computation [2].

Methods
A one-compartment model with Hodgkin-Huxley (HH) type conductances was used. We studied synthetic and experimental VC data of the H-type conductance (gH) that is widely expressed in neuronal dendrites. We also studied the original HH model in VC and CC. Finally, we considered a stomatogastric ganglion (STG) neuron model in CC. The ordinary differential equation solutions, parameters, and their sensitivities were simultaneously estimated using collocation methods and automatic differentiation. This allowed us to solve the non-linear least squares (NLLS) problem associated with each model. Iterative tracing of the parameter degeneracy manifold was performed based on the singular value decomposition (SVD) of the NLLS residual Jacobian.
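The SVD step can be illustrated on a toy NLLS problem with a near-degenerate two-exponential model; the right singular vector of the residual Jacobian associated with the smallest singular value flags the least identifiable parameter combination. All names and values below are illustrative, not the study's models.

import numpy as np

def residuals(p, t, data):                  # toy two-exponential model
    return p[0] * np.exp(-t / p[1]) + p[2] * np.exp(-t / p[3]) - data

t = np.linspace(0, 5, 200)
p0 = np.array([1.0, 1.0, 0.5, 1.05])        # tau1 ~ tau2 -> near-degenerate fit
data = residuals(p0, t, 0.0)                # noiseless "measurements"

# finite-difference Jacobian of the residual vector w.r.t. the parameters
J = np.empty((t.size, p0.size))
for i in range(p0.size):
    dp = np.zeros_like(p0); dp[i] = 1e-6
    J[:, i] = (residuals(p0 + dp, t, data) - residuals(p0 - dp, t, data)) / 2e-6

U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", np.round(s, 4))
print("least-identifiable direction:", np.round(Vt[-1], 3))  # ~degenerate combo
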
Results & Discussion
We identified parameter degeneracy using an SVD-based subset selection algorithm [3] applied to the objective function Jacobian. In the gH model in VC, the 2 least identifiable parameters were the reversal potentials of the leak conductance (gL) and of gH, EL and EH. EH was constrained by tail current experiments. This left a 1-dimensional (1-D) non-linear solution manifold for the remaining 7 parameters: gL, EL, and peak gH at 5 VC values. In the HH model in VC, 3 parameters were least identifiable: EK, gNa, and EL. The HH model in CC exhibited approximate parameter degeneracy with a 1-D solution manifold. Similar results were obtained for the STG model. The role of EL in degeneracy was unexpected. Our results generalize to multi-compartment models.




Acknowledgements
Supported by NIH grant R01 NS130917.

References
1. Marom, S., & Marder, E. (2023). A biophysical perspective on the resilience of neuronal excitability across timescales. Nature Reviews Neuroscience, 24, 640–652. https://doi.org/10.1038/s41583-023-00730-9
2. Dewell, R. B., Zhu, Y., Eisenbrandt, M., Morse, R., & Gabbiani, F. (2022). Contrast polarity-specific mapping improves efficiency of neuronal computation for collision detection. eLife, 11:e79772. https://doi.org/10.7554/eLife.79772
3. Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press. https://epubs.siam.org/doi/book/10.1137/1.9781421407944
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P091: Dynamic causal modelling (DCM) for effective connectivity in MEG High-Gamma-Activity: a data-driven Markov-Chain Monte Carlo approach
Sunday July 6, 2025 17:20 - 19:20 CEST
P091 Dynamic causal modelling (DCM) for effective connectivity in MEG High-Gamma-Activity: a data-driven Markov-Chain Monte Carlo approach
P. García-Rodríguez, M. Gilson, J.-D. Lemaréchal, A. Brovelli


CNRS UMR 7289 - Aix Marseille Université, Institut de Neurosciences de la Timone, Campus Santé Timone, Marseille, France


Email: pedro.garcia-rodriguez@univ-amu.fr

Introduction


Model inversion in DCM traditionally relies on Bayesian variational schemes, i.e., quadratic approximations in the vicinity of minima in parameter space [1]. More general Markov Chain Monte Carlo (MCMC) methods, on the other hand, make intensive use of random numbers to sample posterior probability distributions. The successful application of either approach depends strongly on the correct choice of prior distributions.

Methods

Here we propose an automated workflow combining MCMC with more conventional Gradient-Descent (GD) optimization techniques. Following the bilinear model [2], a simpler DCM is considered with a matrix A for effective connectivity and a matrix C for sensory driving inputs. Alpha and Gamma functions for input profiles complete the modeling scenario.



The model’s parameters are estimated in three stages. Firstly, the matrix A is initialized from a Gaussian distribution with null mean and variance given by subject-specific or group-level Granger Causality (GC) computed from the data. Next, GD algorithms implement a constrained, bounded optimization to keep input parameters within plausible (positive) intervals. The adequacy of the parameter values found is further tested through a Levenberg-Marquardt GD form. Finally, a MCMC Bayesian scheme incorporates the covariance of the observation noise in a multivariate Gaussian likelihood model. A generative model is thus completed with parameter prior distributions based on the GD optimizations mentioned above. Normal or Log-Normal distributions are alternatively used, the latter to ensure positive values after sampling when needed.
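A minimal sketch of the final MCMC stage, assuming a linear rate model and a random-walk Metropolis sampler with a flat prior; the actual pipeline uses GD-informed priors and the empirical noise covariance, and all values here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, T, dt = 3, 400, 0.01

def simulate(A, inp):
    # linear network dynamics standing in for the bilinear DCM state equation
    x = np.zeros((T, n))
    for k in range(T - 1):
        x[k + 1] = x[k] + dt * (A @ x[k] + inp[k])
    return x

A_true = np.array([[-1.0, 0.0, 0.0], [0.8, -1.0, 0.0], [0.0, 0.6, -1.0]])
inp = np.zeros((T, n)); inp[50:150, 0] = 1.0      # driving input to region 1
y = simulate(A_true, inp) + 0.02 * rng.normal(size=(T, n))

def log_post(a):
    # flat prior; Gaussian likelihood with known observation-noise variance
    A = np.array([[-1.0, 0.0, 0.0], [a[0], -1.0, 0.0], [0.0, a[1], -1.0]])
    r = y - simulate(A, inp)
    return -0.5 * np.sum(r ** 2) / 0.02 ** 2

a, lp, samples = np.zeros(2), -np.inf, []
for _ in range(5000):                              # random-walk Metropolis
    prop = a + 0.05 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop
    samples.append(a.copy())
print("posterior mean couplings:", np.mean(samples[1000:], axis=0))
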


Results



The approach is applied to High-Gamma Activity induced responses during visuomotor transformation tasks executed by 8 subjects, as reported in [3]. Methods were applied to hundreds of trials for each subject, providing a handy data-driven DCM framework to evaluate the plausibility of various model configurations. Observation noise is empirically estimated from the pre-stimulus periods in the original trials. The model inversion pipeline tends to support the most realistic model configuration tested, with an apparent relation between the estimated effective connectivity A and the GC matrix (Fig. 1).




Discussion
Comparison of prior and posterior distributions can help distinguish informative from non-informative parameters. Initialization of matrix A with structural connectivity instead of GC was also tested.





Figure 1. A DCM for high-gamma-activity (HGA). First column: brain regions and model configurations tested (top) and corresponding Granger-Causality (GC) matrix (bottom). Second column: model predictions compared to experimental HGA profiles (top) and relation between GC and estimated effective connectivity matrix A (bottom).
Acknowledgements
A.B. and P.G-R were supported by EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project).
References


[1] Zeidman, P., Friston, K., & Parr, T. (2023). A primer on Variational Laplace (VL). Neuroimage, 279, 120310. https://doi.org/10.1016/j.neuroimage.2023.120310
[2] Chen, C. C., Kiebel, S. J., & Friston, K. J. (2008). Dynamic causal modelling of induced responses. Neuroimage, 41(4), 1293–1312. https://doi.org/10.1016/j.neuroimage.2008.03.026
[3] Brovelli, A., Chicharro, D., Badier, J.-M., Wang, H., & Jirsa, V. (2015). Characterization of cortical networks and corticocortical functional connectivity mediating arbitrary visuomotor mapping. Journal of Neuroscience, 35(37), 12643–12658. https://doi.org/10.1523/JNEUROSCI.4892-14.2015

Speakers
Matthieu Gilson, Junior Professor Chair, Aix-Marseille University
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P092: Reversal of Wave Direction in Unidirectionally Coupled Oscillator Chains
Sunday July 6, 2025 17:20 - 19:20 CEST
P092 Reversal of Wave Direction in Unidirectionally Coupled Oscillator Chains

Richard Gast*1, Guy Elisha2, Sara A. Solla3,4, Neelesh A. Patankar5

1Department of Neuroscience, The Scripps Research Institute, San Diego, US
2Brain Mind Institute, EPFL, Lausanne, Switzerland
3Department of Neuroscience, Northwestern University, Evanston, US
4Department of Physics and Astronomy, Northwestern University, Evanston, US
5Department of Mechanical Engineering, Northwestern University, Evanston, US

*Email: rgast@scripps.edu

Introduction: Chains of coupled oscillators have been used to model animal behavior such as crawling, swimming, and peristalsis [1]. In such chains, phase lags between adjacent oscillators yield a propagating wave, which can either be anterograde (from proximal to distal) or retrograde (from distal to proximal). Switches in the direction of wave propagation have been related to increased flexibility, but also to pathology in biological systems. In Drosophila larvae, for example, switches in wave propagation are required for crawling, which has been achieved in a coupled oscillator chain model by applying an extrinsic input to distinct ends of the chain [2].



Methods: In this work, we explore a different, novel mechanism for reversing the wave propagation direction in a chain of unidirectionally coupled limit cycle oscillators. Instead of requiring tuned coupling or precisely timed local inputs, changes in the global extrinsic drive to the chain of oscillators suffice to control the direction of wave propagation. To this end, we consider a chain of unidirectionally coupled Wilson-Cowan (WC) oscillators [3]. The system is driven by SE and SI, which are extrinsic inputs globally applied to all excitatory and inhibitory populations in the chain, respectively.
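A minimal sketch of such a chain, assuming the classic Wilson-Cowan gain functions and parameters known to produce a limit cycle; SE and SI enter as uniform drives and c is the feedforward coupling (all values illustrative). The sign of the lag between the first and last excitatory populations indicates the wave direction.

import numpy as np

Se = lambda x: 1 / (1 + np.exp(-1.3 * (x - 4.0)))     # E-population gain
Si = lambda x: 1 / (1 + np.exp(-2.0 * (x - 3.7)))     # I-population gain

N, steps, dt = 10, 40000, 0.05
SE, SI, c = 1.25, 0.0, 0.4          # global drives, feedforward coupling
E = 0.1 + 0.01 * np.random.rand(N)
I = 0.05 * np.ones(N)
trace = np.zeros((steps, N))
for k in range(steps):
    ff = np.concatenate(([0.0], c * E[:-1]))          # unidirectional E -> E drive
    dE = -E + (1 - E) * Se(16 * E - 12 * I + SE + ff)
    dI = -I + (1 - I) * Si(15 * E - 3 * I + SI)
    E, I = E + dt * dE, I + dt * dI
    trace[k] = E

# lag of maximal correlation between proximal and distal E traces; a positive
# lag means the distal end follows the proximal end (anterograde wave)
a = trace[-4000:, 0] - trace[-4000:, 0].mean()
b = trace[-4000:, -1] - trace[-4000:, -1].mean()
lag = np.correlate(b, a, mode="full").argmax() - (len(a) - 1)
print("propagation lag (time steps):", lag)
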


Results: Combining numerical simulations and bifurcation analysis, we show that waves can propagate in anterograde or retrograde directions in the unidirectional chain of WC oscillators, despite uniform coupling and extrinsic input strengths across the chain [4]. We find that the direction of propagation is controlled by a disparity between the intrinsic frequency of the proximal oscillator and that of the more distal oscillators in the chain (see figures in [4]). The transition between these two behaviors finds explanation in the proximity of the chain's operational regime to a homoclinic bifurcation point, where small changes in the input translate to strong shifts in the oscillation period.

Discussion: Lastly, we discuss wave propagation in the context of phase oscillator networks. We describe a direct relationship between the intrinsic frequency differences between the proximal and distal chain elements, and the phase shift parameter of a phase coupling function [4]. This way, we analytically extend our numerical results to a more general phase oscillator model. Our work emphasizes the functional role that the existence of a homoclinic bifurcation plays for activity propagation in neural systems. The ability of this mechanism to operate on time scales as fast as the neural activity itself suggests that it could dynamically emerge in a variety of biological systems.





Acknowledgements
This work was funded by the National Institutes of Health (NIDDK Grant No. DK079902 and No. DK117824) and the National Science Foundation (OAC Grant No. 1931372).
References
[1] Kopell, N., & Ermentrout, G. B. (2003). The Handbook of Brain Theory and Neural Networks.
[2] Gjorgjieva, J., Berni, J., Evers, J. F., & Eglen, S. J. (2013). Frontiers in Computational Neuroscience, 7, 24.
[3] Wilson, H. R., & Cowan, J. D. (1972). Biophysical Journal, 12(1), 1-24.
[4] Elisha, G., Gast, R., Halder, S., Solla, S. A., Kahrilas, P. J., Pandolfino, J. E., & Patankar, N. A. (2025). Physical Review Letters, 134(5), 058401.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P093: A method for generalizing validated single neuron models to arbitrary dendritic tree morphologies
Sunday July 6, 2025 17:20 - 19:20 CEST
P093 A method for generalizing validated single neuron models to arbitrary dendritic tree morphologies

Naining Ge, Linus Manubens-Gil*, Hanchuan Peng*

Institute for Brain and Intelligence, Southeast University, Nanjing, China

* Email: linus.ma.gi@gmail.com

* Email: h@braintell.org
Introduction

Single neuron models are vital for probing neuronal excitability, yet their electrophysiological properties remain tightly coupled to individual morphologies as in databases like the Allen Cell Types [1], hindering structure-function studies. Current frameworks, such as evolutionary algorithms linking morphology to electrical parameters [2] and compartment-specific adaptations based on input resistance [3], lack scalability, raising questions about robustness when applied to the variability observed in thousands of neurons.


Methods
We introduced a method to adjust single neuron models using morphological features and to validate their generalizability. We tested whether adjusting membrane conductances proportionally to dendritic surface ratios in thousands of single neuron morphologies enables robust generalization of electrophysiological features across morphologies. We validated generalization via two simulation phases: (1) each Allen-fitted model, and (2) each generalized model adapted to the remaining same-species morphologies. We compared electrophysiological features from the Allen-fitted models and from simulations (1) and (2) against experimental data. We used an MLP to further refine parameters using morphological features.
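The core adjustment can be sketched as follows, assuming that preserving total dendritic conductance (conductance density times surface area) is the normalization used; the function and variable names are hypothetical.

def generalize_conductances(fitted_gbar, donor_area_um2, target_area_um2):
    """Scale per-area conductance densities so that the total dendritic
    conductance (gbar * area) is preserved on the new morphology.
    Illustrative sketch; per-mechanism treatment is an assumption."""
    ratio = donor_area_um2 / target_area_um2
    return {mech: g * ratio for mech, g in fitted_gbar.items()}

fitted = {"pas": 3e-5, "Ih": 1e-4}            # S/cm^2, illustrative values
print(generalize_conductances(fitted, donor_area_um2=4200.0, target_area_um2=6100.0))
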

Results
Total dendritic surface area emerged as a decisive morphological feature that correlates with various experimentally measured electrophysiological features (e.g., rheobase, frequency-intensity slope). Generalization using the method proposed by Arnaudon et al. [3] led to artifactual firing properties in a large subset of the tested morphologies. When we generalized models by normalizing total dendritic passive conductance, models showed responses within experimental ranges, demonstrating good biological fidelity. MLP-based prediction reached a 15% mean absolute error on model parameter sets.

Discussion
Our results suggest a promising path towards generalization of validated single neuron models to arbitrary morphologies within a defined electrophysiological cell type. By adapting existing validated models to a broad range of single neuron morphologies, our method offers a framework for large-scale studies of structure-function relationships in neurons and establishes a foundation for optimization of multi-scale neural networks.





Acknowledgements
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 32350410413 awarded to LMG.
References
[1] https://doi.org/10.1038/s41467-017-02718-3
[2] https://doi.org/10.1016/j.patter.2023.100855
[3] https://doi.org/10.1016/j.isci.2023.108222
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P094: Two coupled networks with a tricritical phase boundary between awake, unconscious and dead states capture cortical spontaneous activity patterns
Sunday July 6, 2025 17:20 - 19:20 CEST
P094 Two coupled networks with a tricritical phase boundary between awake, unconscious and dead states capture cortical spontaneous activity patterns


Maryam Ghorbani1,2*, Negar Jalili Mallak3, Mayank R. Mehta4,5,6
1Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
2Rayan Center for Neuroscience and Behavior, Ferdowsi University of Mashhad, Mashhad, Iran
3School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ
4UCLA, W.M. Keck Center for Neurophysics, Department of Physics and Astronomy, Los Angeles
5UCLA, Department of Neurology, Los Angeles, CA, United States of America
6UCLA, Department of Electrical and Computer Engineering, Los Angeles
*Email: maryamgh@um.ac.ir



Introduction
A major goal in systems neuroscience is to develop biophysical yet minimal theories that can explain diverse aspects of in vivo data accurately to reveal the underlying mechanisms. Under a variety of conditions, cortical activity shows spontaneous Up- and Down-state (UDS) fluctuations (1, 2). They are synchronous across vast neural ensembles, yet quite noisy, with highly variable amplitudes and durations (3). Here we tested the hypothesis that this complex pattern can be captured by just two weakly coupled, noiseless, excitatory-inhibitory (E-I) networks.
Methods
The model consisted of two mean-field E-I networks with recurrent, long-range excitatory connections. LFP and single-unit responses were measured with tetrodes from various parts of the parietal and frontal cortices of 8 naturally resting rats. Parietal cortical LFP was measured from the deeper parts of the neocortex in 116 anesthetized mice. The animals were anesthetized with urethane only once, during this recording session.
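A minimal sketch of the ingredients described, two mutually coupled E-I rate units with slow adaptation, where the adaptation strength b and the inter-network coupling J play the role of the two free parameters varied in the study (all values illustrative, not the fitted model):

import numpy as np

rng = np.random.default_rng(2)
dt, steps = 0.5, 40000                      # ms
tau_e, tau_a = 10.0, 500.0                  # rate and adaptation time constants
b, J = 1.8, 0.05                            # adaptation strength, inter-net coupling

def gain(x):
    return np.maximum(np.tanh(x), 0.0)       # rectified gain

E = np.zeros(2); A = np.zeros(2)
up = 0
for k in range(steps):
    inp = 2.2 * E - b * A + J * E[::-1] + 0.15 * rng.normal(size=2)
    E += dt / tau_e * (-E + gain(inp))
    A += dt / tau_a * (-A + E)              # slow activity-driven adaptation
    up += (E.mean() > 0.3)                  # crude Up-state detector
print("fraction of time in Up state:", up / steps)
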
Results
The model could reproduce recently observed periodic versus highly variable UDS in strongly versus weakly coupled organoids respectively. The same model could quantitatively capture the differential patterns of UDS in vivo during anesthesia and natural NREM sleep. Further, by varying just two free parameters, the strength of adaptation and of recurrent connection between the two networks, we made 18 quantitative predictions about the complex properties of UDS. These not only matched experimental data in vivo, but could reproduce and explain the systematic differences across electrodes and animals.
Discussion
The model revealed that the cortex remains close to the awake-UDS phase boundary in all the sleep sessions but near the awake-UDS-dead tricritical phase boundary during anesthesia. Thus, just two weakly coupled mean-field networks, with only two biophysical parameters, can accurately capture cortical spontaneous activity patterns under a variety of conditions. This has several applications, from understanding stimulus response variability, to anesthesia and cortical state transitions between awake, asleep and unconscious states.





Acknowledgements
None
References

1. https://doi.org/10.1523/JNEUROSCI.13-08-03252.1993
2. https://doi.org/10.1523/JNEUROSCI.19-11-04595.1999
3. https://doi.org/10.1523/JNEUROSCI.0279-06.2006


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P095: Network Complexity For Whole-Brain Dynamics Estimated From fMRI Data
Sunday July 6, 2025 17:20 - 19:20 CEST
P095 Network Complexity For Whole-Brain Dynamics Estimated From fMRI Data

Matthieu Gilson*1, Gorka Zamora-López2

1Faculty of Medicine, Aix-Marseille University, Marseille, France
2Center for Brain and Cognition, University Pompeu Fabra, Barcelona, Spain

*Email: matthieu.gilson@univ-amu.fr

Introduction


The study of complex networks has grown rapidly over the past decades. In particular, the study of the brain as a network has benefited from the increasing availability of datasets such as magnetic resonance imaging (MRI). This has generated invaluable insights about cognition, with subjects performing tasks in the scanner, as well as about its alterations, yielding a better understanding of neuropathologies.

Methods
Here we review our recent work on the estimation of effective connectivity (EC) at the whole-brain level [1]. In a nutshell, a network model can be optimized to reproduce and characterize the subject- and task-specific dynamics. This EC is further constrained by the anatomy (via the network topology), yielding a signature of the brain dynamics. Instead of using directly EC as a biomarker, we have recently switched to a network-oriented analysis based on the estimated model, after fitting to data.
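The EC model in [1] is built on a multivariate Ornstein-Uhlenbeck process; as a sketch of the forward step used during optimization, the covariance implied by a candidate EC matrix can be obtained from the continuous Lyapunov equation and compared with empirical FC (all values illustrative):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n, tau = 4, 1.0
C = np.array([[0.0, 0.3, 0.0, 0.0],          # candidate EC matrix (illustrative)
              [0.0, 0.0, 0.4, 0.0],
              [0.2, 0.0, 0.0, 0.3],
              [0.0, 0.0, 0.0, 0.0]])
Sigma = np.eye(n)                            # input-noise covariance

J = -np.eye(n) / tau + C                     # Jacobian of the OU process
Q0 = solve_continuous_lyapunov(J, -Sigma)    # zero-lag covariance: J Q0 + Q0 J^T = -Sigma
d = np.sqrt(np.diag(Q0))
print("model FC (correlation):")
print(np.round(Q0 / np.outer(d, d), 2))
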


Results

In a recent application, we showed how our model-based approach uncovers differences between subjects with disorders of consciousness, from coma (UWS) to minimally conscious (MCI) and controls (awake) [2]. We find that the discrimination across patient types (and controls) can be quantitatively related to measuring whether the modeled stimulation response affects the whole network of brain regions. These results can further be interpreted in terms of over-segregation for UWS just after the stimulation, but more importantly a lack of integration in the sense of propagation of the response to the whole network late after the stimulation. In other words, we obtain personalized and interpretable biomarkers based on the brain dynamics.




Discussion

This framework can be used to quantify network complexity based on in-silico stimulation of a network model whose dynamics are estimated from ongoing data (i.e. without experimental stimulation). We will also discuss this approach with recent work based on statistical physics of out-of-equilibrium dynamic systems (related to time reversibility) that can also be interpreted in terms of network complexity [3].





Acknowledgements
MG received support from the French government under the France 2030 investment plan, under the agreement Chaire de Professeur Junior (ANR-22-CPJ2-0020-01) and as part of the Initiative d’Excellence d’Aix-Marseille Université – A*MIDEX (AMX-22-CPJ-01).

References

[1] https://doi.org/10.1162/netn_a_00117
[2] https://doi.org/10.1002/hbm.26386
[3] https://doi.org/10.1103/PhysRevE.107.024121
Speakers
Matthieu Gilson, Junior Professor Chair, Aix-Marseille University
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P096: Coding and information processing with firing threshold adaptation near criticality in E-I networks
Sunday July 6, 2025 17:20 - 19:20 CEST
P096 Coding and information processing with firing threshold adaptation near criticality in E-I networks

Mauricio Girardi-Schappo*1, Leonard Maler2, André Longtin3


1Departamento de Física, Universidade Federal de Santa Catarina, Florianopolis SC 88040-900, Brazil

2Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa ON K1H 8M5, Canada

3Department of Physics, University of Ottawa, Ottawa, ON K1N 6N5, Canada


*Email: girardi.s@gmail.com


Introduction
The brain can encode information in output firing rates of neuronal populations or in spike patterns. Weak inputs have limited impact on output rates, which challenges rate coding as a sole explanatory mechanism for sensory processing. Spike patterns contribute to perception and memory via sparse, combinatorial codes, enhancing memory capacity and information transmission [1,2]. Here, we compare these two forms of coding in a neural network with and without threshold adaptation of excitatory neurons, including or excluding inhibitory neurons. This extends our previous study and assesses the impact of inhibition on the coding properties of adaptive networks.
Methods
We model a recurrent excitatory network incorporating an inhibitory population of neurons, which, in line with biological evidence, acts as a stochastic process independent of immediate excitatory spikes [3-5]. Networks with and without threshold adaptation are compared using measures of pattern coding, rate coding, and mutual information [6]. We examine whether threshold adaptation maintains its functional advantages when weakly coupled inhibitory inputs are introduced. The results are analyzed in the light of self-organized (quasi-)criticality [7], and a new theory for near-critical information processing is proposed.
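A minimal sketch of the mechanism, assuming stochastic binary neurons whose thresholds jump after a spike and recover over a few hundred milliseconds, with a fixed-threshold control for comparison (parameters illustrative, not the study's model):

import numpy as np

rng = np.random.default_rng(0)
N, steps = 500, 5000                         # neurons, 1 ms steps
tau_th, d_th = 200.0, 0.002                  # threshold recovery (ms) and increment
J, slope, h_weak = 0.3, 0.1, 0.02            # recurrent gain, gain width, weak input

def run(adaptive=True):
    theta = np.zeros(N)
    s = (rng.random(N) < 0.5).astype(float)
    rates = []
    for _ in range(steps):
        drive = J * s.mean() + h_weak - theta
        s = (rng.random(N) < 1 / (1 + np.exp(-drive / slope))).astype(float)
        if adaptive:
            theta += -theta / tau_th + d_th * s   # spike-triggered threshold rise
        rates.append(s.mean())
    return np.array(rates[1000:])

for mode in (True, False):
    r = run(mode)
    print("adaptive" if mode else "fixed   ",
          "mean rate %.3f, rate variance %.2e" % (r.mean(), r.var()))
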
Results
In the limit of weak inhibition, threshold adaptation maintains its ability to enhance coding of weak inputs via firing rate variance. Adaptive networks facilitate a smooth transition from pattern to rate coding, optimizing both coding strategies. This dynamic is lost in non-adaptive networks, which require stronger inputs for pattern coding. Constant-threshold networks rely on supercritical states for pattern coding, whereas adaptation allows robust coding through near-critical dynamics. A threshold recovery timescale of 100 ms to 1000 ms is found to favor the pattern coding of weak inputs, matching experimental observations in dentate gyrus neurons [5]. However, the dynamic range of adaptive networks matches the subcritical regime of constant-threshold networks, contrary to what would be expected from the theory of self-organized criticality alone.
Discussion

Threshold adaptation is a biologically relevant mechanism that enhances weak stimulus processing by pattern coding, while keeping the capacity to perform rate coding of strong inputs. The optimal recovery timescale aligns with observations in the hippocampus and other brain regions. Adaptation improves information transmission, feature selectivity, and neural synchrony [8], supporting its role in sensory discrimination and memory tasks. Our findings reinforce the idea that weakly coupled inhibition does not disrupt threshold adaptation’s advantages, suggesting it is a robust coding mechanism across diverse neural circuits.



Acknowledgements
The authors thank financial support through NSERC grants BCPIR/493076-2017 and RGPIN/06204-2014 and the University of Ottawa’s Research Chair in Neurophysics under Grant No. 123917. M.G.-S. thanks financial support from Fundacao de Amparo a Pesquisa e Inovacao do Estado de Santa Catarina (FAPESC), Edital 21/2024 (grant n. 2024TR002507).
References
1. https://doi.org/10.1016/j.conb.2004.07.007
2. https://doi.org/10.1523/JNEUROSCI.3773-10.2011
3. https://doi.org/10.1038/s41583-019-0260-z
4. https://doi.org/10.1152/jn.00811.2015
5. https://doi.org/10.1101/2022.03.07.483263
6. https://doi.org/10.1007/s10827-007-0033-y
7. https://doi.org/10.1016/j.chaos.2022.111877
8. https://doi.org/10.1523/JNEUROSCI.4906-04.2005
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P097: Towards Optimized tACS Protocols: Combining Experimental Data and Spiking Neural Networks with STDP
Sunday July 6, 2025 17:20 - 19:20 CEST
P097 Towards Optimized tACS Protocols: Combining Experimental Data and Spiking Neural Networks with STDP

Camille Godin*1, Jean-Philippe Thivierge1,2

1School of Psychology, University of Ottawa, Ottawa, Canada.
2Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada


*Email: cgodi104@uottawa.ca
Introduction: Abnormal neuronal synchrony is linked to various pathological conditions, often manifesting as either excessive [1] or reduced oscillatory activity [2]. Thus, modulating brain oscillations through transcranial electric stimulation (TES) could help restore healthy activity. However, TES outcomes remain inconsistent [3], emphasizing the need for a deeper understanding of its interaction with neural dynamics. Transcranial alternating current stimulation (tACS), a form of oscillatory TES, allows for diverse waveforms, yet sinusoidal stimulation remains the predominant choice in both experimental and clinical settings. Optimizing stimulation parameters could improve efficacy and reduce variability in outcomes, making TES a more reliable tool.

Methods: We modeled a Spiking Neural Network (SNN) of 1000 excitatory-inhibitory Izhikevich neurons with sparse, recurrent connectivity. We first aimed to replicate neural patterns observed in experimental data [4], where local field potential (LFP) signals were recorded from area V4 of a macaque monkey receiving sinusoidal tACS at 5, 10, 20 and 40 Hz, and SHAM. We tuned the model to match the SHAM condition (Fig 1A), characterized by noisy delta oscillations, and then introduced external inputs to mimic experimental protocols. Next, we implemented Spike-Timing-Dependent Plasticity (STDP) on excitatory connections (Fig 1B) and used the model to explore the effects of alternative stimulation waveforms and frequencies.
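A minimal sketch along these lines, following the standard Izhikevich (2003) network formulation with a square-wave common current standing in for tACS and a simplified pair-based STDP rule (not the study's exact implementation; connectivity kept dense for brevity):

import numpy as np

rng = np.random.default_rng(0)
Ne, Ni = 800, 200; N = Ne + Ni               # excitatory / inhibitory counts
re, ri = rng.random(Ne), rng.random(Ni)
a = np.r_[0.02 * np.ones(Ne), 0.02 + 0.08 * ri]
b = np.r_[0.2 * np.ones(Ne), 0.25 - 0.05 * ri]
c = np.r_[-65 + 15 * re**2, -65 * np.ones(Ni)]
d = np.r_[8 - 6 * re**2, 2 * np.ones(Ni)]
S = np.hstack([0.5 * rng.random((N, Ne)), -rng.random((N, Ni))])

v = -65.0 * np.ones(N); u = b * v
trace = np.zeros(N)                          # spike trace for pair-based STDP
freq, amp = 10.0, 2.0                        # 10 Hz square-wave "tACS" current
for t in range(1000):                        # 1 ms time steps
    stim = amp * np.sign(np.sin(2 * np.pi * freq * t / 1000.0))
    I = np.r_[5 * rng.normal(size=Ne), 2 * rng.normal(size=Ni)] + stim
    fired = np.where(v >= 30)[0]
    v[fired] = c[fired]; u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)
    for _ in range(2):                       # two 0.5 ms sub-steps for stability
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    trace *= np.exp(-1.0 / 20.0); trace[fired] += 1.0
    pre_e = fired[fired < Ne]                # excitatory presynaptic spikes
    S[np.ix_(fired, np.arange(Ne))] += 1e-3 * trace[:Ne]   # LTP (post after pre)
    S[:, pre_e] -= 1.2e-3 * trace[:, None]                 # LTD (pre after post)
    np.clip(S[:, :Ne], 0.0, 1.0, out=S[:, :Ne])
print("mean excitatory weight after stimulation:", S[:, :Ne].mean())
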
Results: We performed a series of simulations using a baseline model tuned to SHAM (~3 Hz). The 40 Hz stimulation produced the largest relative increase in power at its respective frequency compared to SHAM (Fig 1A). Both square and negative sawtooth waves consistently outperformed sinusoidal stimulation in increasing delta-gamma broadband power (Fig 1C). When tracking the evolution of outward excitatory synaptic connections, it appears that square waves near 10 Hz induce the strongest synaptic changes between pre- and post-stimulation, relative to the other tested shapes (Fig 1C). Notably, the STDP model captured the harmonics observed in experimental data more accurately than the non-plastic model.
Discussion: These findings highlight the relevance of Izhikevich-based SNNs with STDP for optimizing tACS protocols and improving their therapeutic potential. While sinusoidal waveforms remain the standard in tACS, our results suggest that square and negative sawtooth waves may be more effective at enhancing low-frequency synchronous activity in populations oscillating within the delta-theta range. Additionally, square waves around 10 Hz induced stronger connectivity changes than other frequencies, aligning with experimental protocols to induce plasticity [5]. We argue that exploring diverse stimulation parameters is crucial to maximize the effectiveness of tACS for sustained network modifications and long-term effects on neural dynamics.



Figure 1. Fig 1. A) Left: SHAM condition in experiments and simulations. Right: Normalized relative power increase at four tACS frequencies. B) STDP integration in the SNN on excitatory connections, with weight distribution changes (black dot = centroid). C) Left: Changes in broadband power between baseline and inputs (no STDP). Right: Post-stimulation centroid relative to baseline, shifts across inputs.
Acknowledgements
We thank C. C. Pack, P. Vieira and M. R. Krause.
References
1. https://doi.org/10.1016/j.clinph.2018.11.013
2. https://doi.org/10.2147/NDT.S425506
3. https://doi.org/10.1371/journal.pbio.3001973
4. https://doi.org/10.1073/pnas.1815958116

5. https://doi.org/10.3389/fncir.2023.1124221
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P098: DelGrad: Exact event-based gradients in spiking networks for training delays and weights
Sunday July 6, 2025 17:20 - 19:20 CEST
P098 DelGrad: Exact event-based gradients in spiking networks for training delays and weights

Julian Göltz*+1,2, Jimmy Weber*3, Laura Kriener*3,2,
Sebastian Billaudelle3,1, Peter Lake1, Johannes Schemmel1,
Melika Payvand$3, Mihai A. Petrovici$2

1Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
2Department of Physiology, University of Bern, Bern, Switzerland
3Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland

*Shared first authorship; $Shared senior authorship
+Email: julian.goeltz@kip.uni-heidelberg.de
Introduction
Spiking neural networks (SNNs) inherently rely on the timing of signals for representing and processing information. Incorporating trainable transmission delays, alongside synaptic weights, is crucial for shaping these temporal dynamics. While recent methods have shown the benefits of training delays and weights in terms of accuracy and memory efficiency, they rely on discrete time, approximate gradients, and full access to internal variables like membrane potentials [1]. This limits their precision, efficiency, and suitability for neuromorphic hardware due to increased memory requirements and I/O bandwidth demands.



Methods
To alleviate these issues, and building on prior work on exact gradients in SNNs [2], we propose an analytical approach for calculating exact gradients of the loss with respect to both synaptic weights and delays in an event-based fashion. The inclusion of delays emerges naturally within our proposed formalism, enriching the model’s parameter search space with a temporal dimension (Fig. 1a). Our algorithm is purely based on the timing of individual spikes and does not require access to other variables such as membrane potentials.
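The core idea behind such exact spike-time gradients can be sketched via implicit differentiation of the threshold condition V(t*) = θ: for a current-based kernel ε, dt*/dd_i = w_i ε′(t* − t_i − d_i) / Σ_j w_j ε′(t* − t_j − d_j). The sketch below illustrates this general technique (not the paper's full formalism) and verifies it against finite differences.

import numpy as np
from scipy.optimize import brentq

tau = 5.0
eps  = lambda s: np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)  # alpha PSP
deps = lambda s: np.where(s > 0, (1 / tau) * np.exp(1 - s / tau) * (1 - s / tau), 0.0)

w = np.array([0.6, 0.5]); t_in = np.array([0.0, 1.0]); d = np.array([1.0, 2.0])
theta = 0.8                                       # firing threshold

V = lambda t: np.sum(w * eps(t - t_in - d))
t_star = brentq(lambda t: V(t) - theta, 1.01, 6.0)   # first threshold crossing

# exact gradient via implicit differentiation of V(t*) = theta
num = w * deps(t_star - t_in - d)
grad = num / num.sum()
print("t* =", round(t_star, 4), " dt*/dd =", np.round(grad, 4))

# finite-difference check on the first delay
h = 1e-6; d2 = d + np.array([h, 0.0])
t2 = brentq(lambda t: np.sum(w * eps(t - t_in - d2)) - theta, 1.01, 6.0)
print("numeric dt*/dd0 =", round((t2 - t_star) / h, 4))
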


Results
We investigate the impact of delays on accuracy and parameter efficiency both in ideal and hardware-aware simulations on the Yin-Yang classification task [3]. Furthermore, while previous work on learnable delays in SNNs has been mostly confined to software simulations, we demonstrate the functionality and benefits of our approach on the BrainScaleS-2 neuromorphic platform [4, Fig. 1b], successfully training on-chip-delays, and showing a good correspondence to our hardware-aware simulations (Fig. 1c,d).




Discussion
DelGrad presents an event-based framework for gradient-based co-training of weight and delay parameters, without any approximations. For the first time, we experimentally demonstrate the memory efficiency and accuracy benefits of adding delays to SNNs on noisy mixed-signal hardware. Additionally, these experiments reveal the potential of delays for stabilizing networks against noise. DelGrad opens a new way for training SNNs with delays on neuromorphic hardware, resulting in fewer required parameters, higher accuracy, and easier hardware training.




Figure 1. a Information flow in an SNN, effect of weights w and delays d on the membrane potential of a neuron, and raster plot of the activity. b Photo of the neuromorphic chip BrainScaleS-2. c Comparison of networks without (blue) and with (orange) delays, showing the benefit of delays. d Our hardware-aware simulation can be used effectively as a proxy for hardware emulation, and confirms these benefits.
Acknowledgements
This work was funded by the Manfred Stärk Foundation, the EC Horizon 2020 Framework Programme under grant agreement 945539 (HBP) and Horizon Europe grant agreement 101147319 (EBRAINS 2.0), the DFG under Germany’s Excellence Strategy EXC 2181/1-390900948 (STRUCTURES Excellence Cluster), SNSF Starting Grant Project UNITE (TMSGI2-211461), and the VolkswagenStiftung under grant number 9C840.
References
[1] I. Hammouamri, et al. doi: 10.48550/arXiv.2306.17670.
[2] J. Göltz, et al. doi: 10.1038/s42256-021-00388-x.
[3] L. Kriener, et al. doi: 10.1145/3517343.3517380.
[4] C. Pehle, et al. doi: 10.3389/fnins.2022.795876.


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P099: Critical Slowing Down and the Hierarchy of Neural Timescales: A Unified Framework
Sunday July 6, 2025 17:20 - 19:20 CEST
P099 Critical Slowing Down and the Hierarchy of Neural Timescales: A Unified Framework

Leonardo L. Gollo*1,2

1Brain Networks and Modelling Laboratory and The Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia.
2Instituto de Física Interdisciplinar y Sistemas Complejos, IFISC (UIB-CSIC), Palma de Mallorca, Spain

*Email: leonardo@ifisc.uib-csic.es

Introduction
Research on brain criticality often focuses on identifying phase transitions, typically assuming that brain dynamics can be described by a single control parameter [1]. However, this approach overlooks the inherent heterogeneity of neural systems. At the neuronal level, diversity in excitability gives rise to multiple percolations and transitions, leading to complex dynamical behaviors [2]. At the macroscopic level, this heterogeneity enables the brain to operate across a broad hierarchy of timescales [3], ranging from rapid neural responses to external stimuli to slower cognitive processes [4,5]. A critical open question is how the framework of brain criticality, which emphasizes phase transitions, can be reconciled with the observed hierarchy of neural timescales.
Methods
We employed a theoretical framework integrating nonlinear dynamics and criticality theory to analyze the relationship between hierarchical timescales and proximity to criticality. Specifically, we examined the role of critical slowing down, a phenomenon in which systems near a phase transition exhibit prolonged recovery times following perturbations. Using existing empirical findings on functional brain hierarchy and criticality [6,7,8], we evaluated how regions with slower timescales align with the principles of critical slowing down. Additionally, we explored how this framework supports a balance between sensitivity and stability in neural information processing [9].
Results
Our analysis indicates that brain regions are not uniformly critical but instead positioned at varying distances from criticality. Regions with slower timescales tend to be situated closer to the critical point due to critical slowing down, while regions with faster dynamics operate in subcritical regimes. This spatiotemporal organization supports a structured coexistence of critical and subcritical dynamics, which enhances both sensitivity to external stimuli and reliable internal processing. Furthermore, this framework naturally gives rise to a hierarchy of timescales, and the coexistence of critical and subcritical dynamics enables a balance between flexibility and robustness, allowing neural systems to dynamically regulate information flow and cognitive processes [9].
Discussion
By integrating brain criticality and hierarchical timescales, our findings offer a novel perspective on neural dynamics. Instead of a uniform critical state, we propose that brain regions exist on a criticality continuum, shaped by their functional roles and temporal properties. This unified framework provides a nonlinear dynamics explanation for the brain’s timescale-based hierarchy, shedding light on its neurophysiological mechanisms. By bridging criticality and hierarchical organization, this work advances our understanding of the fundamental principles governing brain dynamics, offering a foundation for future investigations into neural computation and cognition.



Acknowledgements
We thank our colleagues and collaborators for their insightful discussions and feedback, which have enriched the development of this work. This work was supported by the Australian Research Council (ARC) Future Fellowship (FT200100942), the Ramón y Cajal Fellowship (RYC2022-035106-I), and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M/10.13039/501100011033.
References
1. https://doi.org/10.1016/j.pneurobio.2017.07.002
2. https://doi.org/10.7717/peerj.1912
3. https://doi.org/10.1371/journal.pcbi.1000209
4. https://doi.org/10.1098/rstb.2014.0165
5. https://doi.org/10.1523/JNEUROSCI.1699-24.2024
6. https://doi.org/10.1073/pnas.2208998120
7. https://doi.org/10.1371/journal.pcbi.1010919
8. https://doi.org/10.1103/PhysRevX.14.031021
9. https://doi.org/10.1098/rsif.2017.0207
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P100: Dynamic Range Revisited: Novel Methods for Accurate Characterization of Complex Response Functions
Sunday July 6, 2025 17:20 - 19:20 CEST
P100 Dynamic Range Revisited: Novel Methods for Accurate Characterization of Complex Response Functions

Jenna Richardson1, Filipe V. Torres2, Mauro Copelli2, Leonardo L. Gollo*1,3

1Brain Networks and Modelling Laboratory and The Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia.
2Departamento de Física, Centro de Ciência Exatas e da Natureza, Universidade Federal de Pernambuco, Recife, Brazil.
3Instituto de Física Interdisciplinar y Sistemas Complejos, IFISC (UIB-CSIC), Palma de Mallorca, Spain.

*Email: leonardo@ifisc.uib-csic.es

Introduction

The neuronal response function maps external stimuli to neural activity, with dynamic range quantifying the input levels producing distinguishable responses. Traditional sigmoidal response functions exhibit minimal firing rate changes at low and high inputs, with marked shifts at intermediate levels, making the 10%-90% response range a reliable dynamic range measure [1]. However, complex response functions [2-11]—such as double-sigmoid or multi-sigmoidal profiles with plateaus—challenge conventional calculations, often overestimating dynamic range. To address this, we propose a classification of response function complexity and introduce alternative dynamic range definitions for accurate quantification.
Methods
We analyzed a set of previously published empirical and computational studies featuring both simple and complex response functions. Additionally, we examined a neuronal model of a mouse retinal ganglion cell with a detailed dendritic structure, capable of generating both simple-sigmoid and complex response profiles. The model incorporated two dynamical elements that modulate energy consumption, either reducing or increasing neuronal activity, leading to the emergence of double-sigmoid response functions. To refine dynamic range estimation, we developed four alternative definitions that selectively consider only discernible response variations while excluding plateaus. These methods were evaluated by comparing their performance with the conventional definition across a range of response functions.
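For reference, the conventional measure is Δ = 10·log10(S_90/S_10), with S_x the stimulus evoking x% of the response range. The sketch below shows how a double-sigmoid profile inflates it (curves are illustrative, not the retinal ganglion cell model):

import numpy as np

def dynamic_range(S, F, lo=0.1, hi=0.9):
    """Conventional definition: Delta = 10*log10(S_hi / S_lo), where S_x is the
    stimulus producing F_min + x*(F_max - F_min). Assumes monotonic F."""
    Fn = (F - F.min()) / (F.max() - F.min())
    S_lo = S[np.searchsorted(Fn, lo)]
    S_hi = S[np.searchsorted(Fn, hi)]
    return 10 * np.log10(S_hi / S_lo)

S = np.logspace(-3, 3, 601)                   # stimulus intensity
x = np.log10(S)
simple = 1 / (1 + np.exp(-2 * x))                               # single sigmoid
double = (0.5 / (1 + np.exp(-4 * (x + 1.5)))
          + 0.5 / (1 + np.exp(-4 * (x - 1.5))))                 # with plateau
print("simple :", round(dynamic_range(S, simple), 1), "dB")
print("double :", round(dynamic_range(S, double), 1), "dB  (plateau inflates it)")
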
Results
Our findings confirm that the conventional 10%-90% dynamic range definition is effective for simple response functions but often inflates the estimated range for complex profiles due to the inclusion of plateau regions. In contrast, our proposed alternative definitions successfully differentiate meaningful response regions from indistinguishable input levels. Each method produced results that aligned with conventional calculations for simple response functions while offering a more precise generalization for complex cases. Moreover, the neuronal model demonstrated that specific modifications in dendritic dynamics can induce complex response profiles, reinforcing the necessity of improved measurement approaches.
Discussion
Our study reveals the limitations of traditional dynamic range definitions in capturing neuronal response diversity. The proposed classification and alternative calculations reduce arbitrary assumptions, enhancing accuracy across neuronal systems. These methods are generalizable beyond neuroscience, applicable to fields with complex, nonlinear dynamics. Freely available computational tools promote adoption and refinement. By improving dynamic range estimation, this work enhances our understanding of complex response functions.



Acknowledgements
This work was supported by the Australian Research Council (ARC) Future Fellowship (FT200100942), the Ramón y Cajal Fellowship (RYC2022-035106-I), and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M/10.13039/501100011033.
References
1. https://doi.org/10.1371/journal.pcbi.1000402
2. https://doi.org/10.1016/S0896-6273(02)01046-2
3. https://doi.org/10.1103/PhysRevE.85.011911
4. https://doi.org/10.1016/S0378-5955(02)00293-9
5. https://doi.org/10.1021/ja209850j
6. https://doi.org/10.1006/bbrc.1999.1375
7. https://doi.org/10.1103/PhysRevE.85.040902
8. https://doi.org/10.1038/srep03222
9. https://doi.org/10.1038/s41598-023-34454-8
10. https://doi.org/10.1073/pnas.0904784106
11. https://doi.org/10.1007/978-1-4419-0194-1_10
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P101: How random connectivity shapes fluctuations of finite-size neural populations
Sunday July 6, 2025 17:20 - 19:20 CEST
P101 How random connectivity shapes fluctuations of finite-size neural populations

Nils E. Greven*1, 2, Jonas Ranft3, Tilo Schwalger1,2

1Department of Mathematics, Technische Universität Berlin, Berlin, Germany
2Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
3Institut de Biologie de l’ENS, Ecole normale supérieure, PSL University, CNRS, Paris.

*Email: greven@math.tu-berlin.de
Introduction

A fundamental problem in computational neuroscience is to understand the variability of neural population dynamics and their response to stimuli in the brain [1,2]. Mean-field models have proven useful to study the mechanisms underlying neural variability in spiking neural networks; however, previous models that describe fluctuations typically assume either infinitely large network sizes N [3] or all-to-all connectivity [4], assumptions that seem unrealistic for cortical populations. To gain insight into the combined case of finite network size and non-full connectivity, we derive here a nonlinear stochastic mean-field model for a network of spiking Poisson neurons with quenched random connectivity.

Methods
We treat the quenched disorder of the connectivity by an annealed approximation [3] that leads to a simpler fully connected network with additional independent noise in the neurons. This annealed network enables a reduction to a low-dimensional closed system of coupled Langevin equations (MF2) for the mean and variance of the neuronal membrane potentials. We compare the theory of this mesoscopic model to simulations of the underlying microscopic model. An additional comparison to previous mesoscopic models (MF1) that neglected the recurrent noise effect caused by quenched disorder allows us to investigate and analytically understand the effects of taking quenched random connectivity and finite network size into account.
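The microscopic comparison can be sketched with a small network of Poisson rate neurons under quenched random connectivity versus an all-to-all control of equal mean coupling; all parameters are illustrative, and this is the microscopic model only, not the MF2 equations themselves.

import numpy as np

rng = np.random.default_rng(3)
N, p, steps, dt = 200, 0.2, 5000, 1.0        # neurons, connection prob., ms steps
tau, J0, mu = 20.0, 0.8, 0.4
phi = lambda h: 0.02 / (1 + np.exp(-4 * (h - 1)))   # hazard in 1/ms (max 20 Hz)

def run(W):
    h = np.full(N, mu)
    r = np.empty(steps)
    for k in range(steps):
        spikes = (rng.random(N) < phi(h) * dt).astype(float)  # Poisson spiking
        h += dt / tau * (mu - h) + W @ spikes                 # membrane potentials
        r[k] = spikes.mean() / dt
    return r

W_quenched = (rng.random((N, N)) < p) * (J0 / (p * N))  # fixed random connectivity
W_annealed = np.full((N, N), J0 / N)                    # fully connected control
for name, W in (("quenched  ", W_quenched), ("all-to-all", W_annealed)):
    print(name, "population-rate variance:", np.var(run(W)[500:]))
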

Results
In comparison, the novel mesoscopic model MF2 describes the fluctuations and nonlinearities of finite-size neuronal populations well and outperforms MF1. This effect can be analytically understood as a softening of the effective nonlinearity of the population transfer function (Fig 1A). The mesoscopic theory predicts a large effect of the connection probability (Fig 1B) and stimulus strength on the variance of the population firing rate (Fig 1C, D) that MF1 cannot sufficiently explain.

Discussion
In conclusion, our mesoscopic theory elucidates how disordered connectivity shapes nonlinear dynamics and fluctuations of neural populations at the mesoscopic scale, and showcases a useful mean-field method to treat non-full connectivity in finite-size spiking neural networks. In the work presented here, we investigated the effect of quenched randomness on finite networks of Poisson neurons. As an extension, we can analyze the annealed approximation for networks of integrate-and-fire neurons with reset.




Figure 1. A) The population transfer function F for MF2 (blue) is flatter than for MF1 (yellow), resulting in different fixed points (intersections with the black line). B) MF2 captures the dependence of the variance of the population firing rate r on the connection probability p; MF1 is p-independent. C,D) The variance of r for different external drives μ differs massively between MF1 and MF2 and across network sizes.
Acknowledgements
We are grateful to Jakob Stubenrauch for useful comments on the manuscript.
References
[1] M. M. Churchland, et al., Nat. Neurosci. 13, 369 (2010).

[2] G. Hennequin, Y. Ahmadian, D. B. Rubin, M. Lengyel, K. D. Miller, Neuron 98, 846 (2018).

[3] N. Brunel, V. Hakim, Neural Comput. 11, 1621 (1999).

[4] T. Schwalger, M. Deger, W. Gerstner, PLoS Comput. Biol. 13, e1005507 (2017).
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P102: Spontaneous emergence of slow ramping prior to decision states in a brain-constrained model of fronto-temporal cortical areas
Sunday July 6, 2025 17:20 - 19:20 CEST
P102 Spontaneous emergence of slow ramping prior to decision states in a brain-constrained model of fronto-temporal cortical areas

Nick Griffin1*, Aaron Schurger2,3, Max Garagnani1,4*
1Department of Computing, Goldsmiths - University of London, London (UK)
2Department of Psychology, Crean College of Health and Behavioral Sciences, Chapman University, Orange, CA (USA)
3Institute for Interdisciplinary Brain and Behavioral Sciences, Chapman University, Orange, CA (USA)
4 Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin (Germany)
* Corresponding authors; emails: ngrif003@gold.ac.uk, M.Garagnani@gold.ac.uk

Introduction
An ongoing debate exists over two prevailing interpretations of the pre-movement ramping neural signal known as the readiness potential (RP): the “early-” and “late-decision” accounts [1]. The former holds that the RP reflects planning and preparation for movement, a decision outcome. The latter holds that it is pre-decisional, emerging because a commitment is made only after activity reaches a threshold. We used a fully brain-constrained neural-network model of six human frontotemporal areas to investigate this issue and the cortical mechanisms underlying the emergence of the RP and spontaneous decisions to act, extending the previous study that developed this neural architecture [2].
Methods
The network was trained via neurobiologically realistic learning mechanisms to induce the formation of distributed perception-action cell assembly (CA) circuits. To replicate the experimental settings used to trigger the spontaneous emergence of volitional actions, we repeatedly reset its activity (trial start) and collected the resulting “WTs” (“wait times”: time steps elapsed between trial start and first spontaneous CA ignition) in the absence of external stimulation, with neural activity driven only by uniform white noise. We then compared model and human data at both the “behavioural” (WT distribution) and “neural activity” (RP index) levels, where the simulated RP was defined simply as the total firing activity of the network’s model neurons.
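As a toy illustration of this behavioural-level comparison (not the actual six-area network), wait times can be generated from a noise-driven threshold process and tested against a reference sample; the leaky-accumulator form and all parameters below are assumptions for the demonstration.

```python
import numpy as np
from scipy import stats

def wait_time(threshold=1.0, leak=0.1, noise=0.3, max_steps=100_000, rng=None):
    """Time steps until noise-driven activity first crosses threshold after a
    reset ('CA ignition'). A leaky accumulator stands in for the
    reverberating cell-assembly activity; all parameters are illustrative."""
    rng = rng or np.random.default_rng()
    x = 0.0
    for t in range(1, max_steps):
        x += -leak * x + noise * rng.normal()
        if x >= threshold:
            return t
    return max_steps

rng = np.random.default_rng(1)
simulated_wts = np.array([wait_time(rng=rng) for _ in range(500)])
reference_wts = rng.exponential(simulated_wts.mean(), 500)  # placeholder for measured WTs
print(stats.ks_2samp(simulated_wts, reference_wts))         # distributional comparison
```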
Results
We found that, for select values of the parameters, the simulated WT distribution was statistically indistinguishable from the experimentally measured one. This result was replicated in eight out of ten repeated experiments, with the variability attributed to the noise inherently present in the network. We also found that the simulated RP, displaying the characteristic non-linear buildup, could be fitted to the experimental RP, with a mean square error that was minimal for the parameter set that produced the best-fitting simulated WT distribution. Finally, and importantly, individual trials also revealed sub-threshold fluctuations in CA activity insufficient by themselves for full ignition.
Discussion
We used a 6-area deep, brain-constrained model of frontotemporal cortical areas to simulate neural and behavioural indexes of the spontaneous emergence of simple, spontaneous decisions to act. The noise-driven spontaneous reverberation of activity within CA circuits and their subsequent ignition were taken as model correlates of the emergence of “free” volitional action intentions and conscious decisions to move, respectively. Replicating both behavioural and brain indexes of spontaneous voluntary movements, the present computational architecture and simulation results offer a neuro-mechanistic explanation for the emergence of endogenous decisions to act in the human brain, providing further support for a late, stochastic account of the RP.



Acknowledgements
None.
References
1. Schurger, A., Hu, P. ‘Ben’, Pak, J., & Roskies, A. L. (2021). What Is the Readiness Potential? Trends in Cognitive Sciences, 25(7), 558–570. https://doi.org/10.1016/j.tics.2021.04.001
2. Garagnani, M., & Pulvermüller, F. (2013). Neuronal correlates of decisions to speak and act: Spontaneous emergence and dynamic topographies in a computational model of frontal and temporal areas. Brain and Language, 127(1), 75–85. https://doi.org/10.1016/j.bandl.2013.02.001
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P103: Developmental Changes in Circuit Dynamics during Hippocampal Memory Recall
Sunday July 6, 2025 17:20 - 19:20 CEST
P103 Developmental Changes in Circuit Dynamics during Hippocampal Memory Recall

Inês Guerreiro*1, Laurenz Muessig2, Thomas Wills2, Francesca Cacucci1

1Neuroscience, Physiology and Pharmacology, University College London, London, UK
2Cell and Developmental Biology, University College London, London, UK


*Email: ines.completo@gmail.com

Introduction

Replay of spike sequences during sharp wave ripples (SPW-Rs) in the hippocampus during sleep is believed to aid memory transfer from the hippocampus to the neocortex. The generation of SPW-Rs has been widely studied, but most studies focus on adult rats. Since hippocampal memory develops late in development [1,2,3], understanding the developmental changes in circuit dynamics during recall is key to uncovering how memory processing mechanisms mature over time.

Previous studies show that coordinated sequence replay emerges during development and that plasticity between co-firing cells has a higher threshold in pups than in adults [4].
Here, we investigate the mechanistic differences in replay in pups and adults.
Methods
We examined the development of hippocampal activity using LFP and single-neuron recordings from the hippocampal CA1 area during post-run sleep in rats. Rats ranging in age from postnatal day 17 (P17) to 6 months were used in our analysis.
We first analysed differences in mean firing rates between interneurons and pyramidal cells during sleep in both pups and adults to assess developmental changes in activity patterns during replay. Next, we examined the firing patterns of identified interneurons during sharp wave ripples (a standard detection approach is sketched below). By doing so, we can classify the recorded interneuron subtypes and examine their potential contributions to replay events.
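For reference, a common way to identify candidate SPW-R events from the LFP is to band-pass filter in the ripple band and threshold the Hilbert envelope; the band edges and threshold below are conventional lab choices, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(150.0, 250.0), n_sd=3.0):
    """Return (start, stop) sample indices of candidate SPW-R events:
    band-pass the LFP in the ripple band, take the Hilbert envelope, and
    threshold it at mean + n_sd standard deviations. Assumes the trace
    starts and ends below threshold; parameters are illustrative."""
    b, a = butter(3, np.array(band) / (fs / 2.0), btype="band")
    env = np.abs(hilbert(filtfilt(b, a, lfp)))
    above = env > env.mean() + n_sd * env.std()
    edges = np.flatnonzero(np.diff(above.astype(int)))
    return edges.reshape(-1, 2) if edges.size % 2 == 0 else edges[:-1].reshape(-1, 2)

fs = 1000.0
lfp = np.random.default_rng(0).normal(size=10_000)  # stand-in for a CA1 LFP trace
events = detect_ripples(lfp, fs)
```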
Results
Preliminary results show that during post-run sleep, excitatory and inhibitory neurons in pups have higher firing rates than in adults. This contrasts with run trials, where the firing rate of inhibitory neurons is lower in pups. Significant variability in interneuron spiking activity was also observed during both run and sleep, emphasizing the diversity of inhibitory interneurons in the CA1 region. Once the subclasses of interneurons and their behaviour during SPW-Rs are identified, one can develop a canonical model to examine how the CA1 circuit in pups and adults modulates sequence replay during SPW-Rs.


Discussion
Different types of interneurons participate in SPW-Rs and are recruited differently during replay events [5, 6]. Given their essential role in SPW-R generation, replay, and memory processing, understanding how inhibitory neuron activity differs between pups and adults during run and sleep trials is crucial. These developmental differences in interneuron dynamics may influence memory consolidation processes. This work aims to reveal how the CA1 microcircuit regulates the replay of temporally ordered memory patterns throughout development and to clarify the distinct roles of various inhibitory interneuron types in this process.





Acknowledgements
We acknowledge funding from the Wellcome Trust Senior Research Fellowship 220886/Z/20/Z (T.W.) and the European Research Council Consolidator Award DEVMEM (F.C.).
References
1. doi: 10.1038/nn0717-1033a. PMID: 27428652; PMCID: PMC5003643.
2. doi: 10.1126/science.1188224. PMID: 20558720; PMCID: PMC3543985.
3. doi: 10.1126/science.1188210. PMID: 20558721
4. doi: 10.1016/j.cub.2019.01.005
5. doi: 10.1523/JNEUROSCI.19-01-00274.1999
6. doi: 10.1523/JNEUROSCI.3962-09.2010. PMID: 20427657; PMCID: PMC3763476
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P104: AnalySim new features: Jupyter notebook versioning and CSV browser
Sunday July 6, 2025 17:20 - 19:20 CEST
P104 AnalySim new features: Jupyter notebook versioning and CSV browser

Uday Biswas1, Anca Doloc-Mihu2, Cengiz Günay*2


1Computer Science & Engineering, B Tech, National Institute of Technology, Rourkela, India
2Dept. Information Technology, Georgia Gwinnett College, Lawrenceville, Georgia, USA


*cgunay@ggc.edu
Introduction

In this poster, we present updates on the development of the AnalySim science gateway for data sharing and analysis. An alpha testing version of the gateway is currently hosted at https://analysim.tech, supported by the NSF-funded ACCESS advanced computing and data resource. The AnalySim gateway is open-source software whose source code is hosted at https://github.com/soft-eng-practicum/AnalySim. AnalySim aims to help with data sharing, data hosting for publications, interactive visualizations, collaborative research, and crowdsourced analysis. Special support is planned for datasets with many changing parameters and recorded measurements, such as those produced by neuronal parameter search studies with large numbers of simulations. However, AnalySim is not limited to this type of data and allows running custom analysis code in interactive notebooks. Along with JavaScript notebooks provided through ObservableHQ.com, we recently added support for Jupyter notebooks using Python and the JupyterLite library.
Methods & Results
AnalySim has been a participant in the International Neuroinformatics Coordinating Facility (INCF) Google Summer of Code (GSoC) program since 2021. Participation in GSoC 2024 both improved the user interface and added major new functionality. Parts of the user interface were given a more consistent visual style, and new pages and screens were added to support new functionality. In the backend, several changes were made: (1) to implement Jupyter notebooks; (2) to move from Azure to ACCESS infrastructure; (3) to move from blob storage to a PostgreSQL database; and (4) to enable versioning of each of multiple notebooks in one project and selection of a default notebook as the project description (a hypothetical layout for this versioning is sketched below). We are currently looking for testers of the gateway and soliciting feedback on the design, current features, and the future vision. In this poster, we will review existing features and introduce new ones from the ongoing development as part of GSoC 2025.
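As a purely hypothetical illustration of the kind of relational layout per-project notebook versioning implies (the actual AnalySim PostgreSQL schema is not described in this abstract), consider:

```python
import sqlite3

# Hypothetical schema sketch: one project holds many notebooks, each notebook
# holds many immutable versions, and the project points at a default notebook.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE project  (id INTEGER PRIMARY KEY, name TEXT,
                       default_notebook_id INTEGER);
CREATE TABLE notebook (id INTEGER PRIMARY KEY,
                       project_id INTEGER REFERENCES project(id),
                       title TEXT);
CREATE TABLE notebook_version (
    id INTEGER PRIMARY KEY,
    notebook_id INTEGER REFERENCES notebook(id),
    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
    content TEXT);  -- serialized .ipynb JSON
""")
```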
Discussion
AnalySim is developed with the vision of offering features on an interactive web platform that improves the visibility of one’s research and helps the paper review process by making it possible to reproduce others’ analyses. In addition, it aims to foster collaborative research by providing access to others' public datasets and analyses, creating opportunities to ask novel questions, to guide one's research, and to start new collaborations or join existing teams. It aims to be a “social scientific environment”, where one can fork or clone existing projects to customize them, and tag or follow researchers and projects. In addition, one can filter datasets, duplicate analyses and improve them, and then publish findings via interactive visualizations. In summary, AnalySim aims to be a GitHub-like tool specialized for scientific problems, especially when datasets are large and complex, as in parameter search.


Acknowledgements
We thank INCF and GSoC for supporting AnalySim. This work used Jetstream2 at Indiana University through allocation BIO220033 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.
References
N/A
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P105: Reciprocity-Controlled Recurrent Neural Networks: Why More Feedback Isn't Always Better
Sunday July 6, 2025 17:20 - 19:20 CEST
P105 Reciprocity-Controlled Recurrent Neural Networks: Why More Feedback Isn't Always Better

Fatemeh Hadaeghi1*, Kayson Fakhar1,2, Claus C. Hilgetag1,3



1Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg University, Hamburg Center of Neuroscience, Germany.

2MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.

3Department of Health Sciences, Boston University, Boston, MA, USA.

*Email: f.hadaeghi@uke.de





Introduction
Cortical architectures are hierarchically organized and richly reciprocal; yet, their connections exhibit microstructural and functional asymmetries: forward connections are primarily driving, backward connections driving and modulatory, with both showing laminar specificity. Despite this reciprocity, theoretical and experimental studies highlight a systematic avoidance of strong directed loops — an organizational principle captured by the no-strong-loop hypothesis — especially in sensory systems [1]. While such an organization may primarily prevent runaway excitation and maintain stability, its role in neural computation remains unclear. Here, we show that reciprocity fundamentally limits the computational capacity of recurrent neural networks.
Methods
We recently introduced efficient Network Reciprocity Control (NRC) algorithms designed to steer asymmetry and reciprocity in binary and weighted networks while preserving key structural properties [2]. In this work, we apply these algorithms to modulate reciprocity in recurrent neural networks (RNNs) within the reservoir computing (RC) framework [3]. We explore both binary and weighted connectivity in the reservoir layer, spanning random and biologically inspired architectures, including modular and small-world networks. We assess the computational capacity of these models by evaluating memory capacity (MC) and the quality of their internal representations, as measured by the kernel rank (KR) metric [4].
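The memory capacity metric has a compact standard form: train linear readouts to reconstruct increasingly delayed copies of the input from the reservoir state and sum the squared correlations. A minimal sketch follows, with generic weight scaling rather than the reciprocity-controlled matrices of the study.

```python
import numpy as np

def memory_capacity(W, W_in, u, max_delay=50, washout=100):
    """Linear memory capacity of an echo-state reservoir: sum over delays k
    of the squared correlation between the delayed input u(t-k) and its
    best linear readout from the reservoir state. W is the recurrent
    matrix (here generic; in the study, reciprocity-controlled)."""
    n, T = W.shape[0], len(u)
    x = np.zeros(n)
    X = np.empty((T, n))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])
        X[t] = x
    mc = 0.0
    for k in range(1, max_delay + 1):
        Xk, yk = X[washout:], np.roll(u, k)[washout:]   # washout > max_delay
        w, *_ = np.linalg.lstsq(Xk, yk, rcond=None)
        mc += np.corrcoef(Xk @ w, yk)[0, 1] ** 2
    return mc

rng = np.random.default_rng(0)
W = 0.9 * rng.normal(0, 1 / np.sqrt(200), (200, 200))  # spectral radius ~0.9
mc = memory_capacity(W, rng.uniform(-0.5, 0.5, 200), rng.uniform(-1, 1, 2000))
```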
Results
Our results show that increasing feedback — via reciprocity — degrades key computational properties of recurrent neural networks, including memory capacity and representation diversity. Across all experiments, increasing link reciprocity consistently reduced memory capacity and kernel quality, with particularly pronounced and linear declines in sparse networks. When weights, sampled from a log-normal distribution, were assigned to binary networks, stronger weights amplified these reciprocity-driven impairments. Furthermore, enforcing “strength” reciprocity (reciprocity in connection weights) caused an exponential degradation of memory and representation quality. These effects were robust across network sizes and connection densities.
Discussion

Our study explores how structural (link) and weighted (strength) reciprocity limit the computational capacity of recurrent neural networks, explaining the underrepresentation of strong reciprocal connections in cortical circuits. Across various network architectures, we show that increasing reciprocity reduces memory capacity and kernel rank, both of which are essential for complex dynamics and internal representations. This effect persists, and often worsens, for log-normal weight heterogeneities. While higher weight variability boosts performance, it does not mitigate reciprocity’s effects. Beyond neuroscience, our findings have implications for the initialization and training of artificial RNNs and the design of neuromorphic architectures.



Acknowledgements

Funding of this work is gratefully acknowledged: F.H: DFG TRR169-A2, K.F: German Research Foundation (DFG)-SFB 936-178316478-A1; TRR169-A2; SPP 2041/GO 2888/2-2 and the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under grant TWCF-2022-30510. C.H: SFB 936-178316478-A1; TRR169-A2; SFB 1461/A4; SPP 1212 2041/HI 1286/7-1, the Human Brain Project, EU (SGA2, SGA3).
References


[1] https://doi.org/10.1038/34584
[2] https://doi.org/10.1101/2024.11.24.625064
[3] https://doi.org/10.1126/science.1091277
[4] https://doi.org/10.1016/j.neunet.2007.04.017


Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P106: Computational and Experimental Insights into Hippocampal Slice Spiking under Extracellular Stimulation
Sunday July 6, 2025 17:20 - 19:20 CEST
P106 Computational and Experimental Insights into Hippocampal Slice Spiking under Extracellular Stimulation

Sarah Hamdi Cherif*1,2,3, Mathilde Wullen4, Steven Le Cam2, Valentine Bouet4, Jean-Marie Billard4, Jérémie Gaidamour3, Laure Buhry1, Radu Ranta2

1Université de Lorraine, CNRS, LORIA, France
2Université de Lorraine, CNRS, CRAN, France
3Université de Lorraine, CNRS, IECL, France
4 Normandie Univ, UNICAEN, CHU Caen, INSERM, CYCERON, COMETE, France

*Email: sarah.hamdi-cherif@loria.fr

Introduction

Synaptic plasticity and neuronal excitability in the hippocampus (HC) are altered in schizophrenia [1]. Multi-electrode array (MEA) recordings following a long-term potentiation (LTP) protocol revealed local field potential (LFP) variations along physiological pathways and high-frequency (HF) activity near the stimulation site [2]. To understand the effect of extracellular stimulation (ES) and explore its relationship with synaptic activity and spike generation, we combined electrophysiological recordings and computational modelling. We applied ES to hippocampal slices around the Schaffer collaterals while recording signals near CA3 pyramidal cell bodies, and we developed a computational model to aid interpretation.
Methods
Experiments: in-depth glass microelectrode recordings in CA3 of healthy HC slices. 40 pulses (0.4 ms duration, 7 s apart) at 0.2 and 0.4 mA. Signals were processed and filtered above 300 Hz to isolate spiking activity.
Simulations: one multi-compartment CA3 pyramidal neuron model [3], with ES modelled using LFPy as a dipole [4], orthogonal to Schaffer collaterals. Background noise below spiking level was added to reflect environmental HC conditions. We varied the position of the stimulation along the axon and at different distances to explore spike variability. Synaptic inputs included excitatory (dendritic) and inhibitory (somatic) drive [5], generated by a variable-rate Poisson process, simulating activation of cells located closer to the ES.
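For orientation, the extracellular potential imposed on each compartment by such a stimulating dipole follows the standard point-source formula; the sketch below re-implements it directly (positions, amplitude, and conductivity are illustrative, and this is a plain restatement of the textbook formula, not LFPy's own API).

```python
import numpy as np

def point_source_potential(I_nA, src, xyz, sigma=0.3):
    """Extracellular potential (mV) of a monopolar current source of I_nA
    nanoamperes at position src (um), in an infinite homogeneous medium of
    conductivity sigma (S/m): phi = I / (4*pi*sigma*r)."""
    r = np.linalg.norm(np.asarray(xyz, float) - np.asarray(src, float), axis=-1)
    return I_nA / (4 * np.pi * sigma * r)

def dipole_stim_potential(I_nA, pos_plus, pos_minus, compartment_xyz, sigma=0.3):
    """Stimulating dipole modelled as two opposite monopoles; the summed
    potential at each compartment is what gets imposed as the
    extracellular boundary condition on the neuron model."""
    return (point_source_potential(I_nA, pos_plus, compartment_xyz, sigma)
            + point_source_potential(-I_nA, pos_minus, compartment_xyz, sigma))

# Potentials at three compartments for an illustrative 400 uA (4e5 nA) pulse:
phi = dipole_stim_potential(4e5, [0, 50, 0], [0, -50, 0],
                            [[100, 0, 0], [200, 0, 0], [400, 0, 0]])
```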
Results
In the experimental data, each ES pulse triggered a single spike, issued from the same cell according to tentative spike sorting [6]. Lower pulse intensity led to variable latencies. As intensity increased, spikes occurred earlier and became more synchronized (Fig. 1a).

According to our simulations (Fig. 1b), a cell activated directly by ES showed spike latencies of 0.25–4 ms, which were used to parametrize the Poisson process. The target cell, excited both by ES and by synaptic inputs, exhibited later and more dispersed latencies of 3–8 ms, closer to the experimental data, suggesting that the recordings capture activity from further layers. Higher intensity had the same effects as in the experimental data (Fig. 1c).
Discussion
Our findings suggest that the HF activity observed in the MEA recordings results from spiking activity propagating antidromically within CA3, activating recurrent excitatory networks. We plan more recordings to confirm our findings and will extend the model to reproduce LFP dynamics, focusing on synaptic activity. We also plan to simulate populations by reducing the cell models to point neurons and implementing a network parametrized with the observed spike latencies to approximate its dynamics. Ultimately, we aim to develop a comprehensive computational model of HC electrical activity and synaptic plasticity, in healthy and schizophrenia mouse models [7], to better understand the mechanisms involved.



Figure 1. (a) Experimental trials at 200 µA (left) and 400 µA (right). (b) Simulated membrane potentials at 100 µA (left) and 250 µA (right). Both trials and simulations last 50 ms, with stimulation starting at 5 ms. (c) Boxplots comparing spike latency variability across low/high intensities in both experiment (left) and simulation (right).
Acknowledgements
None.
References
[1] https://doi.org/10.3390/ijms22052644
[2] https://doi.org/10.12751/nncn.bc2024.244
[3] https://doi.org/10.1523/JNEUROSCI.1889-24.2025
[4] https://doi.org/10.1007/978-1-61779-170-3_8
[5] https://doi.org/10.3389/fncel.2013.00262
[6] https://doi.org/10.1088/1741-2552/acc210
[7] https://doi.org/10.1016/j.schres.2020.11.043
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P107: Towards A New Metric for Understanding Neural Population Encoding in Early Vision
Sunday July 6, 2025 17:20 - 19:20 CEST
P107 Towards A New Metric for Understanding Neural Population Encoding in Early Vision

Silviya Hasana*1, Simo Vanni1

1Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland

*Email: silviya.hasana@helsinki.fi

Introduction
Representational fidelity of vision can be evaluated using a decoding approach [1-3]; however, the method is difficult to interpret and quantify. This study aims to develop a quantitative metric for vision models based on neural population spike activity. By analyzing the relationship between stimulus features, spike timing, and receptive field locations, we investigate how population spike data encode information. We apply both deterministic and probabilistic approaches to evaluate neural population encoding and, subsequently, the quantitative decoding capacity for spatial stimulus information. A quantitative performance metric is fundamental for advancing functional computational vision models, such as the SMART system [4].



Methods
We used a macaque retina model with simulated ON and OFF parasol units in a 2D retinal patch. Stimuli were varied for spatial parameters, such as spatial frequency and orientation. First, we summed the spikes generated within 500 ms of grating stimulus onset for each unit separately. Then, we calculated the difference between ON- and OFF-unit spike counts and normalized the responses. To evaluate whether activation patterns aligned with the true stimulus, we binned the responses and applied a deterministic Gabor filter at different orientations. Subsequently, we plan to evaluate model performance using a Bayesian Ideal Observer, which models prior, likelihood, and posterior as a tuning curve for optimal stimulus decoding.
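A minimal sketch of the deterministic decoding step, assuming a square binned response map and an illustrative Gabor bank (spatial frequency, envelope width, and orientation sampling are not the study's values):

```python
import numpy as np

def gabor(size, theta, sf=0.1, sigma=5.0):
    """2D Gabor kernel at orientation theta (radians); sf in cycles/pixel."""
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    return np.exp(-(X**2 + Y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * sf * Xr)

def decode_orientation(response_map, thetas=np.linspace(0, np.pi, 36, endpoint=False)):
    """Deterministic decoder: correlate the binned, normalized ON-OFF
    response map with a Gabor bank and return the best-matching
    orientation (absolute value makes the match phase-insensitive)."""
    scores = [abs(np.sum(response_map * gabor(response_map.shape[0], th)))
              for th in thetas]
    return thetas[int(np.argmax(scores))]

# Decode the orientation of a noisy grating-like response map (~45 degrees):
theta_true = np.pi / 4
resp = gabor(64, theta_true) + 0.3 * np.random.default_rng(0).normal(size=(64, 64))
print(np.degrees(decode_orientation(resp)))
```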


Results
Our findings showed the presence of orientation-specific patterns in neural population activity, in both the deterministic and probabilistic approaches. Based on preliminary data analysis and processing in our deterministic approach, we expected a strong match between Gabor kernel predictions and true orientations for parasol ON and OFF cells. Our experiments tested 100 sweeps and obtained 100% accuracy for oblique orientation prediction, with a mean average error of 2.97 degrees. The high accuracy of the deterministic approach confirms that simple feature-based encoding mechanisms, such as Gabor filter matching, align well with neural responses in the parasol ON and OFF cells.


Discussion
As expected, the modeled retinal ganglion cell population encodes orientation in a structured manner that can be decoded based on receptive field positions in the visual field. Moving forward, we will explore a probabilistic approach by applying tuning curves through a Bayesian Ideal Observer to assess how reliably neural population spike activity encodes stimulus orientation, spatial frequency, and motion direction. The probabilistic approach will incorporate prior and likelihood to reconstruct stimulus orientation. The results will assess how the deterministic and probabilistic approaches complement each other and contribute to neural decoding, providing a quantitative metric for evaluating functional vision models.




Acknowledgements
This work has been supported by Academy of Finland grant No. 361816.
References
[1] https://doi.org/10.1371/journal.pcbi.1006897
[2] https://doi.org/10.1038/s41583-021-00502-3
[3] https://doi.org/10.1038/nrn2578
[4] https://doi.org/10.1016/j.brainres.2008.04.024

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P109: From point neurons to biophysically detailed networks: A data-driven framework for multi-scale modeling of brain circuits
Sunday July 6, 2025 17:20 - 19:20 CEST
P109 From point neurons to biophysically detailed networks: A data-driven framework for multi-scale modeling of brain circuits

Beatriz Herrera*1, Xiao-Ping Liu2, Shinya Ito1, Darrell Haufler1, Kael Dai1, Brian Kalmbach2, Anton Arkhipov1
1Allen Institute, Seattle WA, 98109, USA
2Allen Institute for Brain Science, Seattle WA, 98109, USA


*Email: beatriz.herrera@alleninstitute.org
Introduction
The Patch-seq technique links transcriptomics, neuron morphology, and electrophysiology in individual neurons. Recent work at the Allen Institute resulted in a comprehensive mouse and human neuron database with Patch-seq data for diverse cell types. In this study, we carry out a large-scale optimization of generalized leaky integrate-and-fire (GLIF) models on these Patch-seq data for thousands of cells. Furthermore, anticipating applications of models of diverse cell types for network simulations at multiple levels of resolution, we propose a strategy to convert point-neuron network models into detailed biophysical models.
Methods
GLIF models were obtained from the Patch-seq electrophysiology recordings, and we quantified the optimization performance for different types of current-injection stimuli, as well as comparing with an earlier approach [1].
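As a reminder of the model family being fitted, the sketch below simulates a minimal GLIF-style neuron with a spike-triggered adaptive threshold, one of the mechanisms in the GLIF hierarchy [1]; parameters are generic SI-unit placeholders, not fitted Patch-seq values.

```python
import numpy as np

def glif_trace(I, dt=1e-4, C=1e-10, g=1e-8, E_L=-0.07, V_th0=-0.05,
               dV_th=0.002, tau_th=0.05):
    """Leaky integration plus a spike-triggered, exponentially decaying
    threshold. I is the injected current (A) per time step; V, thresholds
    in volts. All parameter values are illustrative placeholders."""
    V, th = E_L, V_th0
    spikes, Vs = [], np.empty(len(I))
    for t, i_t in enumerate(I):
        V += dt * (-g * (V - E_L) + i_t) / C
        th += dt * (V_th0 - th) / tau_th       # threshold relaxes back to baseline
        if V >= th:
            spikes.append(t * dt)
            V, th = E_L, th + dV_th            # reset voltage, raise threshold
        Vs[t] = V
    return np.array(spikes), Vs

spikes, V = glif_trace(np.full(20_000, 3e-10))  # 2 s long, 300 pA step current
```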
The GLIF-to-biophysical conversion strategy involved (1) mapping each GLIF neuron to a biophysical neuron model; (2) replacing GLIF network parameters with corresponding biophysical parameters [2]; (3) estimating conversion factors to translate current-based synaptic weights to conductance-based weights for each source-target pair; and (4) constructing the biophysical network model using the scaling factors from (3).
Results
We find that optimizing GLIF models using long square step-current stimuli generalizes better to noise stimuli than vice versa. With this approach, we obtained GLIF models for a total of 6,460 cells from diverse types of both mouse and human glutamatergic neurons and GABAergic interneurons [3–6].
We tested our GLIF-to-biophysical network conversion on our V1 point-neuron model [2]. We simulated responses to pre-synaptic populations and calculated synaptic weight factors to match GLIF firing rates. We built the V1 biophysical model, fine-tuning weights to align with recordings and validating against in vivo Neuropixels data.
Discussion
Our work establishes the foundation for more comprehensive simulations of brain networks. We shed light on the relationships between genes and morpho-electrophysiological features by developing models for various cell types with available transcriptomic data from Patch-Seq experiments. Furthermore, our method for transforming point-neuron network models into detailed biophysical models will aid in developing and optimizing such complex models, as point-neuron networks are less computationally intensive and simpler to optimize for reproducing experimental data.



Acknowledgements
We thank the founder of the Allen Institute, Paul G. Allen, for his vision, encouragement, and support. This work was supported by the National Institutes of Health (NIH) under the following award nos.: NIBIB R01EB029813, NINDS R01NS122742 and U24NS124001, and NIMH U01MH130907. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
References
1. Teeter C, Iyer R, Menon V, et al. Nature Communications 2018; 9:
2. Billeh YN, Cai B, Gratiy SL, et al. Neuron 2020; 106:388-403.e18
3. Berg J, Sorensen SA, Ting JT, et al. Nature 2021; 598:151–158
4. Gouwens NW, Sorensen SA, Baftizadeh F, et al. Cell 2020; 183:935-953.e19
5. Chartrand T, Dalley R, Close J, et al. Science 2023; 382:eadf0805
6. Lee BR, Dalley R, Miller JA, et al. Science 2023; 382:eadf6484
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P110: What is membrane electrical excitability?
Sunday July 6, 2025 17:20 - 19:20 CEST
P110 What is membrane electrical excitability?

Lionardo Truqui2, Hector Chaparro-Reza1, Vania Victoria Villegas1, Marco Arieli Herrera-Valdez1*

1Dynamics, Biophysics, and Systems Physiology Laboratory, Universidad Nacional Autónoma de México.
2Graduate Program in Mathematics, Universidad Nacional Autónoma de México


*Email: marcoh@ciencias.unam.mx



Neuronal excitability is a phenomenon that is understood in different ways. A neuron may be regarded as more excitable than another if it responds to a stimulus with more action potentials within a fixed period of time. Another way to think about how excitable a neuron is could be to consider the delay with which it starts to respond to a given stimulus. We use the simplest, 2-dimensional, biophysical model of the neuronal membrane potential, based on two transmembrane currents carried by sodium and potassium ions, similar to the Morris-Lecar model [4] but without a leak current [1], to study the conditions that should be satisfied by an excitable system, and provide a formal definition of electrical excitability. The model consists of only two currents, a Na and a K current, since small currents are not necessary to generate action potentials [3]. We first establish the notion that a model based on autonomous evolution rules is associated with a family of dynamical systems. For instance, if the parameter representing the input current in the equation for the membrane potential is varied to describe experimental data in current-clamp experiments, the family is defined at least by the input current, and its members can be associated with different sets of trajectories in phase space. We then proceed to analyse the properties of single dynamical systems by examination of their underlying vector fields. In a similar way as originally proposed by FitzHugh [2], we define a region from which all trajectories are action potentials, and call it the Excitability Region. We also propose a measure to quantify the extent to which a single dynamical system is excitable, and then proceed to compare different degrees of excitability. Since the membrane potential of a neuron is represented by a family of dynamical systems, we then examine which of those systems are excitable under the above definition, and assess which ones are more excitable, as a function of the input current. While doing so, we explore the bifurcation structure of the model taking the input current as the bifurcation parameter, and characterize the changes in excitability induced by varying the sizes of the populations of ion channels. Having done so, we define neuronal excitability by extending our definition for a single dynamical system to the whole family in the model. We discuss how our measure of excitability behaves around attractor nodes and attractor foci, and also use our definitions to describe the I-F relations of types I, II, and III that have been used previously to characterize excitability.
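A minimal sketch of such a two-current model, using Morris-Lecar-like placeholder parameters (not the authors' values); sweeping the input current I generates the family of dynamical systems discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_current_model(t, y, I=90.0, gNa=4.4, gK=8.0, ENa=120.0, EK=-84.0,
                      C=20.0, phi=0.04):
    """2D membrane model with only a fast (instantaneous) inward current and
    a delayed-rectifier K current, and no leak, in the spirit of [1, 4].
    v in mV, w the K-channel open fraction; parameters are illustrative."""
    v, w = y
    m_inf = 0.5 * (1 + np.tanh((v + 1.2) / 18.0))   # instantaneous Na activation
    w_inf = 0.5 * (1 + np.tanh((v - 2.0) / 30.0))   # steady-state K activation
    tau_w = 1.0 / np.cosh((v - 2.0) / 60.0)
    dv = (I - gNa * m_inf * (v - ENa) - gK * w * (v - EK)) / C
    dw = phi * (w_inf - w) / tau_w
    return [dv, dw]

# One trajectory from a sub-threshold initial condition (default I = 90):
sol = solve_ivp(two_current_model, (0.0, 500.0), [-60.0, 0.0], max_step=0.5)
```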



Acknowledgements
Universidad Nacional Autónoma de México
References
[1] Av-Ron, E., Parnas, H., and Segel, L. A. (1991). A minimal biophysical model for an excitable and oscillatory neuron. Biological Cybernetics, 65(6):487–500.
[2] FitzHugh, R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophysical journal, 1(6):445–466.
[3] Herrera-Valdez, M. A. and Lega, J. (2011). Reduced models for the pacemaker dynamics of cardiac cells. Journal of Theoretical Biology, 270(1):164–176.
[4] Morris, C. and Lecar, H. (1981). Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal, 35:193–213.
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P111: A model of 3D eye movements reflecting spatial orientation estimation
Sunday July 6, 2025 17:20 - 19:20 CEST
P111 A model of 3D eye movements reflecting spatial orientation estimation

Yusuke Shinji1,Yutaka Hirata*2,3,4

1. Dept. Computer Science, Chubu Univ. Graduate School of Engineering, Kasugai, Japan
2. Dept. AI and Robotics, Chubu Univ. College of Science and Engineering, Kasugai, Japan
3. Center for mathematical science and artificial intelligence, Chubu Univ., Kasugai, Japan
4. Chubu University Academy of Emerging Sciences, Chubu Univ., Kasugai, Japan

*Email: yutaka@isc.chubu.ac.jp



Introduction
Spatial orientation (SO) refers to the estimated self-motion state, formed by integrating multiple sensory inputs. Accurate SO formation is crucial for animals to navigate safely through their environment. However, errors in SO estimation can occur, leading to spatial disorientation (SDO). A typical example of SDO is the somatogravic illusion, in which a forward linear acceleration is erroneously perceived as an upward tilt. The vestibulo-ocular reflex (VOR) is driven by the estimated head motion, generating counter-rotations of the eyes to stabilize vision. Thus, the VOR is a reflection of SO, particularly of head motion states. Here, we developed a 3D VOR model to elucidate the neural algorithms underlying SO formation.

Methods
The model was configured within the Kalman filter (KF) framework, which estimates hidden physical states from noisy sensor data. In this framework, active 3D head motion is the input to the sensors, while passive head motion and 3D visual motion are treated as process noise. These motions are detected by the otoliths, semicircular canals, and retina, whose outputs are transmitted to the brain. The KF incorporates corresponding sensor models that generate sensory predictions, which are compared with the actual sensory outputs. The resulting sensory prediction errors are used to update the estimated head motion state through the KF algorithm. The VOR eye velocity is then produced in the direction opposite to the 3D head motion estimate.
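The core predict-correct cycle takes a standard form; the sketch below is generic, with the state, dynamics A, and observation model H left abstract (in the actual model these would encode 3D head motion and the otolith/canal/retina sensor models).

```python
import numpy as np

def kf_step(x, P, z, A, H, Q, R):
    """One Kalman-filter cycle: predict the state from the internal dynamics
    model, then correct it with the sensory prediction error."""
    x_pred = A @ x                        # prior state estimate
    P_pred = A @ P @ A.T + Q              # prior covariance
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    innov = z - H @ x_pred                # sensory prediction error
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2-state example: x = [tilt, translation]; a single otolith-like channel
# senses their sum, which is exactly the ambiguity the filter must resolve.
A, H = np.eye(2), np.array([[1.0, 1.0]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, np.array([0.3]), A, H, 1e-3 * np.eye(2), np.array([[1e-2]]))
```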

Results
To evaluate the model, we first simulated the somatogravic illusion in goldfish that we recently discovered [1]. The model successfully reproduced the goldfish 3D VOR reflecting the somatogravic illusion. Specifically, in the KF, lateral linear head acceleration was misestimated as head roll tilt, resulting in a vertical VOR in goldfish, while head roll tilt motion was correctly estimated as roll tilt. Next, we simulated two representative human vestibular illusions: off-vertical-axis rotation at a constant angular velocity, and post-rotatory tilt after earth-vertical-axis rotation. In both cases, the model misestimated linear head acceleration, reproducing known perceptual errors in humans.

Discussion
These results suggest that our 3D VOR KF model effectively captures the neural computational mechanisms underlying SO formation from noisy sensory signals. Previous studies have demonstrated that the cerebellar nodulus and uvula play a critical role in SO formation, specifically in distinguishing head tilt against gravity from linear translational head acceleration [2]. As a next step, we will investigate how the well-characterized cerebellar neuronal circuitry and its synaptic learning rules implement the KF algorithm, utilizing our artificial cerebellum [3]. Understanding this relationship will provide insights into how the brain optimally estimates self-motion and resolves sensory ambiguities.





Acknowledgements
Supported by JST CREST (Grant Number: JPMJCR22P5) and JSPS KAKENHI (Grant Number: 24H02338)


References
1. Tadokoro, S., Shinji, Y., Yamanaka, T., Hirata, Y. (2024). Learning capabilities to resolve tilt-translation ambiguity in goldfish. Front Neurol, 15:1304496. https://doi.org/10.3389/fneur.2024.1304496
2. Laurens, J. (2022). The otolith vermis: A systems neuroscience theory of the Nodulus and Uvula. Front Neurosci, 16:886284. https://doi.org/10.3389/fnsys.2022.886284
3. Shinji, Y., Okuno, T., Hirata, Y. (2024). Artificial cerebellum on FPGA: realistic real-time cerebellar spiking neural network model capable of real-world adaptive motor control. Front Neurosci, 18:1220908. https://doi.org/10.3389/fnins.2024.1220908

Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P112: Dopamine effects on firing patterns of in vitro hippocampal neurons
Sunday July 6, 2025 17:20 - 19:20 CEST
P112 Dopamine effects on firing patterns of in vitro hippocampal neurons

Huu Hoang*1, Aurelio Cortese2

1Neural Information Analysis Laboratories, ATR Institute International, Kyoto, Japan
2Computational Neuroscience Laboratories, ATR Institute International, Kyoto, Japan

*Email: hoang@atr.jp

Introduction: Dopamine plays a pivotal role in shaping hippocampal neural activity, yet its impact on network dynamics is not fully understood. We explored this by recording rat embryonic hippocampal neurons in vitro, using electrophysiological techniques, pharmacological manipulations, and spike train analysis. Our findings reveal that dopamine reduces network synchrony, broadening the range of burst dynamics—an effect absent with dopamine antagonists. This study deepens our insight into how dopamine signaling shapes functional hippocampal networks.

Methods: We cultured eight rat embryonic hippocampus samples and used MaxOne microelectrode arrays to record spiking activity from hundreds of electrodes. Baseline spikes were captured without dopamine; dopamine was then added gradually, and spikes were recorded. We assessed synchrony strength via spike coherence in 1 ms bins and examined dopamine’s effect on spike dynamics. Spike bursts (typically 200-300 ms long) were detected, and their similarity index was measured. Using affinity propagation on the similarity index, we identified repeating burst motifs, revealing insights into burst dynamics. We used linear mixed-effects models to statistically evaluate the influence of dopamine on the metrics of interest.
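The motif-identification step can be sketched with off-the-shelf tools: pairwise burst correlations as the similarity index, clustered with affinity propagation. The data below are synthetic, and bin sizes and counts are illustrative, not the study's.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Rows of `bursts` are binned population spike counts for individual burst
# events; their pairwise correlation serves as the similarity index.
rng = np.random.default_rng(0)
bursts = rng.poisson(2.0, size=(60, 250))          # 60 bursts x 250 one-ms bins
similarity = np.corrcoef(bursts)                   # similarity index
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(similarity)
motif_labels = ap.labels_                          # cluster = repeating burst motif
```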

Results: Our study revealed that dopamine lowers synchrony strength, enhances network modularity, and restricts connectivity within modules, while broadening the variety of burst patterns. At higher dopamine concentrations (300-1000 μM), burst frequency rose, yet burst similarity dropped, with repeating motifs surging 40-50% above baseline. The reduction in synchrony caused by dopamine directly lessened burst pattern similarity, shown by a robust positive correlation between synchrony and similarity changes in eight samples. This relationship disappeared in samples treated with dopamine antagonists, underscoring dopamine’s critical influence on reorganizing network dynamics and its possible role in cognitive processes.

Discussion: We investigated dopamine’s impact on cultured hippocampal neurons using high-density electrode arrays, observing a rise in burst events with pronounced synchrony across hundreds of electrodes after incrementally adding dopamine, consistent with previous studies. This setup provided detailed network-level insights with cellular and millisecond precision, showing that dopamine reduced spike synchrony while increasing the number of network modules with more restricted connectivity. Such reorganization may optimize information flow for cognitive functions like memory and decision-making. Dopamine also diversified burst patterns, boosting repeating motifs and lowering burst similarity—an effect blocked by antagonists. These findings suggest dopamine enhances distinct encoding in hippocampal circuits, offering potential implications for understanding cognition and schizophrenia therapies.



Acknowledgements
This study was supported by JST ERATO (JPMJER1801, "Brain-AI hybrid").

References
Hoang H, Matsumoto N, Miyano M, Ikegaya Y, Cortese A. (2025). Dopamine-induced relaxation of spike synchrony diversifies burst patterns in cultured hippocampal networks. Neural Networks, 181:106888. https://doi.org/10.1016/j.neunet.2024.106888

Speakers
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P113: Baseline firing rate of dopaminergic neurons modulates the dopamine response to stimuli
Sunday July 6, 2025 17:20 - 19:20 CEST
P113 Baseline firing rate of dopaminergic neurons modulates the dopamine response to stimuli

M. Duc Hoang*1, Andrea R. Hamilton2, Timothy J. Lewis1, Stephen L. Cowen3,4, and M. Leandro Heien2


1Department of Mathematics, University of California, Davis, Davis, CA, USA
2Department of Chemistry & Biochemistry, University of Arizona, Tucson, AZ, USA
3Department of Psychology, University of Arizona, Tucson, AZ, USA
4Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, USA

*Email: mdhoang@ucdavis.edu

Introduction

Electrical brain stimulation (EBS) targeting dopaminergic (DA) neurons is a valuable tool for investigating dopamine circuits and treating diseases such as Parkinson's disease [1]. However, our understanding of how the temporal structure of stimuli interacts with the firing dynamics of DA neurons to regulate dopamine release remains limited. In this study, we experimentally measure changes in dopamine concentration in response to stimulation of the medial forebrain bundle and develop a data-driven mathematical model to describe this stimulus-evoked dopamine response. Our results demonstrate that the baseline firing rate (BFR) of DA neurons prior to electrical stimulation can strongly modulate the DA response.

Methods
In this study, we use fast-scan cyclic voltammetry (FSCV) to measure changes in dopamine (DA) concentration in the nucleus accumbens (NAc) in response to stimulation of the medial forebrain bundle (MFB) in anesthetized rats [2]. We then implement a modification of the Montague et al. model [3] of DA response to electrical stimulation of DA axons in the MFB. The model includes synaptic facilitation and depression, as well as feedback inhibition through D2 autoreceptors (D2AR). We fit model parameters to the FSCV data from multiple stimulation patterns simultaneously. Importantly, we account for the unknown baseline DA levels in our parameter fits. We also validate the model with additional experimental data sets.
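To convey the model's ingredients, here is a schematic release-and-reuptake simulation in the spirit of the modified Montague et al. model [3], with facilitation, depression, Michaelis-Menten reuptake, and a D2AR release-scaling term; all parameter values are placeholders, not the fitted ones.

```python
import numpy as np

def da_response(stim_times, T=10.0, dt=1e-3, r0=0.05, tau_f=0.2, tau_d=2.0,
                Vmax=4.0, Km=0.2, k_d2=1.5):
    """Each stimulation pulse releases r0 * F * D of dopamine (scaled down by
    a D2-autoreceptor term that grows with recent [DA]); facilitation F and
    depression D relax between pulses; reuptake is Michaelis-Menten."""
    n = int(T / dt)
    da = np.zeros(n)
    pulse = np.zeros(n)
    pulse[(np.asarray(stim_times) / dt).astype(int)] = 1.0
    F, D = 1.0, 1.0
    for t in range(1, n):
        release = r0 * F * D / (1.0 + k_d2 * da[t - 1]) * pulse[t]
        uptake = Vmax * da[t - 1] / (Km + da[t - 1])
        da[t] = max(da[t - 1] + release - uptake * dt, 0.0)
        F += dt * (1.0 - F) / tau_f + 0.2 * pulse[t]       # facilitation jumps on pulses
        D += dt * (1.0 - D) / tau_d - 0.15 * D * pulse[t]  # depression depletes on pulses
    return da

da = da_response(np.arange(1.0, 3.0, 0.05))  # 20 Hz train for 2 s
```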
Results
We observe a high degree of variability in the dopamine response in the NAc when the MFB is subjected to identical 20 Hz stimuli across several trials. Specifically, the peak change in DA concentration differs by ~40% between trials (Fig. 1A). Simulations of our model show a similarly large variation in peak DA concentration in response to 20 Hz stimulation when the baseline firing rate (BFR) of simulated DA neurons is varied from 0 to 6 Hz, even though the corresponding variation in baseline DA concentration is below 0.02 µM (Fig. 1B). We use phase-plane analysis to elucidate the mechanism underlying this phenomenon and describe how the phenomenon is influenced by BFR, dopamine reuptake, and D2AR inhibition.
Discussion
Our experimental and modeling results suggest that small fluctuations in baseline DA concentrations in the NAc due to changes in BFR of DA neurons, D2AR levels, or DA reuptake rates can significantly alter the DA response to MFB stimulation. Analysis of the model reveals that the underlying mechanism of this phenomenon involves the interplay of the firing rate of DA neurons, DA reuptake dynamics, and synaptic depression. These findings underscore the importance of BFR in modulating dopamine release during EBS, suggesting that BFR may influence the efficacy of EBS in treating disorders such as Parkinson’s disease, depression, and schizophrenia. This insight could inform the optimization of EBS protocols for therapeutic applications.



Figure 1. The baseline firing rate (BFR) leads to substantial variation in DA response to identical stimulation. A: The change in DA in the NAc in response to a 20 Hz periodic stimulation applied to the MFB. 3 trials are shown of the same stimulus. B: DA concentration profiles predicted by the modified Montague et al. model in response to 20 Hz stimulation for 5 different BFRs (0-6 Hz).
Acknowledgements
Funding for this project is provided by the National Institutes of Health (R01 NS123424-01). A.R.H. was funded by T32 GM008804.
References
[1] https://doi.org/10.3171/2019.4.JNS181761
[2] https://doi.org/10.1021/acschemneuro.4c00115
[3] https://doi.org/10.1523/JNEUROSCI.4279-03.2004
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti

17:20 CEST

P141: Intrinsic neuronal properties shape local circuit inhibition in primate prefrontal cortex
Sunday July 6, 2025 17:20 - 19:20 CEST
P141 Intrinsic neuronal properties shape local circuit inhibition in primate prefrontal cortex

Nils A. Koch*1, Benjamin W. Corrigan2,3,4, Julio C. Martinez-Trujillo3,4,5, Anmar Khadra1,6

1Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
2Department of Biology, York University, Toronto, ON, Canada
3Department of Clinical Neurological Sciences, London Health Sciences Centre, Western University, London, ON, Canada
4Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
5Western Institute for Neuroscience, Western University, London, ON, Canada
6Department of Physiology, McGill University, Montreal, QC, Canada

*Email: nils.koch@mail.mcgill.ca
Introduction

Intrinsic neuronal properties play a key role in neuronal circuit dynamics. One such property evident during step-current stimulation is intrinsic spike frequency adaptation (I-SFA), a feature noted to be important for in vivo activity [1] and computational capabilities of neurons [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. In behaving animals, extracellular recordings exhibit extrinsic spike frequency adaptation (E-SFA) in response to sustained visual stimulation. However, the relationship between the I-SFA measured in vitro, typically in response to constant step-current pulses, and the E-SFA described in vivo during behavioral tasks, in which the inputs into a neuron are likely variable and difficult to measure, is not well characterized.
Methods
To investigate how I-SFA in neurons isolated from brain networks contributes to E-SFA during behavior, we recorded responses of macaque lateral prefrontal cortex neurons in vivo during a visually guided saccade task and in acute brain slices in vitro. Units recorded in vivo and neurons recorded in vitro were classified as broad-spiking (BS) putative pyramidal cells and narrow-spiking (NS) putative inhibitory interneurons based on spike width. To elucidate how in vitro I-SFA contributes to in vivo E-SFA, we bridge the gap between the in vivo and in vitro recordings with a data-driven hybrid circuit model in which NS neurons fit to the in vitro firing behavior are driven by local BS input.
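For intuition about the I-SFA ingredient of the hybrid model, a minimal adaptive leaky integrate-and-fire sketch (normalized units, generic parameters rather than the fitted NS/BS models) shows how a spike-triggered adaptation variable produces a decaying instantaneous rate under a step input.

```python
import numpy as np

def adaptive_lif_rates(I=1.5, dt=1e-4, T=1.0, tau_m=0.01, tau_a=0.2, b=0.3):
    """LIF neuron with a spike-triggered adaptation current: each spike
    increments a dimensionless adaptation variable a, which decays with
    tau_a and subtracts from the drive, yielding I-SFA to a step input."""
    v, a, spikes = 0.0, 0.0, []
    for k in range(int(T / dt)):
        v += dt * (-v - a + I) / tau_m
        a += dt * (-a / tau_a)
        if v >= 1.0:                      # threshold (normalized units)
            spikes.append(k * dt)
            v, a = 0.0, a + b             # reset voltage, increment adaptation
    isi = np.diff(spikes)
    return np.array(spikes), 1.0 / isi    # instantaneous rate decays over ~tau_a

spikes, rates = adaptive_lif_rates()
```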
Results
Both BS and NS units exhibited E-SFA in vivo. In acute brain slices, both cell types displayed differing magnitudes of I-SFA but with timescales similar to E-SFA. The model NS cell responses show longer SFA than observed in vivo. However, introduction of inhibition of NS cells to the model circuit removed this discrepancy and reproduced the in vivo E-SFA, suggesting a vital role of local circuitry in dictating task-related in vivo activity. By exploring the relationship between individual neuron I-SFA and hybrid circuit model E-SFA, the contribution of I-SFA to E-SFA is uncovered. Specifically, this contribution is dependent on the timescale of I-SFA and modulates in vivo response magnitudes as well as E-SFA timescales.
Discussion
Our results indicate that both I-SFA and inhibitory circuit dynamics contribute to E-SFA in LPFC neurons during a visual task and highlight the contribution of both single neurons and network dependent computations to neural activity underlying behavior. Furthermore, the interaction between excitatory input and I-SFA demonstrates that inhibitory cortical neurons do not solely contribute to the local circuit inhibition by altering the sign of signals (i.e. from excitation to inhibition) and that the intrinsic properties of NS neurons contribute to their activity in vivo. Consequently, large models of cortical networks as well as artificial neuronal nets that emphasize network connectivity may benefit from including intrinsic neuronal properties.



Acknowledgements
This work was supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant to A.K.; Canadian Institutes of Health Research (CIHR), NSERC, Neuronex (ref. FL6GV84CKN57) and BrainsCAN grants to J.C.M.-T.; and an NSERC Postgraduate Scholarship-Doctoral Fellowship to N.A.K..
References
1. https://doi.org/10.3934/mbe.2016002
2. https://doi.org/10.1016/j.cub.2020.11.054
3. https://doi.org/10.1007/s10827-007-0044-8
4. https://doi.org/10.1523/JNEUROSCI.4795-04.2005
5. https://doi.org/10.1038/s41467-017-02453-9
6. https://doi.org/10.1523/ENEURO.0305-18.2020
7. https://doi.org/10.1016/j.conb.2013.11.012
8. https://doi.org/10.1016/j.biosystems.2022.104802
9. https://doi.org/10.1007/s00422-009-0304-y
10. https://doi.org/10.1523/JNEUROSCI.1792-08.2008
11. https://doi.org/10.1016/j.neuron.2016.09.046
Sunday July 6, 2025 17:20 - 19:20 CEST
Passi Perduti
 
Monday, July 7
 

08:30 CEST

Registration
Monday July 7, 2025 08:30 - 19:00 CEST
Monday July 7, 2025 08:30 - 19:00 CEST

09:00 CEST

Announcements and Keynote #3: Ken Miller
Monday July 7, 2025 09:00 - 10:10 CEST
Speakers
Monday July 7, 2025 09:00 - 10:10 CEST
Auditorium - Plenary Room

10:10 CEST

Coffee break
Monday July 7, 2025 10:10 - 10:40 CEST
Monday July 7, 2025 10:10 - 10:40 CEST

10:40 CEST

Oral session 3: Perturbing the brain
Monday July 7, 2025 10:40 - 12:30 CEST
Monday July 7, 2025 10:40 - 12:30 CEST
Auditorium - Plenary Room

10:41 CEST

FO3: Single-cell optogenetic perturbations reveal stimulus-dependent network interactions
Monday July 7, 2025 10:41 - 11:10 CEST
Single-cell optogenetic perturbations reveal stimulus-dependent network interactions

Deyue Kong*1, Joe Barreto2, Greg Bond2, Matthias Kaschube1, Benjamin Scholl2

1Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
2University of Colorado Anschutz Medical Campus, Department of Physiology and Biophysics, Aurora, Colorado, USA

*Email: kong@fias.uni-frankfurt.de


Introduction
Cortical computations arise through neuronal interactions and their dynamic reconfiguration in response to changing sensory contexts. Cortical interactions are proposed to engage distinct operational regimes that either amplify or suppress particular neuronal networks. A recent study in mouse primary visual cortex (V1) found competitive, suppressive interactions between nearby, similarly-tuned neurons, with the exception of highly correlated neuronal pairs, which showed facilitatory coupling [1]. It remains unclear whether such feature competition generalizes to cortical circuits with topographic organization, where neighboring neurons within columns exhibit similar tuning to visual features, and distal excitatory axons preferentially target similarly-tuned columns.
Methods
We investigated interactions between excitatory neurons in the ferret V1 and how network interactions depend on stimulus strength (contrast). We recorded the responses of layer 2/3 neurons to drifting gratings of eight directions at two contrast levels using 2-photon calcium imaging, while activating individual excitatory neurons with precise 2-photon optogenetics. We statistically quantified the effect of target photostimulation on neural activity (inferred spike rate) during visual stimulation using a Poisson generalized linear model (GLM). We then used our model to estimate a target’s influence on the surrounding neurons’ activity and their stimulus coding properties.
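The influence estimate can be illustrated with a standard Poisson GLM; the design below is hypothetical (the regressors and coefficients are invented for the demonstration, not the study's fitted model).

```python
import numpy as np
import statsmodels.api as sm

# Does photostimulation of a target change a follower neuron's inferred spike
# count beyond what the visual stimulus explains? Fit a Poisson GLM with
# stimulus regressors plus a photostim indicator; the indicator's fitted
# coefficient is the 'influence' estimate.
rng = np.random.default_rng(0)
n_trials = 400
direction = rng.integers(0, 8, n_trials)           # 8 grating directions
contrast = rng.integers(0, 2, n_trials)            # low / high contrast
photostim = rng.integers(0, 2, n_trials)           # target stimulated or not
X = sm.add_constant(np.column_stack([
    np.cos(2 * np.pi * direction / 8), np.sin(2 * np.pi * direction / 8),
    contrast, photostim]))
rate = np.exp(0.5 + 0.3 * X[:, 1] + 0.2 * X[:, 3] - 0.4 * X[:, 4])  # ground truth
counts = rng.poisson(rate)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params[4], fit.pvalues[4])               # photostim influence (suppressive here)
```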
Results
Our analyses revealed interactions that depended on cortical distance, stimulus properties, and functional similarity between neuron pairs. The influence of photostimulated neurons depended strongly on cortical distance but overall exhibited net suppression. Suppression was weakest between nearby neurons (<100 µm) but was found across large cortical distances. Distance-dependent suppression was reduced when visual stimuli were low contrast. Examining functionally similar neurons, we found that noise correlations between neuron pairs were most predictive of measured interactions, showing a strong shift from amplification to competition: at low contrast, we observed local amplification between noise-correlated excitatory neurons, but increasing contrast led to a predominantly suppressive influence across all distances.
Discussion
Our data support predictions from theoretical models, such as stabilized supralinear networks (SSN), in which networks amplify weak feed-forward input, but sublinearly integrate strong inputs [2,3]. Furthermore, decoding analyses suggest that the contrast-dependent shift from facilitation to suppression correlates with improved decoding accuracy of direction. These findings demonstrate that stimulus contrast dynamically modulates recurrent interactions between excitatory neurons in ferret V1, likely by differentially engaging inhibitory neurons. Such dynamic modulation supports optimal encoding of sensory information within columnar cortices.




Acknowledgements

References
[1] Chettih, SN, Harvey, CD. Single-neuron perturbations reveal feature-specific competition in V1. Nature (2019). doi:10.1038/s41586-019-0997-6
[2] Rubin DB, Van Hooser SD, Miller KD. The stabilized supralinear network: a unifying circuit motif underlying multi-input integration in sensory cortex. Neuron (2015). doi:10.1016/j.neuron.2014.12.026. PMID: 25611511; PMCID: PMC4344127.
[3] Heeger DJ, Zemlianova KO. A recurrent circuit implements normalization, simulating the dynamics of V1 activity. PNAS (2020). doi:10.1073/pnas.2005417117. PMID: 32843341; PMCID: PMC7486719.
Speakers
Monday July 7, 2025 10:41 - 11:10 CEST
Auditorium - Plenary Room

11:10 CEST

O9: Predicting neural responses to intra- and extracranial electric brain stimulation by means of the reciprocity theorem
Monday July 7, 2025 11:10 - 11:30 CEST
Predicting neural responses to intra- and extracranial electric brain stimulation by means of the reciprocity theorem

Torbjørn V. Ness*¹, Christof Koch², Gaute T. Einevoll¹,³
¹ Department of Physics, Norwegian University of Life Sciences, Ås, Norway
² Allen Institute, Seattle, WA, USA
³ Department of Physics, University of Oslo, Oslo, Norway

*Email: gaute.einevoll@nmbu.no



Introduction
Neural activity can be modulated through electric stimulation (ES), which is extensively used in both science and the clinic, including deep brain stimulation and temporal interference stimulation. While ES is grounded in well-established biophysics, it has proven difficult to gain a solid understanding of ES and its sensitivity to features like location, orientation, different cell types, and the ES frequency-content. This represents a major obstacle to the applications of ES.
Here, we show that the reciprocity theorem (RT) can be applied more broadly than previously recognized [1], offering a whole new perspective on ES which reproduces known features, explains surprising observations, and makes new predictions.



Methods
The effect of ES on different biophysically detailed cell models is simulated with NEURON [2] and LFPy [3]. The ES is treated as a current point source which sets up an extracellular potential that is used as a boundary condition at each cellular compartment. The somatic membrane potential response Vm is calculated. In the RT-based approach, the current is instead inserted intracellularly in the soma, and the resulting extracellular potential Ve is calculated. According to the RT, the two approaches should give identical results for passive cell models (Vm = Ve, Fig. 1). For transcranial electric stimulation (tES), we used a detailed head model to estimate membrane potential responses to tES deep in the brain.
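In symbols, the relation exploited here, together with the standard near- and far-field scaling of the extracellular potential that underlies the distance dependence reported below (σ is the extracellular conductivity, p the current dipole moment; a schematic restatement, not the paper's notation):

```latex
% Reciprocity for a passive membrane: the somatic response to an
% extracellular point current at r equals the extracellular potential
% at r generated by the same current injected into the soma.
V_m^{\mathrm{soma}}\!\left(I \;\text{at}\; \mathbf{r}\right)
  \;=\; V_e\!\left(\mathbf{r} \;\middle|\; I \;\text{in soma}\right),
\qquad
V_e \sim \frac{I}{4\pi\sigma r} \;\;(\text{near}),
\qquad
V_e \sim \frac{p\cos\theta}{4\pi\sigma r^{2}} \;\;(\text{far}).
```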
Results
In all tested cases the RT-based approach to simulating ES introduces zero error for passive cell models, and below a few percent error for subthreshold active cell models [1].
By leveraging the RT, we show that the effect of ES has a 1/r decay for nearby neurons and a 1/r² decay for distant neurons. Furthermore, for nearby neurons the ES response is approximately cell-type- and frequency-independent, while for distant neurons (e.g., tES), pyramidal neurons are most strongly targeted at low frequencies and interneurons at high frequencies, but with a less synchronous effect [1]. Finally, tES at conventional safety limits (<4 mA) induces subthreshold potential changes of ~40-200 µV, far below the threshold for direct neuronal firing [1].


Discussion
By applying the RT, we provide a framework for understanding neural responses to ES that leverages our good understanding of extracellular potentials [4]. Our results indicate that conventional tES primarily affects neural activity via subtle subthreshold effects, suggesting indirect network-level mechanisms such as synchronization or stochastic resonance. The weak frequency dependence of subthreshold responses explains recent experimental findings [5], reinforcing RT as a powerful tool for modeling ES. Future work should incorporate network-level dynamics to assess the broader implications of these findings for neuromodulation and brain stimulation therapies.




Figure 1. Reciprocity theorem in the context of electrical brain stimulation: the somatic membrane potential response to an extracellular current injection at position r (panel A) corresponds to the extracellular potential at location r resulting from the same current injected into the soma.
Acknowledgements
T.V.N. and G.T.E. received funding from the European Union Horizon 2020 Research and Innovation Programme under Grant Agreement No. 101147319 [EBRAINS 2.0]. C.K. thanks the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support.
References
[1] Ness et al. (2025) bioRxiv https://doi.org/10.1101/2024.08.04.603691
[2] The NEURON book https://doi.org/10.1017/CBO9780511541612
[3] Hagen et al. (2018) https://doi.org/10.3389/fninf.2018.00092
[4] Halnes et al. (2024) https://doi.org/10.1017/9781009039826
[5] Lee et al. (2024) Neuron https://doi.org/10.1016/j.neuron.2024.05.009
Monday July 7, 2025 11:10 - 11:30 CEST
Auditorium - Plenary Room

11:30 CEST

O10: Time precise closed-loop protocols for non-invasive infrared laser neural stimulation
Monday July 7, 2025 11:30 - 11:50 CEST
Time precise closed-loop protocols for non-invasive infrared laser neural stimulation

Alicia Garrido-Peña¹*, Pablo Sanchez-Martin¹, Irene Elices¹, Rafael Levi¹, Francisco B. Rodriguez¹, Javier Castilla², Jesus Tornero², Pablo Varona¹
1. Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politecnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
2. Center for Clinical Neuroscience, Hospital los Madroños, Brunete, Spain
*Email: alicia.garrido@uam.es

Introduction

In the context of the increasing interest in noninvasive neural stimulation techniques, infrared (IR) laser light has shown its potential to achieve effective and spatially localized stimulation. In our recent publication [1], we demonstrated that it is possible to modulate neural dynamics with a continuous-wave (CW) near-IR laser in terms of firing rate and spike waveform. We analyzed the biophysical cause of this effect in a computational model, observing a combined alteration of ionic channels and a significant contribution of temperature change. We also assessed the illumination effect at different stages of action potential generation with a closed-loop protocol. Here we extend these results and stimulation protocols.

Methods

We used a CW-IR laser focused with a micromanipulator on the ganglia of Lymnaea stagnalis and Carcinus maenas, while recording membrane potential intracellularly. For the closed-loop protocols, we employed the RTXI software [2], designing modular algorithms for spike and burst prediction, target-driven stimulation, and neuronal digital-twin experiments built with conductance-based models. The laser was triggered based on the ongoing activity with a precise electro-optical shutter (µs range). Our software is open-source and available at github.com/GNB-UAM.
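A minimal sketch of the activity-dependent triggering logic, assuming a simple threshold-crossing spike predictor; the threshold, pulse width, and sampling rate are hypothetical placeholders, and the actual protocols run as hard real-time RTXI modules.

```python
import numpy as np

THRESHOLD_MV = -20.0   # upward crossing taken to predict an upcoming spike (illustrative)
PULSE_MS = 5.0         # illumination window after a prediction (illustrative)
FS_KHZ = 10.0          # acquisition rate (illustrative)

def run_closed_loop(v_trace):
    """Open the electro-optical shutter on an upward threshold crossing
    and close it PULSE_MS later; returns the shutter state per sample."""
    shutter = np.zeros(len(v_trace), dtype=bool)
    t_open = -np.inf
    for i in range(1, len(v_trace)):
        t_ms = i / FS_KHZ
        if v_trace[i - 1] < THRESHOLD_MV <= v_trace[i]:
            t_open = t_ms                          # trigger the shutter
        shutter[i] = (t_ms - t_open) < PULSE_MS    # keep it open for the pulse
    return shutter
```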

Results

We show the effectiveness of laser stimulation with high spatial resolution (by the nature of this technique) and temporal precision (by our real-time closed-loop protocols and an electro-optical shutter) in two different neural systems. In both cases, sustained CW laser illumination accelerates neural activity. We also report on the efficacy of activity-dependent illumination with a time-precise shutter. We extend our spike prediction algorithms to neural bursting activity and to target-driven stimulation. To leverage this closed-loop neurotechnology, we present a biohybrid circuit in which a model neuron acts as a digital twin of the recorded cell. This enables real-time automatic decision-making based on the response tested in the model.

Discussion

CW-IR laser illumination is a novel noninvasive neurotechnology with high temporal and spatial resolution. It effectively modulated neural activity in two different neural systems under distinct closed-loop protocols. These protocols employ a library for time-precise stimulation adaptable to different neural dynamics (e.g., tonic spiking or bursting activity) and different target goals (a specific firing rate, burst duration, etc.). The digital-twin model also enables online adjustment through the combined modification of the stimulation parameters and the simulated neuron. Exploiting the advantages of closed-loop stimulation and real-time tools expands the possibilities of this neurotechnology toward novel research and clinical applications.
Acknowledgements
Work funded by PID2024-155923NB-I00, CPP2023-010818, PID2023-149669NB-I00 and PID2021-122347NB-I00.
References
[1] Garrido-Peña, A., Sanchez-Martin, P., Reyes-Sanchez, M., Levi, R., Rodriguez, F. B., Castilla, J., Tornero, J., & Varona, P. (2024). Modulation of neuronal dynamics by sustained and activity-dependent continuous-wave near-infrared laser stimulation. Neurophotonics, 11(2), 024308. https://doi.org/10.1117/1.NPh.11.2.024308

[2] Patel, Y. A., George, A., Dorval, A. D., White, J. A., Christini, D. J., & Butera, R. J. (2017). Hard real-time closed-loop electrophysiology with the Real-Time eXperiment Interface (RTXI). PLOS Computational Biology, 13(5), e1005430. https://doi.org/10.1371/journal.pcbi.1005430
Monday July 7, 2025 11:30 - 11:50 CEST
Auditorium - Plenary Room

11:50 CEST

O11: LTP-induced changes in spine geometry and actin dynamics on the timescale of the synaptic tag
Monday July 7, 2025 11:50 - 12:10 CEST
LTP-induced changes in spine geometry and actin dynamics on the timescale of the synaptic tag

Mitha Thomas*1, Cristian Alexandru Bogaciu2, Silvio Rizzoli2, Michael Fauth1

1Third Physics Institute, Georg-August University, Goettingen, Germany
2Department of Neuro- and Sensory Physiology, University Medical Center, Goettingen, Germany
*Email: mitha.thomas@phys.uni-goettingen.de

Introduction

Long-term potentiation of synapses can occur in two phases: an early phase which constitutes a transient increase in synaptic strength, and a late phase which sustains this increase for a longer duration. According to the synaptic tagging and capture hypothesis [1,2], a necessary condition for the late phase is the formation of a transient memory of the stimulation event - the ‘synaptic tag’ - which enables the synapse to capture newly synthesized proteins later on. What implements this transient memory on the timescale of hours remains elusive [2,3]. We follow the hypothesis that it is implemented by actin dynamics in interaction with spine geometry and test this using computational modelling and FRAP experiments.

Methods
Actin forms filaments in the spine which belong to distinct pools (dynamic and stable) with different turnover rates. To study the relation of actin pools with synaptic tagging, we derived a computational model of the interactions between the spine membrane and the actin pools undergoing plasticity. Dynamic actin is modelled as a Markov chain that considers several processes related to actin binding proteins, e.g., branching, capping and severing, which are modulated upon LTP [4, 5]. Stable actin is modelled as a low-pass filter of the dynamic pool with filter coefficients following binding and unbinding of crosslinking proteins. The spine membrane deforms according to the balance between the actin-generated force and the forces resulting from the physical properties of the membrane (Fig 1A-C).
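The sketch below illustrates the two-pool structure under stated assumptions: the dynamic pool follows a noisy turnover update whose polymerisation rate is transiently scaled during LTP, and the stable pool low-pass filters it. All rate constants are illustrative placeholders, not fitted values from the study, and the full Markov chain over branching, capping and severing is collapsed into a single effective rate.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.1, 7200.0                  # s; simulate two hours
n = int(T / dt)

# Illustrative rate constants (not fitted values from the study)
k_poly, k_depoly = 1.0, 0.02         # dynamic-pool growth and turnover (1/s)
tau_stable = 1800.0                  # crosslinker-set low-pass time constant (s)
k_bind = 3.0e-4                      # coupling of dynamic into stable actin (1/s)

D = np.zeros(n); S = np.zeros(n)     # dynamic and stable pool sizes
D[0] = k_poly / k_depoly             # baseline dynamic actin
S[0] = k_bind * tau_stable * D[0]    # baseline stable actin

for i in range(1, n):
    t = i * dt
    ltp = 3.0 if 600.0 <= t < 900.0 else 1.0     # transient LTP-induced modulation
    # Dynamic pool: turnover with LTP-scaled polymerisation plus small fluctuations
    D[i] = max(D[i - 1] + (ltp * k_poly - k_depoly * D[i - 1]) * dt
               + 0.5 * np.sqrt(dt) * rng.normal(), 0.0)
    # Stable pool: low-pass filter of the dynamic pool
    S[i] = S[i - 1] + (k_bind * D[i - 1] - S[i - 1] / tau_stable) * dt
```

With these placeholder values, the dynamic pool relaxes back within minutes after the LTP window, while the stable pool overshoots and decays on the slow crosslinker timescale, the qualitative behaviour described in the Results.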
Results
We first test whether the dynamic pool alone can support memory on a timescale of hours without stable actin. At the onset of LTP, there is a rapid increase in dynamic actin, which increases the outward-directed force and, consequently, the spine volume. However, these changes only last as long as the actin dynamics is modulated. When we introduce the stable pool, it exhibits an overshoot that persists on the timescale of hours and can hence serve as the synaptic tag. As more stable actin significantly increases the actin-generated force, this also translates into a long-lasting spine volume increase (Fig 1D-H). To validate these model predictions experimentally, we performed chemical LTP on hippocampal spines and used FRAP to assess stable actin content after LTP. Here too, stable actin shows a significant increase, agreeing with the overshoot in the model (Fig 1H).
Discussion
Using a combination of experiments and simulations, we have demonstrated that the dynamics of the stable actin pool after LTP-inducing stimulation lead to a long-lasting alteration of actin dynamics and spine geometry. These dynamics fulfil the fundamental criteria for the tag, that is, synapse specificity, independence from protein synthesis, and decay within hours. Thus, we present evidence that the biophysical implementation of the synaptic tag may be based on the complex interaction of actin with the spine membrane.



Figure 1. Actin-spine membrane interactions. A-B: membrane deformation from imbalance between actin-generated force and membrane counter force. C: time evolution of actin dynamics with several associated processes/proteins. D: simulated spine at different time instants. t0: time of stimulation. E: Control and LTP spine volumes. F: amount of dynamic actin. G: amount of stable actin. H: stable actin fraction
Acknowledgements
This work was funded by the German Science Foundation under CRC1286 ”Quantitative Synaptology”, projects C03 and A03. We would like to thank Simon Dannenberg, Stefan Klumpp, Jannik Luboeinski, Francesco Negri, Christian Tetzlaff and Florentin Woergoetter for fruitful discussions on the project.
References
1. https://doi.org/10.1038/385533a0
2. https://doi.org/10.1038/nrn2963
3. https://doi.org/10.1002/iub.2261
4. https://doi.org/10.1016/j.neuron.2014.03.021
5. https://doi.org/10.3389/fnsyn.2020.00009
Speakers
Monday July 7, 2025 11:50 - 12:10 CEST
Auditorium - Plenary Room

12:10 CEST

O12: Identifying Dynamic-based Closed-loop Targets for Speech Processing Cochlear Implants
Monday July 7, 2025 12:10 - 12:30 CEST
Identifying Dynamic-based Closed-loop Targets for Speech Processing Cochlear Implants

Cynthia Steinhardt*1, Menoua Keshishian2, Kim Stachenfeld1,3, Larry Abbott1
1 Center for Theoretical Neuroscience, Zuckerman Brain Science Institute, Columbia University, New York, New York USA
2 Department of Electrical Engineering, Columbia University, New York, New York USA
3 DeepMind, Google, London, United Kingdom


*Email: cs4248@columbia.edu



Introduction
Since the development of the first cochlear implant (CI) in 1957, over one million people have used these devices to regain hearing. However, CIs have a number of deficits, such as low efficacy in noise, and these deficits remain poorly understood [1]. CI algorithm research has focused on optimizing single-neuron voltage-driven activations in the cochlea based on low-level auditory modeling, but little work has focused on capturing known features of hierarchical speech processing across the brain [2]. We created a model system to investigate how CI-encoded speech affects phoneme and word comprehension, uncovering a dynamics-based signature for potential closed-loop CI applications.
Methods
We trained a DeepSpeech2 [3] model to convert spectrograms to phonemes using CTC loss. Speech inputs were sourced from the LibriSpeech dataset. Speech was processed via the AB Generic Toolbox [4] to generate electrodograms, creating CI-transformed inputs, or given directly to the model to simulate natural hearing. The model, trained on natural spectrograms, was then tested on CI-transformed inputs. Behavioral experiments were performed and compared to human results. We analyzed phoneme processing dynamics using a distance metric to determine convergence patterns, and tested dynamic signatures for feedback control [5].
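For orientation, a minimal CTC training step on spectrogram frames in PyTorch might look as follows; the two-layer network here is a stand-in for the full DeepSpeech2 convolutional-recurrent stack, and all shapes are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative shapes; the real DeepSpeech2 stack uses conv + recurrent layers.
n_mels, n_phonemes, T_in, T_lab, batch = 80, 40, 200, 20, 4
model = nn.Sequential(nn.Linear(n_mels, 256), nn.ReLU(),
                      nn.Linear(256, n_phonemes + 1))   # +1 for the CTC blank

spec = torch.randn(T_in, batch, n_mels)                 # (time, batch, mel bins)
log_probs = model(spec).log_softmax(dim=-1)             # (T, N, C) as CTCLoss expects
targets = torch.randint(1, n_phonemes + 1, (batch, T_lab))
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.full((batch,), T_in),
                           torch.full((batch,), T_lab))
loss.backward()   # one optimisation step of phoneme transcription training
```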
Results
Our model exhibited human-like increases in phoneme reaction time with CI-transformed inputs and noise; phoneme confusion and word errors mirrored human behavior as well [5]. Analysis revealed a specific time window per layer in which correct phoneme comprehension dynamics converged for all phonemes, with increasing delays deeper in the network. We created a representation distance metric, measured via a Wasserstein distance between dynamics during comprehension, and found that it correlated (up to 0.78) with the behavioral confusion of the model while processing these phonemes in sentences. Using a linear closed-loop controller, we then successfully pushed dynamics toward correct phoneme perception using this converged representation as a target.
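A simplified version of such a representation distance, computed here as the mean per-unit 1-D Wasserstein distance between two hidden-state trajectories; the study's exact construction may differ.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def representation_distance(h_a, h_b):
    """Distance between two phoneme-processing trajectories (time x units):
    mean 1-D Wasserstein distance across hidden units."""
    return np.mean([wasserstein_distance(h_a[:, u], h_b[:, u])
                    for u in range(h_a.shape[1])])

# Hypothetical hidden-state trajectories for two phoneme presentations
rng = np.random.default_rng(1)
h_correct = rng.normal(size=(100, 64))                     # converged representation
h_confused = h_correct + rng.normal(0.5, 1.0, (100, 64))   # perturbed dynamics
print(representation_distance(h_correct, h_confused))
```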
Discussion
This study presents a plausible model for speech perception with and without a CI, validated against human data. We identify a dynamic signature predicting comprehension or confusion within 100 ms, a feasible intervention window. We demonstrate its use for closed-loop feedback and find evidence of human EEG evoked responses with similar dynamics [6], suggesting a potential EEG-based CI parameter selection method. We thus show the plausibility of a new cochlear implant paradigm: instead of mimicking cochlear processing, we determine pulse parameters that drive desired population-level neural representations of speech. This approach may generalize to other neural implants as our understanding of those systems improves.



Acknowledgements
We thank the Simons Society of Fellows (965377), Gatsby Charitable Trust (GAT3708), Kavli Foundation, and NIH (R01NS110893) for support.
References
● Boisvert, I., et al. (2020). CI outcomes in adults. PLoS One, 15(5), e0232421.
● Rødvik, A. K., et al. (2018). CI vowel/consonant ID. J Speech Lang Hear Res, 61(4), 1023-1050.
● Amodei, D., et al. (2015). Deep Speech 2. arXiv:1512.02595.
● Jabeim, A. (2024). AB-Generic-Python-Toolbox. GitHub.
● Steinhardt, C. R., et al. (2024). DeepSpeech CI performance. arXiv:2407.20535.
● Finke, M., et al. (2017). Stimulus effects on CI users. Audiol Neurotol, 21(5), 305-315.


Monday July 7, 2025 12:10 - 12:30 CEST
Auditorium - Plenary Room

12:30 CEST

Lunch break
Monday July 7, 2025 12:30 - 14:00 CEST
Monday July 7, 2025 12:30 - 14:00 CEST

12:30 CEST

OCNS Board meeting
Monday July 7, 2025 12:30 - 14:00 CEST
Monday July 7, 2025 12:30 - 14:00 CEST
TBA

14:00 CEST

Oral session 4: Modeling Disease
Monday July 7, 2025 14:00 - 15:50 CEST
Monday July 7, 2025 14:00 - 15:50 CEST
Auditorium - Plenary Room

14:01 CEST

FO4: Automated identification of disease mechanisms in hiPSC-derived neuronal networks using simulation-based inference
Monday July 7, 2025 14:01 - 14:30 CEST
Automated identification of disease mechanisms in hiPSC-derived neuronal networks using simulation-based inference

Nina Doorn*1, Michel van1,2, Monica Frega3

1Department of Clinical Neurophysiology, University of Twente, Enschede, The Netherlands

2Department of Neurology and Clinical Neurophysiology, Medisch Spectrum Twente, The Netherlands

3Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genova, Italy


*Email: n.doorn-1@utwente.nl


Introduction
Human induced pluripotent stem cell (hiPSC)-derived neuronal networks on multi-electrode arrays (MEAs) are a powerful tool to study neurological disorders in vitro [1]. The electric activity patterns of these networks differ between healthy and patient-derived neurons, reflecting the underlying pathology (Fig. 1A). However, elucidating the underlying molecular mechanisms is challenging and requires extensive, costly, and hypothesis-driven additional experiments. Biophysical models can link observable network activity to underlying molecular mechanisms by estimating model parameters that simulate the experimental observations. However, parameter estimation in such models is difficult due to stochasticity, non-linearity, and parameter degeneracy.

Methods
Here, we address this challenge using simulation-based inference (SBI), a machine-learning approach that allows efficient statistical inference of biophysical model parameters using only simulations [2]. We apply SBI to our previously validated biophysical model of hiPSC-derived neuronal networks on MEA [3], which includes Hodgkin-Huxley-type neurons and detailed synaptic models (Fig. 1B). To train SBI, we simulated 300,000 network configurations, varying key parameters governing synaptic and intrinsic neuronal properties (Fig. 1C). We used a neural density estimator to infer posterior distributions of these model parameters given experimental MEA recordings from healthy, pharmacologically treated, and patient-derived networks (Fig 1D).
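A minimal sketch of this SBI workflow using the sbi package's SNPE interface; the toy simulator and the two-parameter prior below stand in for the biophysical network model and its full parameter set, and all values are illustrative.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Prior over two illustrative biophysical parameters (not the study's exact set)
prior = BoxUniform(low=torch.tensor([0.1, 0.1]), high=torch.tensor([10.0, 10.0]))

def simulator(theta):
    """Stand-in for the network model: maps parameters to summary
    features of simulated MEA activity (toy mapping plus noise)."""
    g_syn, tau_d = theta
    return torch.stack([g_syn / tau_d, g_syn * tau_d]) + 0.05 * torch.randn(2)

theta = prior.sample((10_000,))                       # the study used 300,000 simulations
x = torch.stack([simulator(t) for t in theta])

inference = SNPE(prior=prior)                         # neural posterior estimation
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_observed = torch.tensor([2.0, 1.5])                 # features from a recording
samples = posterior.sample((1_000,), x=x_observed)    # approximate posterior draws
```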

Results
SBI accurately inferred ground-truth parameters in synthetic data and successfully identified known disease mechanisms in patient-derived neuronal networks. In networks from patients with the genetic epilepsies Dravet syndrome and GEFS+, SBI predicted reduced sodium and potassium conductances and increased synaptic depression, which was experimentally verified. In CACNA1A haploinsufficient networks, SBI correctly identified impaired connectivity. Additionally, SBI detected drug-induced changes, such as prolonged synaptic depression following Dynasore treatment.
Discussion
SBI enables automated and probabilistic inference of biophysical parameters, offering advantages over traditional parameter estimation methods, which can be time-consuming, lack uncertainty quantification, or cannot deal with parameter degeneracy. Our results show how SBI can be used with biophysical models to identify possible disease mechanisms from patient-derived neuronal data. Our proposed analysis pipeline enables researchers to extract crucial mechanistic information from MEA measurements in a systematic, cost-effective, and rapid manner, paving the way for targeted experiments and novel insights into disease.






Figure 1. A) The activity of in vitro neuronal networks cultured from hiPSCs of healthy controls and patients is measured using MEAs. B) The computational model with biophysical parameters in blue. C) A neural density estimator is trained on model simulations. Afterward, experimental data are passed through the estimator to approximate the D) posterior distributions. Adapted from [4].
Acknowledgements
This work was supported by the Netherlands Organisation for Health Research and Development ZonMW; BRAINMODEL PSIDER program 10250022110003 (to M.F.). We thank Eline van Hugte, Marina Hommersom, and Nael Nadif Kasri for providing MEA recordings from patient-derived and genome-edited in vitro neuronal networks.
References
1. https://doi.org/10.1016/J.STEMCR.2021.07.001
2. https://doi.org/10.7554/ELIFE.56261
3. https://doi.org/10.1016/J.STEMCR.2024.09.001
4. https://doi.org/10.1101/2024.05.23.595522


Speakers
Monday July 7, 2025 14:01 - 14:30 CEST
Auditorium - Plenary Room

14:30 CEST

O13: Linking hubness, embryonic neurogenesis, transcriptomics and diseases in human brain networks
Monday July 7, 2025 14:30 - 14:50 CEST
Linking hubness, embryonic neurogenesis, transcriptomics and diseases in human brain networks

Ibai Diez*1,2, Fernando Garcia-Moreno*3,4,5, Nayara Carral-Sainz6, Sebastiano Stramaglia7, Alicia Nieto-Reyes8, Mauro D’Amato5,9,10, Jesús Maria Cortes5,11,12, Paolo Bonifazi5,11,13


1Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
2Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA.
3Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain.
4Department of Neuroscience, Faculty of Medicine and Odontology, UPV/EHU, Barrio Sarriena s/n, Leioa, Bizkaia, Spain.
5IKERBASQUE: The Basque Foundation for Science, Bilbao, Spain.
6Departamento de Ciencias de la Tierra y Física de la Materia Condensada, Facultad de Ciencias, Universidad de Cantabria, Santander, Spain.
7Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, and INFN, Sezione di Bari, Italy.
8Departamento de Matemáticas, Estadística y Computación, Facultad de Ciencias, Universidad de Cantabria, Santander, Spain.
9Department of Medicine and Surgery, LUM University, Casamassima, Italy.
10Gastrointestinal Genetics Lab, CIC bioGUNE - BRTA, Derio, Spain.
11Computational Neuroimaging Lab, Biocruces-Bizkaia Health Research Institute, Barakaldo, Spain.
12Department of Cell Biology and Histology, University of the Basque Country (UPV/EHU), Leioa, Spain.
13Department of Physics, University of Bologna, Italy.


* These authors contributed equally to this work; Corresponding author: paol.bonifazi@gmail.com

Intro. The human brain is organized across multiple spatial scales, where micro-scale circuits integrate into macro-scale networks via long-range connections. Understanding the connectivity rules shaping networks is key to deciphering brain function and the effects of neurological damage. Previous studies have explored brain network maturation, but a link between adult connectivity and the sequential, evolutionarily preserved neurogenesis remains unestablished. Inspired by the preferential attachment model in network science shaped by the “rich gets richer” principle [1], this study [2] hypothesizes that brain network topology follows an "older gets richer" principle, where earlier-developed circuits play central roles in adult connectivity. Our hypothesis extrapolates to the macro-scale level previous evidence that hippocampal hubs are early-born GABAergic neurons [3-5].
Methods. Brain circuits were categorized by their First neurogenic Time (FirsT), determined from developmental neuromeres. Eighteen macro-circuits (MACs) were identified based on available neurodevelopmental data. Structural and functional brain networks were reconstructed using 7-Tesla dMRI and resting-state fMRI from 184 subjects. Connectivity metrics were assessed at high (2,566 ROIs) and low (18 MACs) resolutions. Eigenvector centrality was calculated for each ROI and MAC, and correlations between FirsT and connectivity patterns were computed. Brain transcriptomic data were mapped to connectivity metrics, and enrichment analysis identified associated biological processes and disease relevance.
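A toy sketch of the centrality-vs-FirsT analysis: eigenvector centrality computed as the leading eigenvector of a symmetric connectivity matrix, correlated with neurogenic times. The connectome and FirsT values below are random placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

def eigenvector_centrality(W):
    """Leading eigenvector of a symmetric connectivity matrix,
    normalised to unit maximum (standard eigenvector centrality)."""
    vals, vecs = np.linalg.eigh(W)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.max()

# Toy example: 18 MACs with a random symmetric connectome and FirsT values
rng = np.random.default_rng(2)
W = rng.random((18, 18)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
first_t = rng.uniform(28, 56, size=18)   # hypothetical neurogenic times (days)

rho, p = spearmanr(eigenvector_centrality(W), first_t)
print(f"FirsT vs centrality: Spearman rho={rho:.2f}, p={p:.3f}")
```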
Results. Significant correlations between structural connectivity and FirsT supported the "older gets richer" principle, with early-born circuits exhibiting higher structural hubness. In contrast, functional centrality was positively correlated with FirsT, highlighting late-maturing circuits' functional prominence. Connectivity strength was stronger among circuits with similar neurogenic timing, supporting a "preferential age attachment" mechanism. Gene expression analysis revealed correlations with FirsT and connectivity metrics, with enriched pathways linked to neurodevelopment, synaptic function, and neuropsychiatric disorders. Disease-associated genes (e.g., APOE for Alzheimer’s, SCN1A for epilepsy) showed significant enrichment at correlation extremes, suggesting differential genetic influences on brain network organization and pathology susceptibility.

Discussion. The study examines adult brain networks reconstructed from MRI to analyze how early neurogenesis affects structural and functional connectivity. Structural findings confirm that older brain regions act as stronger hubs ("older gets richer"). Functional and structural networks follow a "preferential age attachment" rule, linking neurogenesis timing to network topology. Genetic analysis ties neurodevelopmental disorders to network centrality, highlighting disease-linked transcriptional alterations.



Acknowledgements
We thank M De Pittá, D Papo, D Marinazzo, A Mazzoni, Y Ben-Ari for comments. Funds: ANR by MCIN/AEI/10.13039/501100011033 and “ERDF”; PB by Ikerbasque, the Ministerio de Economía, Industria y Competitividad (MICINN, Spain) and Maratoia EITB (grants PID2021-127163NB-I00 and BIO22/ALZ/010/BCB); FGM by Ikerbasque, MICINN (grant PID2021-125156NB-I00) and Basque Gov (grant PIBA_2022_1_0027).
References
1. Barabási, A.-L. & Albert, R. (1999). Emergence of Scaling in Random Networks. Science, 286, 509–512.
2. Diez, I., et al. https://doi.org/10.1101/2022.04.01.486541
3. Bonifazi, P., et al. (2009). GABAergic Hub Neurons Orchestrate Synchrony in Developing Hippocampal Networks. Science, 326, 1419–1424.
4. Picardo, M. A., et al. (2011). Pioneer GABA Cells Comprise a Subpopulation of Hub Neurons in the Developing Hippocampus. Neuron, 71, 695–709.
5. Bocchio, M., et al. (2020). Hippocampal hub neurons maintain distinct connectivity throughout their lifetime. Nat Commun, 11, 4559.
Speakers
Monday July 7, 2025 14:30 - 14:50 CEST
Auditorium - Plenary Room

14:50 CEST

O14: Back to the Future: Integrating Event-Based and Network Diffusion Models to Predict Individual Tau Progression in Alzheimer's Disease
Monday July 7, 2025 14:50 - 15:10 CEST
Back to the Future: Integrating Event-Based and Network Diffusion Models to Predict Individual Tau Progression in Alzheimer's Disease

Robin Sandell*1, Justin Torok1, Kamalini Ranasinghe1, Srikantan Nagarajan1, Ashish Raj1

1Department of Radiology and Biomedical Imaging, University of California, San Francisco, United States

*Email: robin.sandell@ucsf.edu


Introduction

This paper presents a novel method combining an Event-Based Model (EBM) and a Network Diffusion Model (NDM) to predict individual tau protein progression in Alzheimer's disease. Statistical EBMs can infer longitudinal progression from cross-sectional data but lack mechanistic understanding, while biophysical NDMs provide mechanistic clarity but require data on a longitudinal timescale. Our hybrid approach overcomes these limitations. Using only single-visit data, our model can go back in time to infer initial seeding patterns and predict future progression. Analysis reveals high initial heterogeneity in seeding patterns that converges over time with two main seed archetypes correlating with distinct clinical presentations.



Methods


We analyzed data from 650 patients from the Alzheimer’s Disease Neuroimaging Initiative, including tau-PET, MRI, and cognitive scores. The EBM assigned a disease stage to each patient based on their biomarker values, enabling a common timescale across subjects [1,2]. The NDM simulated tau progression on the brain’s structural connectivity with two rate parameters: tau accumulation rate and spread rate [3,4]. We optimized NDM parameters and the tau seed pattern to accurately predict each subject’s empirical tau map at their EBM-assigned stage. Applications include prediction of individuals’ future tau patterns, analysis of inter-subject heterogeneity over time, and identification of tau seed archetypes through clustering analysis.
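A common form of such a two-parameter NDM, sketched below under the assumption of linear diffusion on the connectome Laplacian plus first-order accumulation; the study's exact formulation and fitted rates may differ, and the connectome here is a random placeholder.

```python
import numpy as np
from scipy.linalg import expm

def ndm_predict(x0, C, t, beta=1.0, alpha=0.1):
    """Network diffusion with first-order accumulation: tau spreads along
    the structural connectome C while growing at rate alpha.
    dx/dt = -beta * L x + alpha * x, solved in closed form."""
    L = np.diag(C.sum(axis=1)) - C          # graph Laplacian of the connectome
    return expm(t * (alpha * np.eye(len(C)) - beta * L)) @ x0

# Toy example: seed the first region of a small random connectome
rng = np.random.default_rng(3)
C = rng.random((10, 10)); C = (C + C.T) / 2; np.fill_diagonal(C, 0)
x0 = np.zeros(10); x0[0] = 1.0              # focal "entorhinal-like" seed
tau_map = ndm_predict(x0, C, t=5.0)         # predicted regional tau at a later stage
```

In the hybrid scheme, the seed x0 and the two rates are optimised per subject so that the prediction matches the empirical tau map at the EBM-assigned stage.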



Results

Our hybrid model successfully predicted empirical tau using individual tau seed patterns (mean R=0.85) (Fig. 1b). Longitudinal validation confirmed the model's predictive ability (mean R=0.81) (Fig. 1b). Analysis of tau patterns revealed decreasing heterogeneity over disease progression (Fig. 1c). Two primary seed archetypes emerged: focal entorhinal (typical AD) and diffuse temporal (Fig. 1d). The diffuse temporal pattern correlated with earlier disease onset, higher APOE4 carrier frequency, younger age, and faster tau accumulation rates, suggesting a more aggressive disease variant despite similar cognitive impairment levels at diagnosis (Fig. 1e,f).


Discussion

This paper presents a novel hybrid approach combining an Event-Based Model and a Network Diffusion Model to predict individual tau progression in Alzheimer's disease. The method infers initial seeding patterns from single-visit data to forecast future progression. Surprisingly, heterogeneity across subjects was highest at disease onset and decreased over time, suggesting convergence rather than divergence of pathology. Two primary seed archetypes emerged: focal entorhinal (typical AD) and diffuse temporal (associated with earlier onset, higher APOE4 frequency, and faster progression). The hybrid model outperformed prior work while providing mechanistic insights into tau progression that could inform personalized therapeutic strategies [2].








Figure 1. a. Project flow chart. b. Illustration of NDM model fitting and validation for a single patient. c. Distribution of pairwise R correlations between subjects' model-predicted tau at each stage, indicating a process of convergence in tau patterning as disease progresses. d. Emergent tau seed archetypes. e. Demographic variables for each archetype. f. Total tau over time for each archetype.
Acknowledgements
We thank ADNI for making their data available to us.

References


● Aksman, L.M., et al. (2021). pySuStaIn: Python implementation of SuStaIn. SoftwareX, 16. https://doi.org/10.1016/j.softx.2021.100811
● Vogel, J.W., et al. (2021). Four trajectories of tau deposition in AD. Nature Medicine, 27(5). https://doi.org/10.1038/s41591-021-01309-6
● Raj, A., et al. (2012). Network diffusion model of disease progression. Neuron, 73(6). https://doi.org/10.1016/j.neuron.2011.12.040
● Anand, C., et al. (2022). Microglia effects on tauopathy using nexopathy models. Scientific Reports, 12(1). https://doi.org/10.1038/s41598-022-24687-4


Speakers
Monday July 7, 2025 14:50 - 15:10 CEST
Auditorium - Plenary Room

15:10 CEST

O15: The Virtual Parkinsonian Patient: the effects of L-dopa and Deep brain Stimulation on whole-brain dynamics
Monday July 7, 2025 15:10 - 15:30 CEST
The Virtual Parkinsonian Patient: the effects of L-dopa and Deep brain Stimulation on whole-brain dynamics

Marianna Angiolelli*1,2, Gabriele Casagrande1, Letizia Chiodo2, Damien Depannemaecker1, Viktor Jirsa1, Pierpaolo Sorrentino1,3


1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
2Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy
3Department of Biomedical Sciences, University of Sassari, Sassari, Italy


*Email: marianna.angiolelli@unicampus.it
Introduction
Parkinson’s disease (PD) is a progressive neurodegenerative disease characterized by the loss of dopaminergic neurons in the substantia nigra. The primary treatment for PD involves the administration of levodopa, but its long-term use is associated with complications, necessitating alternative therapeutic strategies. One approach is deep brain stimulation (DBS), a neuromodulatory treatment that delivers electrical stimulation to specific brain regions (most often, the subthalamic nucleus). While DBS can be an effective therapy, optimal stimulation parameters are specific to each patient, and finding them can be challenging. Today, parameter tuning is based on a trial-and-error process, which is time-consuming, exhausting for the patient, requires a highly skilled dedicated team, and has very high chances of missing the optimal setting.
Methods
To predict the effects of DBS on large-scale brain dynamics, we employed a mean-field neural mass model based on the adaptive quadratic integrate-and-fire (aQIF) framework [1], in which dopamine is included. The model was extended to incorporate an external current simulating biphasic stimulation, mimicking DBS effects. Each brain region was modeled as a neural mass, with connectivity based on individual structural connectomes. The model includes excitatory, inhibitory, and neuromodulatory connections. EEG and deep electrode recordings in the STN validated the predictions. Bayesian inversion with a deep neural network inferred the neural state in ON/OFF conditions, quantifying parameter uncertainty.
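As an illustration of the stimulation term, a charge-balanced biphasic pulse train at a typical DBS frequency can be generated as below; the amplitude and pulse width are illustrative placeholders, not the fitted stimulation parameters of the study.

```python
import numpy as np

def biphasic_dbs_current(t_ms, freq_hz=130.0, width_ms=0.06, amp=1.0):
    """Charge-balanced biphasic pulse train (cathodic then anodic phase),
    used as an external current added to a neural mass."""
    phase = np.mod(t_ms, 1000.0 / freq_hz)           # position within each cycle
    out = np.zeros_like(t_ms)
    out[phase < width_ms] = -amp                     # cathodic phase
    out[(phase >= width_ms) & (phase < 2 * width_ms)] = amp   # anodic phase
    return out

t = np.arange(0.0, 50.0, 0.01)   # ms
I_dbs = biphasic_dbs_current(t)  # add to the aQIF mass equations as external drive
```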
Results
First, we investigated different conditions, varying simulated L-Dopa administration and then changing DBS parameters, analyzing large-scale brain activity and its impact on the topological properties of neural avalanches (spontaneous bursts of activations). For all patients, we correctly inferred that the dopaminergic tone is higher given the dynamics observed after administration of L-Dopa, and lower before the administration of L-Dopa [2]. The same approach enables us to tell apart pre- and post-DBS states across multiple patients, quantifying the effects of stimulation on large-scale brain dynamics.
Discussion
This work provides a framework to understand how L-Dopa and DBS influence large-scale neural activity, offering insights into mechanisms and optimization for PD treatment. Unlike most PD models focusing on beta-range activity [3], we emphasize aperiodic activity, which has only received limited attention in Parkinson’s disease thus far. Furthermore, we focus on large-scale dynamics and efficient parameter estimation with uncertainty, rather than fitting a cost function. This approach explicitly accounts for dopamine levels and stimulation amplitude, bridging pathophysiology and personalized predictions of clinical effectiveness.



Acknowledgements
The project leading to this publication has received funding from the Excellence Initiative of Aix-Marseille Université - A*Midex, a French “Investissements d’Avenir programme” AMX-21-IET-017
References
[1] Depannemaecker, D., Duprat, C., Casagrande, G., Saggio, M., Athanasiadis, A. P., Angiolelli, M., ... & Jirsa, V. (2024). A next generation neural mass model with neuromodulation. bioRxiv, 2024-06.
[2] Angiolelli, M., Depannemaecker, D., ... & Sorrentino, P. (2024). The Virtual Parkinsonian Patient. medRxiv, 2024-07.
[3] Meier, J. M., Perdikis, D., Blickensdörfer, A., ... & Ritter, P. (2022). Virtual deep brain stimulation: Multiscale co-simulation of a spiking basal ganglia model and a whole-brain mean-field model with The Virtual Brain. Experimental Neurology, 354, 114111.

Monday July 7, 2025 15:10 - 15:30 CEST
Auditorium - Plenary Room

15:30 CEST

O16: Cortical Oscillatory Dynamics in Parkinsonian Networks: Biomarkers and the Potential of Theta Frequency Stimulation
Monday July 7, 2025 15:30 - 15:50 CEST
Cortical Oscillatory Dynamics in Parkinsonian Networks: Biomarkers and the Potential of Theta Frequency Stimulation

June Jung1, Donald W Doherty1, Adam Newton1, Adriana Galvan5, Thomas Wichmann5, Salvador Dura-Bernal1, Hong-Yuan Chu4, Samuel Neymotin3, William W Lytton1,2

1 Department of Physiology and Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, USA
2 Kings County Hospital, Brooklyn, NY, USA
3 Nathan Kline Institute, Orangeburg, NY, USA
4 Georgetown University, Washington, DC, USA
5 Emory University, Atlanta, GA, USA
Introduction

Parkinson’s disease (PD) is marked by characteristic motor symptoms including tremors, stiffness, slowed movement, and balance issues, along with non-motor symptoms like cognitive challenges and difficulty making decisions. Symptom onset and severity vary among individuals. While dopaminergic neuron degeneration in the substantia nigra pars compacta (SNc) is a primary cause of motor dysfunction, recent studies highlight the critical role of disrupted oscillatory activity in the primary motor cortex (M1) in PD pathology. Mouse models of PD, including the 6-OHDA and MitoPark mouse, have shown reduced excitability in corticospinal pyramidal tract (PT) neurons.
Methods
We adapted an established mouse primary motor cortex (M1) framework to build a Parkinsonian motor cortex (PD M1) computational model and investigate changes in neural oscillatory activity. The model incorporated experimental observations from the MitoPark mouse, including decreased PT intrinsic excitability and reduced thalamocortical synapse strength onto PT neurons (decreased 25% at 16–18 weeks; 50% at 25–28 weeks), correlating with disease progression. Multiple oscillation measures were analyzed as potential biomarkers for tracking disease severity.
Results
In vitro results were used to simulate in vivo Parkinsonian cortical activity, revealing progressively disrupted neuronal firing and increased beta oscillations (~20 Hz) with disease progression. Beta-gamma coupling and the modulation index were two oscillatory measures that were significantly reduced under Parkinsonian conditions, with the modulation index progressively declining as the disease advanced. Theta-frequency stimulation suppressed beta bursts, enhanced beta-gamma coupling, and partially restored the disrupted cortical network activity caused by PD pathophysiology.
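For reference, a Tort-style phase-amplitude modulation index between beta phase and gamma amplitude can be sketched as follows; the study's exact estimator and frequency bands may differ, and the values here are illustrative.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_index(lfp, fs, f_phase=(13, 30), f_amp=(50, 80), n_bins=18):
    """Tort-style modulation index: normalised KL divergence of the
    gamma-amplitude distribution over beta-phase bins from uniform."""
    def bandpass(x, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)
    phase = np.angle(hilbert(bandpass(lfp, *f_phase)))   # beta phase
    amp = np.abs(hilbert(bandpass(lfp, *f_amp)))         # gamma amplitude envelope
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= lo) & (phase < hi)].mean()
                         for lo, hi in zip(bins[:-1], bins[1:])])
    p = mean_amp / mean_amp.sum()                        # amplitude distribution
    return np.sum(p * np.log(p * n_bins)) / np.log(n_bins)
```

Applied to simulated M1 population activity, a declining value of this index would correspond to the progressive loss of beta-gamma coupling described above.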
Discussion
These findings suggest that the modulation index may serve as a biomarker for tracking Parkinsonian disease severity. Moreover, theta-frequency stimulation of inhibitory interneurons may help restore imbalanced cortical oscillations and could offer an alternative or complementary strategy to current high-frequency deep brain stimulation (DBS) of the subthalamic nucleus (STN) in PD patients.





Acknowledgements
This research was funded in part by Aligning Science Across Parkinson’s [ASAP-020572] through the Michael J. Fox Foundation for Parkinson’s Research (MJFF). For the purpose of open access, the author has applied a CC BY public copyright license to all Author Accepted Manuscripts arising from this submission.
References
1. Chen, L., Daniels, S., Kim, Y., & Chu, H.-Y. (2021). Decreased excitability of motor cortical neurons in parkinsonism. Journal of Neuroscience, 41(25), 5553–5565. https://doi.org/10.1523/JNEUROSCI.2694-20.2021

2. Dura-Bernal, S., et al. (2023). Multiscale model of M1 circuits. Cell Reports, 42(6), 112574. https://doi.org/10.1016/j.celrep.2023.112574
Monday July 7, 2025 15:30 - 15:50 CEST
Auditorium - Plenary Room

15:50 CEST

Coffee break
Monday July 7, 2025 15:50 - 16:20 CEST
Monday July 7, 2025 15:50 - 16:20 CEST

16:20 CEST

Poster session 2
Monday July 7, 2025 16:20 - 18:20 CEST
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P001: A Recurrent Neural Network Model of Cognitive Map Development
Monday July 7, 2025 16:20 - 18:20 CEST
P001 A Recurrent Neural Network Model of Cognitive Map Development

Marco P. Abrate*1, Tom J. Wills1, Caswell Barry1

1Department of Cell and Developmental Biology, University College London, London, UK

*Email: marco.abrate@ucl.ac.uk
Introduction
Animals use an allocentric cognitive map of self-location, constructed from sequential egocentric observations, to navigate flexibly [1-4]. In the hippocampal formation, spatially modulated neurons support navigation, such as place cells, head direction (HD) cells, grid cells, and boundary cells [5-8]. The early development of these neurons is well characterised [9], but the mechanisms driving maturation and the relative timing of their emergence are unclear. We hypothesize that changes in locomotion shape the development of spatial representations. Combining behavioural analysis with a recurrent neural network (RNN), we show that movement statistics determine the development of spatial tuning, mirroring biological timelines.
Methods
Rats from post-natal day 12 (P12) to P25 [10-12] were grouped according to their movement statistics. Rodent trajectories were simulated, using the RatInABox toolbox [13], in a square arena matching these locomotion stages. An RNN was trained to predict upcoming visual stimuli based on previous visual and vestibular inputs, mimicking the predictive coding function of biological systems [14]. The hidden units' activity was analysed against the position and facing direction of the agent. Finally, these units were classified as place units based on their spatial information content, or as HD units based on their Rayleigh vector length and KL divergence from a uniform distribution - standard metrics for hippocampal neural recordings.
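A minimal sketch of the HD-unit criterion: the Rayleigh (mean resultant) vector length of a unit's directional tuning curve, which approaches 1 for sharply tuned units and 0 for untuned ones. The tuning curves below are synthetic illustrations, not model outputs.

```python
import numpy as np

def rayleigh_vector_length(rates, angles):
    """Mean resultant length of a directional tuning curve: firing rate
    per head-direction bin weighted onto the unit circle."""
    r = np.sum(rates * np.exp(1j * angles)) / np.sum(rates)
    return np.abs(r)

angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)   # 10-degree HD bins
tuned = np.exp(np.cos(angles - np.pi))                   # von-Mises-like HD tuning
flat = np.ones(36)                                       # untuned unit
print(rayleigh_vector_length(tuned, angles),   # close to 1: HD-like
      rayleigh_vector_length(flat, angles))    # 0: untuned
```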
Results
Behavioural analysis revealed three distinct stages of locomotion during development with median ages P14, P15, and P21, respectively (Fig. 1a). The RNN trained on adult-like locomotion (Fig. 1b), solving the predictive task with biologically plausible inputs, showed spatially tuned units resembling hippocampal place and head direction cells (Fig. 1c). Crucially, when trained separately on simulated locomotion styles corresponding to the identified developmental stages, the model recapitulated the progressive emergence of spatial tuning observed experimentally. Specifically, spatial measures and consequently the number of units classified as place and head direction neurons steadily increased with improved locomotion (Fig. 1d).
Discussion
Our model establishes locomotion-dependent sensory sampling as a sufficient mechanism for cognitive map formation, extending predictive coding theories [3,4,15]. The RNN's ability to replicate spatial cell maturation patterns suggests that sensory-motor experience significantly shapes hippocampal spatial tuning. Furthermore, our results inform how manipulations of locomotion or sensory inputs could influence the development of spatial representations, which can then be tested in real-world experiments. Future work will directly compare the RNN's units with hippocampal neurons through Representational Similarity Analysis, search what drives grid pattern formation in our model, and investigate the changes in the geometry of the latent space.
Figure 1. (a) 3-d UMAP representation of rats’ movement statistics coloured into locomotion stages. (b) Example of an agent’s trajectory (left) and snapshot of current visual input (right). (c) Architecture of the RNN. The latent space’s units are analysed for spatial responses. (d) Trend in the number of the RNN’s units classified as Place units (left), HD units (right), or Place and HD units (both).
Acknowledgements
NA
References

1. doi.org/10.1017/S0140525X00063949
2. doi.org/10.1037/h0061626
3. doi.org/10.1016/j.tics.2018.07.006
4. doi.org/10.1038/nn.4650
5. doi.org/10.1016/0006-8993(71)90358-1
6. doi.org/10.1523/JNEUROSCI.10-02-00420.1990
7. doi.org/10.1038/nature03721
8. doi.org/10.1523/JNEUROSCI.1319-09.2009
9. doi.org/10.1002/wcs.1424
10. doi.org/10.1126/science.1188224
11. doi.org/10.1016/j.neuron.2015.05.011
12. doi.org/10.1016/j.cub.2019.01.005
13. doi.org/10.7554/eLife.85274
14. doi.org/10.1017/S0140525X12000477
15. doi.org/10.1016/j.cell.2020.10.024

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P108: Bayesian Inference Across Brain Scales
Monday July 7, 2025 16:20 - 18:20 CEST
P108 Bayesian Inference Across Brain Scales

M. Hashemi*1, N Baldy1, A. Ziaeemehr1, A. Esmaeili1, S. Petkoski1, M. Woodman1, V. Jirsa*1

1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
Email: Meysam.hashemi@univ-amu.fr / viktor.jirsa@univ-amu.fr


Introduction

The process of inference across spatiotemporal scales is essential to identify the underlying causal mechanisms of brain computation and (dys)function. However, there remains a critical need for automated model inversion tools to estimate control (bifurcation) parameters from recordings across brain scales, ideally including uncertainty.

Methods
In this work, we attempt to bridge this gap by providing efficient and automatic Bayesian inference operating across scales. We use state-of-the-art probabilistic machine learning tools employing likelihood-based (MCMC sampling [1, 2]) and likelihood-free (a.k.a. simulation-based inference [3, 4]) approaches.
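As a minimal illustration of the likelihood-based branch, a bare random-walk Metropolis sampler over a control parameter might look as follows; the self-tuning Monte Carlo strategies referenced below replace the fixed proposal step assumed here, and the toy posterior is a placeholder.

```python
import numpy as np

def metropolis(log_post, theta0, n=5000, step=0.1, seed=0):
    """Minimal random-walk Metropolis sampler for a control-parameter
    posterior; accept/reject on the log-posterior ratio."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n, theta.size))
    for i in range(n):
        prop = theta + step * rng.normal(size=theta.size)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:             # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy posterior: Gaussian around a "bifurcation parameter" of -1.5
chain = metropolis(lambda th: -0.5 * np.sum((th + 1.5) ** 2 / 0.2), [0.0])
```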

Results
We demonstrate inference on the parameters and dynamics of spiking neurons, their mean-field approximation at the regional level, and brain network models. We show the benefits of incorporating prior and inference diagnostics, leveraging self-tuning Monte Carlo strategies for unbiased sampling, and deep density estimators for efficient transformations [5]. The performance of these methods is then demonstrated for causal inference in epilepsy [6], multiple sclerosis [7], focal intervention [8], healthy aging [9], and social facilitation [10].

Discussion
This work shows potential to improve hypothesis evaluation across brain scales through uncertainty quantification, and to contribute to advances in precision medicine by enhancing the predictive power of brain models.
Figure 1. Bayesian inference across brain scales. (A) Based on Bayes’ theorem, background knowledge about control parameters (expressed as a prior distribution), is combined with information from observed data (in the form of a likelihood function) to determine the posterior distribution. (B) Examples of the observed and predicted data features.
Acknowledgements
This research has received funding from EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project), No. 101137289 (Virtual Brain Twin Project), and government grant managed by the Agence Nationale de la Recherche reference ANR-22-PESN-0012 (France 2030 program).
References

[1] https://doi.org/10.1016/j.neuroimage.2020.116839
[2] https://doi.org/10.1162/neco_a_01701
[3] https://doi.org/10.1088/2632-2153/ad6230
[4] https://doi.org/10.1101/2025.01.21.633922
[5] https://doi.org/10.1101/2024.10.25.620245
[6] https://doi.org/10.1016/j.neunet.2023.03.040
[7] https://doi.org/10.1016/j.isci.2024.110101
[8] https://doi.org/10.1101/2023.09.08.556815
[9] https://doi.org/10.1016/j.neuroimage.2023.120403
[10] https://doi.org/10.1101/2024.09.09.612006

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P114: Modelling the bistable cortical dynamics of the sleep-onset period
Monday July 7, 2025 16:20 - 18:20 CEST
P114 Modelling the bistable cortical dynamics of the sleep-onset period

Zhenxing Hu*1, Manaoj Aravind1, Nathan Kutz2, Jean-Julien Aucouturier1

1Université Marie et Louis Pasteur, SUPMICROTECH, CNRS, Institut FEMTO-ST, F-25000 Besançon, France
2Department of Applied Mathematics and Electrical and Computer Engineering, University of Washington, Seattle USA


*Email: zhenxing.hu@femto-st.fr

Introduction

The sleep-onset period (SOP) exhibits dynamic and non-monotonous changes in the electroencephalogram (EEG), with high and so far poorly understood inter-individual variability. Computational models of the sleep regulation network have suggested that the transition to sleep can be viewed as a noisy bifurcation [1], at a saddle point determined by an underlying control signal or ‘sleep drive’. However, such models do not describe how internal control signals in the SOP can produce repeated switches between stable wake and sleep states. Hence, we proposed a minimal parameterized stochastic dynamic model (Fig. 1) inspired by the modelling of C. elegans' backward and forward motion.
Methods
We apply a data-driven embedding strategy for high-dimensional EEG time-frequency signals by interpolating the first SVD mode between wake and sleep states, paired with a parsimonious stochastic dynamical model with a quartic potential function, in which one slowly-varying control parameter drives the wake-to-sleep transition while exhibiting noise-driven bistability. We also provide a procedure based on Markov Chain Monte Carlo (MCMC) for estimating the parameters of the model given single observations of experimental sleep EEG data.
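A minimal sketch of such a bistable model under stated assumptions: Euler-Maruyama integration of a quartic double-well landscape whose tilt is slowly driven by the sleep drive. All parameter values are illustrative, not the fitted values of the study.

```python
import numpy as np

def simulate_sop(T=600.0, dt=0.01, sigma=0.35, drift_rate=0.004, seed=0):
    """Euler-Maruyama simulation of a quartic double-well landscape
    V(x) = x^4/4 - x^2/2 - c(t) x, with a slowly growing tilt c(t)
    driving the wake (x > 0) to sleep (x < 0) transition."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n); x[0] = 1.0                # start in the wake well
    for i in range(1, n):
        c = -drift_rate * i * dt               # sleep drive tilts the landscape
        force = -(x[i - 1] ** 3 - x[i - 1] - c)   # -dV/dx
        x[i] = x[i - 1] + force * dt + sigma * np.sqrt(dt) * rng.normal()
    return x
```

Near the tipping point, noise produces the repeated wake/sleep switches ("flickering") described below; the interplay of drift_rate and sigma spans the phenomenology reported in the Results.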
Results
In simulation, we found that the interaction between the rate of landscape change and the noise level could reproduce a wide variety of SOP phenomenology. Moreover, using the model to analyze a pre-existing sleep EEG dataset, we found that the estimated model parameters correlate with both subjective sleep reports and objective hypnogram metrics, suggesting that the bistable characteristics of the SOP influence the characteristics of subsequent sleep.
Discussion
Our findings extend and integrate several threads of prior research on SOP dynamics and modeling. Early mechanistic frameworks of sleep-wake regulation (e.g., the two-process model [2] and “flip-flop” switching circuits [3]) established the concept of bistable control of sleep and wake states, but these models usually involve many variables and parameters, making them difficult to fit directly to EEG data. Further, our model explicitly captures SOP dynamics through stochastic dynamical systems, which effectively characterizes the continuous and stochastic nature of sleep-onset phenomena observed empirically, including intermittent reversals or "flickering" between wake-like and sleep-like states.



Figure 1. Fig 1. Study overview. The sleep-onset period (SOP) has a strongly bistable phenomenology, marked by a non-monotonous decrease of the EEG frequency and high inter-individual variability, seen here in three illustrative spectrograms (top). We model the bistable cortical dynamics of the SOP with a minimally-parameterized stochastic dynamical system.
Acknowledgements
This work is supported by the Marie Skłodowska-Curie Actions (MSCA) Doctoral Networks ( Lullabyte).
References
[1] Yang, D. P., McKenzie-Sell, L., Karanjai, A., & Robinson, P. A. (2016). Wake-sleep transition as a noisy bifurcation. Physical Review E, 94(2), 022412. https://doi.org/10.1103/PhysRevE.94.022412
[2] Borbély, A. A., Daan, S., Wirz-Justice, A., & Deboer, T. (2016). The two-process model of sleep regulation: a reappraisal. Journal of Sleep Research, 25(2), 131-143. https://doi.org/10.1111/jsr.12371
[3] Lu, J., Sherman, D., Devor, M., & Saper, C. B. (2006). A putative flip–flop switch for control of REM sleep. Nature, 441(7093), 589-594. https://doi.org/10.1038/nature04767
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P115: Structural and Functional Brain Differences in Autistic Aging Using Graph Theoretic Analysis
Monday July 7, 2025 16:20 - 18:20 CEST
P115 Structural and Functional Brain Differences in Autistic Aging Using Graph Theoretic Analysis

Dominique Hughes*1, B. Blair Braden2, Sharon Crook1

1School of Mathematical and Statistical Sciences, Arizona State University, Tempe, Arizona, United States of America
2College of Health Solutions, Arizona State University, Tempe, Arizona, United States of America

*Email: dhughe13@asu.edu

Introduction

Recent research indicates that people with autism (ASD) have increased risk for early-onset dementia and other neurodegenerative diseases [1,2,3]. Prior research has found brain differences related to age between ASD and neurotypical (NT) populations, but the ways these differences contribute to increased risk during aging remain unclear [4,5,6]. Our work employs graph theory to analyze structural and functional brain scans from ASD and NT individuals. We use linear regression to identify brain graph measures where age by diagnosis interaction (ADI) is a significant factor in determining graph measure values.

Methods
We obtained T1, diffusion and functional MRI scans from 96 individuals aged 40-75 (n=48 ASD, mean age = 56.4; n=48 NT, mean age = 57.3). The TVB-UKBB and CONN data processing pipelines extract white matter tract weights and functional connectivity values, respectively, between regions listed in the Regional Map 96 brain parcellation [7,8,9]. We conduct 50% consensus thresholding to remove spurious weights. Strength values are computed using the Brain Connectivity Toolbox on the structural and functional connectivity matrices [10]. We conduct linear regression to determine whether the age by diagnosis interaction is a significant predictor of the strength values.
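A toy sketch of the strength and ADI regression steps, assuming statsmodels' formula interface: strength is the row sum of the weighted connectivity matrix, and the interaction term enters the model via the formula "age * dx". The data frame below is synthetic, and the coefficient name follows patsy's default treatment coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def strength(W):
    """Node strength: sum of connection weights per region."""
    return W.sum(axis=1)

# Synthetic data: one strength value per subject for a single region,
# with an ASD-specific age slope built in for illustration
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "age": rng.uniform(40, 75, 96),
    "dx": np.repeat(["ASD", "NT"], 48),
})
df["strength"] = rng.normal(10, 2, 96) - 0.05 * df["age"] * (df["dx"] == "ASD")

# Age-by-diagnosis interaction as a predictor of regional strength
model = smf.ols("strength ~ age * dx", data=df).fit()
print(model.pvalues["age:dx[T.NT]"])   # significance of the ADI term
```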
Results
For the structural graphs, ADI was a significant predictor (p<0.01) for strength values for areas of the right and left prefrontal cortex. For the functional graphs, ADI was a significant predictor for strength values for areas of the right prefrontal, parahippocampal, auditory, sensory, and premotor cortices, and the left prefrontal, gustatory and visual cortices.
Discussion
ADI significantly predicted functional strengths over a range of cortices, while structural effects were more selective and varied. Strength values for the prefrontal cortex in particular are significantly predicted by ADI in both structural and functional graph measures. The difference between functional and structural results demonstrates the complexity of identifying ASD-specific aging trajectories. To better understand how these measures may relate to increased cognitive decline in ASD, future work will analyze the relationship between these graph measures and cognition measures recorded from the same 96 individuals.




Acknowledgements
We would like to acknowledge funding sources for our project: the National Institute on Aging [P30 AG072980], the National Institute of Mental Health [R01MH132746; K01MH116098], the Department of Defense [AR140105], and the Arizona Biomedical Research Commission [ADHS16-162413].
References


1. https://doi.org/10.1186/s11689-015-9125-6
2. https://doi.org/10.1002/aur.2590
3. https://doi.org/10.1177/1362361319890793
4. https://doi.org/10.1016/j.rasd.2019.03.005
5. https://doi.org/10.1002/hbm.23345
6. https://doi.org/10.1016/j.rasd.2019.02.008
7. https://doi.org/10.3389/fninf.2022.883223
8. https://doi.org/10.1089/brain.2012.0073
9. https://doi.org/10.1002/hbm.23506
10. https://doi.org/10.1016/j.neuroimage.2009.10.003


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P116: Phase-locking patterns in oscillatory neural networks with diverse inhibitory populations
Monday July 7, 2025 16:20 - 18:20 CEST
P116 Phase-locking patterns in oscillatory neural networks with diverse inhibitory populations

Aïda Cunill1, Marina Vegué1, Gemma Huguet*1,2,3

1Department of Mathematics, Universitat Politècnica de Catalunya, Barcelona, Spain
2Institute of Mathematics Barcelona-Tech (IMTech), Universitat Politècnica de Catalunya, Barcelona, Spain
3Centre de Recerca Matemàtica, Barcelona, Spain

*Email: gemma.huguet@upc.edu

Introduction. Brain oscillations play a crucial role in cognitive processes, yet their precise function is not completely understood. Communication through coherence theory [1] suggests that rhythms regulate information flow between neural populations: to communicate effectively, neural populations must synchronize their rhythmic activity. Studies on gamma-frequency oscillations have shown that when input frequency exceeds the target oscillator's natural frequency, oscillators phase-lock in an optimal phase relationship for effective communication [2,3]. Inhibitory neurons play a crucial role in modulating cortical oscillations, and exhibit diverse biophysical properties. We explore theoretically how diverse inhibitory populations influence oscillatory dynamics.


Methods. We use exact mean-field models [4,5] to explore how different inhibitory populations shape cortical oscillations and influence neural communication. We consider a neural network that includes one excitatory population and two distinct inhibitory populations with a network connectivity inspired in cortical circuits. The network receives an external periodic excitatory input in the gamma frequency range, simulating the input from other oscillating neural populations. We use phase-reduction techniques to identify the phase-locked states between the input and the target population as a function of the amplitude, frequency and coherence of the inputs. We propose several factors to measure communication between neural oscillators.
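For concreteness, a single excitatory population of the exact QIF mean-field type (Montbrió et al., ref. [4]) driven by a periodic gamma-band input can be sketched as follows; the coupling, excitability and input values are illustrative, and the full model adds the two inhibitory populations and cortical connectivity.

```python
import numpy as np

def mpr_step(r, v, eta=-5.0, J=15.0, delta=1.0, I_ext=0.0, dt=1e-4):
    """One Euler step of the exact QIF mean-field: population firing
    rate r and mean membrane potential v (time unit set to 1)."""
    dr = delta / np.pi + 2.0 * r * v
    dv = v ** 2 + eta + J * r + I_ext - (np.pi * r) ** 2
    return r + dr * dt, v + dv * dt

# Drive the population with a periodic gamma-band input (illustrative values)
dt, T, f_in = 1e-4, 2.0, 40.0
r, v = 0.1, -2.0
rates = []
for i in range(int(T / dt)):
    I_ext = 3.0 * np.sin(2 * np.pi * f_in * i * dt)
    r, v = mpr_step(r, v, I_ext=I_ext, dt=dt)
    rates.append(r)
# Phase-locking between the input and the oscillation of `rates` is then
# characterised with the phase-reduction techniques described above.
```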
Results. We have developed a theoretical framework to study the conditions for effective communication, exploring the role of different types of inhibitory neurons. We compare phase-locking and synchronization properties in networks with either a single or two distinct inhibitory populations. In a network with a single inhibitory population, communication is only effective for inputs that are faster than the natural frequency of the target oscillator. The inclusion of a second inhibitory population with slower synapses expands the 1:1 phase-locking range to both higher and lower frequency inputs and improves the encoding of inputs with frequencies near the natural gamma rhythm of the target oscillator.
Discussion. Our results contribute to understanding how different types of inhibitory populations regulate the timing and coordination of neural activity through mean-field models and mathematical analysis. We identify the role of different types of inhibition in generating and maintaining distinct phase-locking patterns, which are essential for communication between brain regions.



Acknowledgements
Work produced with the support of the grant PID-2021-122954NB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and “ERDF: A way of making Europe”, the Maria de Maeztu Award for Centers and Units of Excellence in R&D (CEX2020-001084-M) and the AGAUR project 2021SGR1039.
References
1.https://doi.org/10.1016/j.neuron.2015.09.034
2.https://doi.org/10.1111/ejn.12453
3.https://doi.org/10.1371/journal.pcbi.1009342
4.https://doi.org/10.1103/PhysRevX.5.021028
5.https://doi.org/10.1371/journal.pcbi.1007019


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P117: Layer- and Area-Specific Dynamics and Function in Spiking Cortical Neuronal Networks
Monday July 7, 2025 16:20 - 18:20 CEST
P117 Layer- and Area-Specific Dynamics and Function in Spiking Cortical Neuronal Networks





M. Sharif Hussainyar1*, Dong Li1, Claus C. Hilgetag1


1Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany


*Email: m.hussainyar@uke.de



Introduction
The cerebral cortex exhibits significant regional diversity, with areas varying in neuron density and the morphology of specific layers [1]. A spectrum of cortical types ranges from agranular, lacking layer 4, to granular, with a well-differentiated layer 4 [2,3]. These structural differences are relevant to cortical connectivity [4,5] and information flow. Correspondingly, different cortical areas and layers exhibit distinct dynamics and functions [6] underlying their computational roles, with faster neuronal timescales supporting sensory processing and slower dynamics in association areas [7,8]. However, how structural variations across cortical types shape these properties remains unclear.




Methods
We developed a series of spiking network models to simulate different cortical types. Each model consists of leaky integrate-and-fire neurons organized into layers preserving critical structural features, such as the excitatory-inhibitory ratio, layer-specific neuron distributions, and interlaminar connections. To compare evolutionary cortical variations, we parameterized models for three distinct exemplars: rodents, non-human primates and humans, accounting for species-specific differences in cortical organization, neuronal density, and laminar structure patterns [9]. This approach allows us to examine how structural variations shape timescales and baseline activity across cortical types and species.
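For illustration only, a minimal two-population leaky integrate-and-fire sketch in Python/NumPy shows how a network timescale can be read off the population-rate autocorrelation; the parameters are toy values, not the species- or layer-specific ones used in the study:

import numpy as np
rng = np.random.default_rng(0)

# toy E/I LIF network (illustrative parameters only)
NE, NI = 160, 40
N, dt, steps = NE + NI, 1e-4, 20000
tau_m, v_th, v_reset = 20e-3, 1.0, 0.0
J = (rng.random((N, N)) < 0.1) * 0.05        # sparse random connectivity
J[:, NE:] *= -4.0                            # inhibitory columns: stronger and negative
v = rng.random(N)
rate = np.empty(steps)
for i in range(steps):
    spikes = v >= v_th
    v[spikes] = v_reset
    ext = 0.05 * rng.poisson(0.3, N)         # external Poisson drive
    v += -v * dt / tau_m + J @ spikes + ext
    rate[i] = spikes.mean()

# intrinsic timescale = 1/e decay point of the population-rate autocorrelation
r = rate - rate.mean()
ac = np.correlate(r, r, "full")[steps - 1:]
ac /= ac[0]
tau_net = np.argmax(ac < np.exp(-1)) * dt
print(f"network timescale ~ {tau_net * 1e3:.1f} ms")

Varying the connectivity statistics per layer and per cortical type in such a model is what lets the timescale and baseline rate be compared across granular and agranular architectures.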


Results
Fundamental dynamical properties such as timescale and baseline activity differ systematically between cortical types and layers. Granular types, exemplified by microcolumns in the visual system, exhibit shorter timescales than agranular types characteristic of association areas. These differential timescales imply functional specialization, where shorter timescales support rapid sensory processing, while longer timescales in agranular regions facilitate integrative functions requiring extended temporal windows. These findings align with experimental evidence and previous theoretical findings [6,8], and reinforce the hypothesis that structural variations shape cortical dynamics.
Discussion
Our findings confirm that structural variations shape cortical dynamics and function. The observed timescale differences between cortical types align with experimental data and support computational theories of functional specialization [8]. The cortical-type-based connectivities, together with the integrate-and-fire nature of cortical neurons, establish the foundation for area- and layer-specific cortical timescales and baseline activity. These, in turn, define the fundamental functional units by shaping how different cortical areas and layers process and integrate information from external inputs.






Acknowledgements
This work was funded in part by: SH: Landesforschungsförderung Hamburg (LFF)-FV76; DL: TRR 169-A2; CCH: Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), SFB 936, Project-ID 178316478-A1/Z3; Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 434434223, SFB 1461; DFG TRR-169 (A2).

References
[1]https://doi.org/10.1007/s00429-019-01841-9
[2]https://doi.org/10.3389/fnana.2014.00165
[3]https://doi.org/10.1016/j.neuroimage.2016.04.017
[4]https://doi.org/10.1371/journal.pbio.2005346
[5]https://doi.org/10.1093/cercor/7.7.635
[6]https://doi.org/10.1073/pnas.2415695121
[7]https://doi.org/10.1073/pnas.2110274119
[8]https://www.nature.com/articles/nn.3862
[9]https://doi.org/10.1007/s00429-022-02548-0
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P118: Simulating interictal epileptiform discharges in a whole-brain neural mass model with calcium-mediated bidirectional plasticity
Monday July 7, 2025 16:20 - 18:20 CEST
P118 Simulating interictal epileptiform discharges in a whole-brain neural mass model with calcium-mediated bidirectional plasticity

Mehmet Alihan Kayabas1, Elif Köksal-Ersöz2,3, Linda-Iris Joseph Tomy1, Pascal Benquet1, Isabelle Merlet1, Fabrice Wendling1
¹Univ Rennes, INSERM, LTSI – UMR 1099, Rennes F-35000, France
²Inria Lyon Research Centre, Villeurbanne 69603, France
³Cophy Team, Lyon Neuroscience Research Center, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Bron 69500, France
*Email: malihankayabas@gmail.com
Introduction

Whole-brain modeling of interictal epileptiform discharges offers a promising approach to optimize transcranial direct current stimulation (tDCS) protocols, by identifying specific regions of the brain involved in the epileptic network [1,2]. In this study, we investigated the synaptic plasticity induced by tDCS in a whole-brain network of connected neural mass models (NMMs) [3,4], which we extended by implementing the calcium-mediated synaptic plasticity mechanisms based on our recent study [5]. We studied the impact of two local parameters, the synaptic depression (θd) and potentiation (θp) thresholds, on long-term depression (LTD) and long-term potentiation (LTP) under tDCS.

Methods
The activity of each node of the network was simulated by NMMs including excitatory and inhibitory neuronal subpopulations. The nodes are interconnected by a structural connectivity matrix from the Human Connectome Project [6]. We tuned parameters of the NMMs to simulate interictal epileptiform discharges (IEDs), alpha-band activity (8-12 Hz), and background activity. We assumed that the electrical stimulation affects the mean membrane potential of excitatory neuronal subpopulations. We varied the depression and potentiation threshold parameters in different subnetworks and simulated the system for 15 min for each condition. Two metrics were evaluated: functional connectivity, calculated using a non-linear correlation coefficient, and mean amplitude per channel.
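A minimal sketch of a calcium-threshold bidirectional plasticity rule of the kind described (the functional form, rates, and threshold values below are illustrative assumptions, not the study's implementation):

import numpy as np

def plasticity_step(w, ca, theta_d, theta_p, dt, gamma_d=0.1, gamma_p=0.2):
    # calcium-threshold rule: intermediate calcium depresses, high calcium potentiates
    # (assumes theta_p > theta_d, as in calcium-control models of bidirectional plasticity)
    if ca > theta_p:
        w += gamma_p * (1.0 - w) * dt      # LTP drives the weight toward 1
    elif ca > theta_d:
        w -= gamma_d * w * dt              # LTD drives the weight toward 0
    return min(max(w, 0.0), 1.0)

# raising theta_d shrinks the LTD window; raising theta_p shrinks the LTP window
w = 0.5
for t in np.arange(0.0, 10.0, 0.01):
    ca = 0.6 + 0.3 * np.sin(2.0 * np.pi * t)   # toy calcium time course
    w = plasticity_step(w, ca, theta_d=0.5, theta_p=0.8, dt=0.01)
print(f"final weight: {w:.2f}")

Shifting θd or θp in this rule changes how much of the calcium trajectory falls in the LTD versus LTP window, which is the manipulation applied per subnetwork in the abstract.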
Results
Under most conditions, both the signal amplitude and node strength decreased (Fig. 1). The exception was the all_nodes_θd condition, in which the depression threshold was increased across all nodes; this reduced LTD activity, resulting in an increase in the strength of epileptic nodes. Regionally, the parietal nodes showed the most significant reductions, while the frontal nodes showed the least significant variations. An increase in the potentiation threshold across all nodes (all_nodes_θp condition) resulted in the largest reduction in both amplitude and strength. When both θd and θp were increased simultaneously, the decrease in the strength of epileptic nodes was even more pronounced, while increasing θp alone in the occipital nodes did not yield a reduction in epileptic node strength.
Discussion
Variation in synaptic plasticity thresholds alters whole-brain network dynamics. In nodes exhibiting alpha-band activity, decreased node strength lowers signal amplitude without changing frequency. In epileptogenic nodes, reduced node strength leads to lower IED frequency and desynchronization between two regions of the epileptogenic zone, while increased strength has the opposite effect. To date, there is no consensus in the literature on the effect of tDCS on alpha-band activity [7,8] or on IED frequency [9,10]. In future studies, we will investigate our model further to elucidate the mechanisms and role of tDCS treatment in focal epilepsy.




Figure 1. (A) Percentage difference in node strength relative to basal level. (B) Percentage difference in amplitude relative to basal level. (C) Examples of LFP signals for left lateral occipital (alpha) and precentral (epileptic) nodes before (blue) and after (red) the increase in potentiation threshold. θp: potentiation threshold; θd: depotentiation threshold. * denotes p-value < 0.05 (Kruskal-Wallis).
Acknowledgements
This project has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (No 855109).
References
1.https://doi.org/10.1093/med/9780199746545.003.0017
2.https://doi.org/10.1093/brain/awz269
3.https://doi.org/10.1016/j.softx.2024.101924
4. https://doi.org/10.1088/1741-2552/ac8fb4
5.https://doi.org/10.1371/journal.pcbi.1012666
6.https://doi.org/10.1016/j.neuroimage.2021.118543
7.https://doi.org/10.3389/fnhum.2013.00529
8.https://doi.org/10.1038/s41598-020-75861-5
9.https://doi.org/10.1016/j.brs.2016.12.005
10.https://doi.org/10.1016/j.eplepsyres.2024.107320
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P119: Computational modeling of the cumulative neuroplastic effects of repeated direct current stimulation
Monday July 7, 2025 16:20 - 18:20 CEST
P119 Computational modeling of the cumulative neuroplastic effects of repeated direct current stimulation

Linda-Iris Joseph Tomy1, Elif Köksal-Ersöz2,3, Mehmet Alihan Kayabas1, Pascal Benquet1, Fabrice Wendling1
¹Univ Rennes, INSERM, LTSI – UMR 1099, Rennes F-35000, France
²Inria Lyon Research Centre, Villeurbanne 69603, France
³Cophy Team, Lyon Neuroscience Research Center, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Bron 69500, France
*Email: malihankayabas@gmail.com
Introduction

Metaplasticity modulates the neuroplastic ability of neurons/synapses in order to maintain it within a functional physiological range. In conditions such as epilepsy, where neuroplasticity may evolve pathologically, this metaplastic property of synapses would also be disrupted [1]. Transcranial direct current stimulation (tDCS) is a non-invasive technique that can modulate neuroplasticity. Repeated sessions of tDCS can improve the likelihood of inducing seizure reduction in patients with refractory epilepsy [2]. The effect of tDCS on neuroplasticity has also been shown to depend on the ongoing neuronal activity and the neuroplastic properties of the stimulated brain regions.

Methods
Computational modeling [3] was used to identify ‘functional’ and ‘dysfunctional’ metaplastic conditions. The model consisted of an epileptogenic zone (EZ) connected to an irritated zone (IZ). We assumed that the potentiation threshold (ϴp) discriminated between the ‘functional’ and ‘dysfunctional’ metaplastic conditions. We evaluated the variation in connectivity strength by initiating the model from depressed and potentiated states for the metaplastic conditions, for different frequencies of interictal activity from the EZ. The effect of repeated tDCS was investigated. Variations in connectivity strength for different frequencies of ongoing neuronal activity were assessed by plotting the frequency response function (FRF).
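A toy FRF can be produced by driving a two-threshold calcium rule (as sketched for the companion abstract above) with periodic activity and recording the net weight change; calcium here is modelled as a simple leaky integrator of input events, and all constants are illustrative:

import numpy as np

def net_weight_change(freq_hz, theta_d=0.35, theta_p=0.9, tau_ca=0.05, dt=1e-3, T=30.0):
    # drive a two-threshold calcium rule with periodic events at freq_hz
    w, ca = 0.5, 0.0
    spike_every = max(1, int(round(1.0 / (freq_hz * dt))))
    for k in range(int(T / dt)):
        ca -= ca / tau_ca * dt                 # calcium decays...
        if k % spike_every == 0:
            ca += 0.5                          # ...and jumps with each activity event
        if ca > theta_p:
            w += 0.2 * (1.0 - w) * dt          # high calcium: potentiation
        elif ca > theta_d:
            w -= 0.1 * w * dt                  # intermediate calcium: depression
    return w - 0.5

# the FRF: net connectivity change as a function of ongoing activity frequency
for f in (1.0, 5.0, 10.0, 20.0, 50.0):
    print(f"{f:5.1f} Hz -> net weight change {net_weight_change(f):+.3f}")

Low-frequency drive keeps calcium in the depression window (net LTD) while high-frequency drive accumulates calcium past ϴp (net LTP); shifting ϴp shifts this crossover, which is the sense in which the FRF characterizes the metaplastic condition.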
Results
In the ‘functional’ metaplastic condition, the connectivity strength from EZ to IZ was prevented from being potentiated or evolved towards depression. In the ‘dysfunctional’ metaplastic condition, in contrast, the connectivity strength tended to evolve towards potentiation. Further, a decrease in ϴp led to the expansion of epileptic activity in this network. Under repetitive tDCS application, we observed a downward shift in the FRF, suggesting that repetitive tDCS could promote long-term depression.
Discussion
In this study, we explored how functional and dysfunctional metaplastic conditions affect neuroplasticity in an epileptic network. The impact of varying ϴp to switch between these metaplastic conditions reflected the relationship between metaplasticity and epileptogenicity, as also seen in animal studies [1]. Based on the variations in the FRF observed here, it may be possible to design tDCS protocols to depress the connectivity from the EZ to other IZs. This may then improve stimulation outcomes.



Acknowledgements
This project has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (No 855109).
References
1. https://doi.org/10.1371/journal.pcbi.1012666
2. https://doi.org/10.1155/2017/8087401
3. https://doi.org/10.1016/j.brs.2019.09.006


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P120: Critical neuronal avalanches emerge from excitation-inhibition balanced spontaneous activity
Monday July 7, 2025 16:20 - 18:20 CEST
P120 Critical neuronal avalanches emerge from excitation-inhibition balanced spontaneous activity

Maxime Janbon1, Mateo Amortegui1, Enrique Hansen1, Sarah Nourin1, Germán Sumbre1, Adrián Ponce-Alvarez*2,3,4


1Institut de Biologie de l’ENS (IBENS), Département de biologie, École normale supérieure, CNRS, INSERM, Université PSL, 75005 Paris, France
2Departament de Matemàtiques, Universitat Politècnica de Catalunya, 08028 Barcelona, Spain.
3Institut de Matemàtiques de la UPC - Barcelona Tech (IMTech), Barcelona, Spain.
4Centre de Recerca Matemàtica, Barcelona, Spain.


*Email: adrian.ponce@upc.edu

Introduction

Neural systems exhibit cascading activity patterns called neuronal avalanches that follow power-law statistics, a hallmark of critical systems. Theoretical models [1,2] and in vitro studies [3] suggest that the excitation-inhibition (E/I) balance is a key factor in the self-organization of criticality. However, how E and I dynamics evolve and interact during in vivo neuronal avalanches remains unclear.
Here, we investigated E and I neuron contributions to spontaneous neuronal avalanches using double-transgenic zebrafish expressing cell-type-specific fluorescence proteins and calcium indicators. Furthermore, we built a stochastic E-I network model to explore how critical avalanches depend on the E/I ratio.
Methods
We monitored spontaneous neuronal activity in the optic tectum of 10 zebrafish larvae using selective-plane illumination microscopy (SPIM). Double-transgenic larvae expressing GCaMP6f and mCherry under the Vglut2a promoter (glutamatergic) were combined with immunostaining for GABAergic and cholinergic neurons, allowing identification of excitatory (E), inhibitory (I), and cholinergic (Ch) neurons.
We modelled the collective activity of E and I neurons using a model of critical dynamics that combines stochastic Wilson-Cowan equations [1,4], spatially embedded neuronal connectivity, and a spike-to-fluorescence convolutional model. Critical avalanches arise through balanced amplification [1] at a phase transition.
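As a hedged sketch of the modelling idea (not the study's spatially embedded network), a stochastic Wilson-Cowan E/I pair placed near a balanced critical point produces threshold-crossing avalanches whose size statistics can be measured directly; all parameters below are toy values:

import numpy as np
rng = np.random.default_rng(1)

# stochastic Wilson-Cowan-type E/I pair near a balanced, weakly stable point
w_ee, w_ei, w_ie, w_ii = 2.0, 1.05, 2.0, 1.0
tau, dt, sigma, steps = 10.0, 0.1, 0.05, 100000
f = lambda x: np.clip(x, 0.0, 1.0)
E, I, trace = 0.2, 0.2, np.empty(steps)
for t in range(steps):
    xe, xi = sigma * rng.standard_normal(2)
    E += dt / tau * (-E + f(w_ee * E - w_ei * I + 0.01 + xe))
    I += dt / tau * (-I + f(w_ie * E - w_ii * I + xi))
    trace[t] = E

# avalanches = excursions of E above a threshold; size = integrated excess activity
th = trace.mean() + trace.std()
above = trace > th
edges = np.diff(above.astype(int))
starts, ends = np.where(edges == 1)[0] + 1, np.where(edges == -1)[0] + 1
if above[0]: starts = np.r_[0, starts]
if above[-1]: ends = np.r_[ends, len(trace)]
sizes = np.array([(trace[s:e] - th).sum() for s, e in zip(starts, ends)])
print(f"{sizes.size} avalanches; size CV = {sizes.std() / sizes.mean():.2f}")

With these couplings the linearized dynamics sit close to a zero-determinant (balanced-amplification) point, so small E/I imbalances visibly reshape the avalanche-size distribution, mirroring the sensitivity analysis described in the abstract.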
Results
Our results show that spontaneous fluctuations in E and I activity influenced neuronal avalanche statistics in the zebrafish optic tectum. Neuronal avalanches approached criticality when excitatory and inhibitory activity were balanced. Notably, the model accurately captured the observed avalanche statistics and their sensitivity to E/I fluctuations around a critical point defined by balanced excitatory and inhibitory synaptic strengths. Furthermore, the model allowed us to evaluate the statistics of neuronal avalanches derived from different simulated signals, representing calcium events or spiking activity. For both signals, the model's critical exponents align with experimental findings from calcium imaging and electrophysiology [5].
Discussion
Extensive research underscores the functional benefits of E/I balance and critical dynamics. Balanced networks enhance signal amplification, response selectivity, noise reduction, stability, memory, and plasticity [6-8], while critical dynamics optimize information processing [9-11]. Here, we showed that neuronal avalanche statistics and their dependence on spontaneous E/I fluctuations in the zebrafish optic tectum align with a model reaching criticality for balanced E and I couplings. Our study provides a framework to dissect the relationship between criticality and E/I balance, by manipulating the E/I ratio in vivo. Future integration of optogenetics into the present experiments and model will further clarify this interplay.



Acknowledgements
This study was supported by the Project PID2022-137708NB-I00 funded by MICIU/AEI/10.13039/501100011033 and FEDER, UE. A. Ponce-Alvarez was supported by a Ramón y Cajal fellowship (RYC2020-029117-I) funded by MICIU/AEI/10.13039/501100011033 and “ESF Investing in your future”. G. Sumbre was supported by ERC CoG 726280.
References
1.https://doi.org/10.1371/journal.pcbi.1000846
2.https://doi.org/10.1523/JNEUROSCI.5990-11.2012
3.https://doi.org/10.1523/JNEUROSCI.4637-10.2011
4.https://doi.org/10.1371/journal.pcbi.1008884
5.https://doi.org/10.1126/sciadv.adj9303
6.https://doi.org/10.1088/0954-898X_6_2_001
7.https://doi.org/10.1126/science.274.5293.1724
8.https://doi.org/10.1016/j.neuron.2011.09.027
9.https://doi.org/10.1177/1073858412445487
10.https://doi.org/10.1523/JNEUROSCI.3864-09.2009
11.https://doi.org/10.1016/j.neuron.2018.10.045
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P121: Effects of the nonlinearity, kinetics and size of the gap junction connections on the transient dynamics of coupled glial cells
Monday July 7, 2025 16:20 - 18:20 CEST
P121 Effects of the nonlinearity, kinetics and size of the gap junction connections on the transient dynamics of coupled glial cells

Predrag Janjic*1, Dimitar Solev1, Ljupco Kocarev 1

1Research Center for Computer Science and Information Technologies, Macedonian Academy of Sciences and Arts, Skopje, North Macedonia

*Email: predrag.a.janjic@gmail.com
Introduction- The complex structure of massive couplings among glial cells is not fully resolved neurobiologically, preventing realistic quantitative models. Despite ongoing ultrastructural studies and illuminating published research focusing on glia [1], statistical data on the number of gap junction (GJ) connections and their size cannot yet be extracted. The nonlinear dependence of GJ conductance on the transjunctional voltage Vj, its slow kinetics, and the GJ size effect [2] on junction polarization suggest a rich repertoire of transient dynamic instabilities of resting glia when invaded by spreading depolarizations. Known limitations of glial electrophysiology in situ for measuring GJ-coupled cells warrant qualifying suitable models.

Methods- We introduce a detailed point model of a coupled astrocytic cell, including several currents in the membrane kinetics and nonlinear coupling with inactivation kinetics. Using the paradigm of a single active site in a 1-d array of coupled astrocytes, the main focus was on describing the bifurcations of the resting voltage Vr in the inner cell. Timescale separation allowed simplifying assumptions that enable formulating an ODE model of a "self-coupled cell" (SCC). For stability analysis of such a model, the second cell is connected to a depolarized immediate neighbor on one side and a still-quiet cell on the other side, both represented as fixed voltages Vdr and Vr. The numerical simulations were done on a connected 1-d array.
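A minimal sketch of the "self-coupled cell" idea, with the inner cell coupled through voltage-dependent gap junctions to a depolarized and a resting neighbour held at fixed voltages; the Boltzmann conductance curve and all constants are illustrative assumptions, not the paper's calibrated model:

import numpy as np

def gj_conductance(vj, g_max=1.0, v_half=30.0, k=8.0):
    # steady-state gap-junction conductance: declines with |Vj| (mV), with a residual floor
    return g_max * (0.2 + 0.8 / (1.0 + np.exp((abs(vj) - v_half) / k)))

# inner cell between a depolarized neighbour (Vdr) and a quiet neighbour at rest (Vr)
Vr, Vdr = -80.0, -20.0        # mV, fixed boundary voltages
g_m, C, dt = 0.5, 1.0, 0.01   # leak conductance, capacitance, time step (arbitrary units)
v = Vr
for _ in range(20000):
    i_leak = g_m * (v - Vr)
    i_dep = gj_conductance(Vdr - v) * (v - Vdr)   # coupling to the depolarized side
    i_rest = gj_conductance(Vr - v) * (v - Vr)    # coupling to the resting side
    v += -dt * (i_leak + i_dep + i_rest) / C
print(f"steady-state voltage of the inner cell: {v:.1f} mV")

Because the junctional conductances themselves depend nonlinearly on the transjunctional voltages, the steady state is not a simple conductance-weighted average, which is what opens the door to the multistability analysed in the abstract.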
Results- We explored the stability of the SCC in the case of an altered steady-state I-V curve, displaying N-shaped nonlinearity and a generically present saddle-node (S-N) structure [3]. The newly introduced resting membrane potential (RMP) is markedly more depolarized. The separate N-shaped nonlinearity introduced by the coupling enriched the S-N structure under all parameter perturbations. Typical cases were the appearance of (a) a fold limit-cycle window within the range of the fold curve, accompanied by noise-induced bistability switching, or (b) a stable limit cycle in the moderate coupling-strength range. Not all of the dynamical regimes observed in the SCC survive in numerical simulations of a 1-d array, but in all cases we observed a traveling front for the corresponding parameters.
Discussion- Emerging evidence from voltage imaging suggests that astrocytes do not respond dynamically as a homogeneous compartment, displaying strong variations in depolarization between the collateral processes, or when compared to the cell body. In the case of altered I-V curves, they are generically prone to multistability. We observed enriched multistability scenarios in the passive response of GJ-coupled astrocytes under very basic conditions. We believe this motivates adding a further level of biophysical detail to the GJ connections and the topology of glial networks. Such groundwork is needed to extend glial models with the advanced dynamical features of neuromodulation of their glutamate and GABA transporters and receptors.



Acknowledgements
The authors are grateful for the experimental recordings from isolated astrocytes shared by Prof. Christian Steinheauser, from the Institute of Cellular Sciences (IZN), School of Medicine, University of Bonn, Germany. PJ and DS were partially funded by R01MH125030 from the National Institute of Mental Health in the US.
References
1. Aten, S., et al. (2022). Ultrastructural view of astrocyte arborization, astrocyte-astrocyte and astrocyte-synapse contacts, intracellular vesicle-like structures, and mitochondrial network. Prog Neurobiol, 213, 102264.
2. Wilders, R., & Jongsma, H. J. (1992). Limitations of the dual voltage clamp method in assaying conductance and kinetics of gap junction channels. Biophys J, 63(4), 942-953.
3. Janjic, P., Solev, D., & Kocarev, L. (2023). Non-trivial dynamics in a model of glial membrane voltage driven by open potassium pores. Biophys J, 122(8), 1470-1490.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P122: Encoding visual familiarity for navigation in a mushroom body SNN trained on ant-perspective views
Monday July 7, 2025 16:20 - 18:20 CEST
P122 Encoding visual familiarity for navigation in a mushroom body SNN trained on ant-perspective views

Oluwaseyi Oladipupo Jesusanmi1,2, Amany Azevedo Amin2, Paul Graham1,2, Thomas Nowotny2
1Sussex Neuroscience, University of Sussex, Brighton, United Kingdom
2Sussex AI, University of Sussex, Brighton, United Kingdom
*Email: o.jesusanmi@sussex.ac.uk

Introduction

Ants can learn long visually guided routes with limited neural resources, mediated by the mushroom body brain region acting as a familiarity detector [1,2]. In the mushroom body, low-dimensional input from sensory regions is projected into a large population of neurons, producing sparse representations of input information. These representations are learned via an anti-Hebbian process, modulated through dopaminergic learning signals. In navigation, the mushroom bodies guide ants to seek similar views to those previously learned on a foraging route. Here, we further investigate the role of mushroom bodies in ants’ visual navigation with a spiking neural network (SNN) model and 1:1 virtual recreations of ant visual experiences.
Methods
We implemented the SNN model in GeNN [3]. It has 320 Visual Projection Neurons (VPNs), 20,000 Kenyon Cells (KCs), one Inhibitory Feedback Neuron (IFN) and one Mushroom Body Output Neuron (MBON). We used DeepLabCut to track ant trajectories in behavioural experiments. We used phone camera input to Neural Radiance Field (NeRF) and photogrammetry software for environment reconstruction. We used Isaac Sim and NVIDIA Omniverse to recreate views along ants’ movement trajectories from the perspective of the ants. We trained the SNN and comparator models (perfect memory and infomax [4]) on these recreations. We modelled inference across all traversable areas of the environment to test each model’s ability to encode navigational information.
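A toy rate-based version of the mushroom-body familiarity computation (random sparse VPN-to-KC expansion, anti-Hebbian depression of KC-to-MBON synapses) can be sketched as follows; the layer sizes follow the abstract, while the top-5% activity threshold standing in for the IFN and all other details are illustrative assumptions:

import numpy as np
rng = np.random.default_rng(2)

n_vpn, n_kc = 320, 20000
W_in = rng.binomial(1, 0.02, (n_kc, n_vpn)).astype(float)   # sparse random VPN->KC wiring
w_out = np.ones(n_kc)                                       # KC->MBON weights, depressed by learning

def kc_response(view, sparsity=0.05):
    # sparse KC code: only the most strongly driven cells fire (stand-in for the IFN)
    drive = W_in @ view
    return (drive >= np.quantile(drive, 1.0 - sparsity)).astype(float)

def train(view, lr=0.5):
    # anti-Hebbian learning: synapses of active KCs onto the MBON are depressed
    global w_out
    w_out -= lr * kc_response(view) * w_out

def familiarity(view):
    # low MBON output = familiar view (its KCs were depressed during training)
    return w_out @ kc_response(view)

route_views = [rng.uniform(0, 1, n_vpn) for _ in range(20)]
for v in route_views:
    train(v)
novel = rng.uniform(0, 1, n_vpn)
print(f"trained view: {familiarity(route_views[0]):.0f}   novel view: {familiarity(novel):.0f}")

Evaluating familiarity() over a grid of simulated viewpoints is, in miniature, how a "familiarity landscape" of the kind reported in the Results can be produced.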
Results
We produced familiarity landscapes for our SNN mushroom body and comparator models, showing differences between how they encode off-route (unlearned) locations. The mushroom body model produced navigation accuracy comparable to the other models. We found that the mushroom body model activity was able to explain trajectory data in trials where ants reached the target location. We found that some views resulting in high familiarity did not appear in the training set; these views have similar image statistics to images in the training set, even when the view is from a different place in the environment. Finally, ant routes with higher rates of oscillation improved learning, “filling in” more of the familiarity landscapes.
Discussion
How the mushroom body would respond across all locations in a traversable environment is not known and is normally not feasible to study. Neural recording in ants remains difficult, and there are limited methods to have an ant systematically experience an entire experimental arena. We addressed this issue via simulation of biologically plausible neural activity while having exact control of what the model sees. Visual navigation models have been compared with mushroom body models in terms of navigation accuracy, but the familiarity landscape produced by the varied models has not been compared. Our investigation provides insights into how encoding of familiarity differs and leads to accurate navigation between models.



Acknowledgements
References
1. https://doi.org/10.1016/J.CUB.2020.07.013
2. https://doi.org/10.1016/J.CUB.2020.06.030
3. https://doi.org/10.1038/srep18854
4. https://doi.org/10.1162/isal_a_00645


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P123: Innovative Strategies to Balance Speed and Accuracy in P300-ERP Detection for Enhanced Online Brain-Computer Interfaces
Monday July 7, 2025 16:20 - 18:20 CEST
P123 Innovative Strategies to Balance Speed and Accuracy in P300-ERP Detection for Enhanced Online Brain-Computer Interfaces

Javier Jiménez*1, Francisco B. Rodríguez1

1Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
*Email: javier.jimenez01@uam.es

Introduction

Brain-Computer Interfaces (BCIs) interpret signals the brain generates to control devices. These signals can be related to known Event-Related Potentials (ERPs) registered with neuroimaging methods such as electroencephalography [1]. However, ERP detection requires many trials due to its low signal-to-noise ratio [2]. This detection method leads to the well-known speed-accuracy trade-off [3], as each trial adds to the time required for evoking ERPs. We propose a methodology for analyzing this trade-off using two new measures to find the best number of trials for the required accuracy. Finally, these measures were assessed using a P300-ERP dataset [4] to explore their potential as additional early-stopping methods in future online BCI setups.
Methods
In the literature, the speed-accuracy trade-off is usually studied with BCI measures such as the Information-Transfer Rate (ITR) [5]. However, these measures combine speed and accuracy within a single value, hindering the separate evaluation of a BCI's speed and accuracy. Considering the two concepts separately may be of interest to BCI users, who would be able to decide whether they prefer a fast or an accurate BCI in different scenarios. This work introduces two measures, called Gain and Conservation, which quantify the amount of saved time and preserved accuracy, respectively, against a baseline BCI employing a Bayesian Linear Discriminant Analysis (BLDA) classifier to detect P300s.
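Gain and Conservation are defined in the paper itself; as context, the baseline ITR they are contrasted with is the standard Wolpaw formula, sketched here with assumed timing (the 2.4 s per trial, the six classes, and the accuracies are hypothetical values, not the dataset's):

import numpy as np

def wolpaw_itr(p, n_classes, trial_time_s):
    # Wolpaw information-transfer rate in bits/min for accuracy p over n_classes
    if p <= 0 or p >= 1:
        bits = np.log2(n_classes) if p == 1 else 0.0
    else:
        bits = (np.log2(n_classes) + p * np.log2(p)
                + (1 - p) * np.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / trial_time_s

# fewer trials -> faster decisions but lower accuracy: the trade-off that
# Gain and Conservation unpack into separate speed and accuracy terms
for n_trials, acc in [(2, 0.70), (5, 0.85), (10, 0.95)]:
    print(f"{n_trials:2d} trials: acc={acc:.2f}, ITR={wolpaw_itr(acc, 6, 2.4 * n_trials):.1f} bits/min")

The single ITR number can rank a fast, inaccurate BCI and a slow, accurate one identically, which is exactly the ambiguity the two separate measures are designed to remove.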
Results
The new measures were tested on the Hoffmann et al. dataset [4], employing a BLDA classifier to detect P300s in combination with a traditional fixed-stop strategy, i.e., stopping after a fixed number of trials, to evaluate the speed and accuracy of BCIs. For this paradigm, the expected behaviour of these measures is to follow the speed-accuracy trade-off: faster BCIs correspond to less accurate BCIs and vice versa, because faster BCIs employ fewer trials and therefore have access to less information, leading to worse P300 detection performance. This behaviour can be seen in Fig. 1, where the speed and accuracy of a BCI are represented by the Gain and Conservation, respectively.
Discussion
The described framework proposes two measures capable of evaluating a BCI's speed and accuracy separately, in contrast with combined measures such as the ITR [5]. With these new measures, designers and users are provided with a controllable way to optimize BCIs towards different goals, prioritizing one measure over the other on demand. Furthermore, these measures offer detailed insights into the behaviour of different BCIs and early-stopping strategies [3], among other applications. To conclude, these measures can be tracked during BCI operation, which represents a key future direction of this work: leveraging the speed-accuracy trade-off of BCIs online.



Figure 1. Pseudo-online evolution across trials, from Hoffmann et al. [4], of the normalized Gain and Conservation measures for a fixed-stop strategy, compared against its ITR.
Acknowledgements
This work was supported by the Predoctoral Research Grants of the Universidad Autónoma de Madrid (FPI-UAM) and by PID2023-149669NB-I00 (MCIN/AEI and ERDF – “A way of making Europe”).
References
1. https://doi.org/10.1016/0013-4694(88)90149-6
2. https://doi.org/10.1016/j.neuroimage.2010.06.048
3. https://doi.org/10.1088/1741-2560/10/3/036025
4. https://doi.org/10.1016/j.jneumeth.2007.03.005
5. https://doi.org/10.1016/S1388-2457(02)00057-3
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P124: Computational Prediction and Empirical Validation of Enhanced LTP Effects with Gentle iTBS Protocols
Monday July 7, 2025 16:20 - 18:20 CEST
P124 Computational Prediction and Empirical Validation of Enhanced LTP Effects with Gentle iTBS Protocols

Kevin Kadak*1,2, Davide Momi1, Zheng Wang1, Sorenza P. Bastiaens1,2, Mohammad P. Oveisi1,3, Taha Morshedzadeh1,2, Minarose Ismail1,4, Jan Fousek5, and John D. Griffiths1,2,6
1Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto.
2Institute of Medical Sciences, University of Toronto
3Institute of Biomaterials and Biomedical Engineering, University of Toronto
4Department of Physiology, University of Toronto
5Central European Institute of Technology, Czech Republic
6Department of Psychiatry, University of Toronto
*Email: kevin.kadak@mail.utoronto.ca
Introduction

TMS is an established neuromodulatory technique for inducing and assessing cortical excitability changes. Intermittent theta-burst stimulation (iTBS), mimicking endogenous neural activity, yields clinical efficacy comparable to traditional protocols but with significantly shorter treatment durations [1,2]. Despite widespread use for depression treatments, iTBS suffers from high inter-subject response variability. We developed a computational model integrating calcium-dependent plasticity within corticothalamic circuitry to predict optimal iTBS parameters, subsequently validating these predictions through empirical testing of motor-evoked potentials (MEPs) across novel and canonical protocols.

Methods
Our computational model simulated iTBS-induced plasticity effects following 600 pulses in corticothalamic circuitry by varying pulse-burst ratios and inter-burst frequency parameters. We then conducted a mixed-measure experimental paradigm testing standard (Protocol A) and four novel iTBS protocols (B-E; 2-5 pulse-burst, 3-7 Hz). MEPs were recorded pre-stimulation (PRE) and post-stimulation (POST1, POST2) to assess induced plasticity effects. Mixed-effects modelling was performed to analyze group-level effects and response rates.
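For concreteness, the protocol parameter space can be illustrated with a simple iTBS pulse-train generator; the timing structure of canonical iTBS is assumed (50 Hz pulses within bursts, 2 s trains, 8 s pauses), and the "gentle" variant below mirrors Protocol C's 3 pulses/burst at 3 Hz:

import numpy as np

def itbs_train(pulses_per_burst=3, intra_burst_hz=50.0, inter_burst_hz=5.0,
               train_s=2.0, inter_train_s=8.0, total_pulses=600):
    # pulse times (s): bursts repeat at inter_burst_hz for train_s, then a pause,
    # until total_pulses have been delivered
    times, t = [], 0.0
    while len(times) < total_pulses:
        for b in range(int(train_s * inter_burst_hz)):
            burst_t0 = t + b / inter_burst_hz
            for p in range(pulses_per_burst):
                times.append(burst_t0 + p / intra_burst_hz)
                if len(times) >= total_pulses:
                    return np.asarray(times)
        t += train_s + inter_train_s
    return np.asarray(times)

std = itbs_train()                                            # standard protocol (A)
gentle = itbs_train(pulses_per_burst=3, inter_burst_hz=3.0)   # Protocol C-style: 3 pulses, 3 Hz
print(f"standard: {std[-1]:.0f} s for 600 pulses; gentle: {gentle[-1]:.0f} s")

Lowering the inter-burst frequency while holding the pulse count fixed stretches the session, which is the sense in which the "gentler" protocols trade treatment time for the plasticity gains reported below.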
Results
Our model predicted that gentler stimulation protocols characterized by lower pulse-burst ratios and targeted inter-burst frequencies would maximize long-term potentiation (LTP) effects while reducing response variance. Empirical results confirmed these predictions, with Protocol C (3 pulses/burst, 3 Hz) capturing the highest response rate (60% vs 47% for standard iTBS) and Protocol B (2 pulses/burst, 5 Hz) driving the strongest LTP effects among responders. Notably, protocols with frequencies aligned to participants' alpha subharmonics further modulated plasticity effects in Protocol B, while higher-frequency protocols (Protocol D, 7 Hz) initially induced LTD, which later inverted to LTP.
Discussion
Our findings demonstrate that gentler protocols outperform standard iTBS in driving consistent LTP effects, with efficacy further modulated by resonance between stimulation frequency and endogenous alpha subharmonics. This research highlights an important mechanistic basis for induced plasticity effects pertaining to protocol intensity, whereby lower-intensity protocols appear to better engage neuroplasticity mechanisms and mitigate metaplastic saturation. We provide a mechanistic framework and empirical validation for enhancing LTP protocols and improving clinical outcomes in TMS treatments for neuropsychiatric disorders.



Acknowledgements
N/A
References

1. https://doi.org/10.1016/j.biopsych.2007.01.018
2. https://doi.org/10.1016/S0140-6736(18)30295-2




Speakers
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P125: Modeling the biophysics of computation in the outer plexiform layer of the mouse retina
Monday July 7, 2025 16:20 - 18:20 CEST
P125 Modeling the biophysics of computation in the outer plexiform layer of the mouse retina

Kyra L. Kadhim*1, 2, Ziwei Huang1, Michael Deistler2, 3, Jonas Beck1, 2, Jakob H. Macke1, 2, 3, Thomas Euler4, Philipp Berens1, 2


1Hertie Institute for AI in Brain Health, University of Tübingen, Tübingen, Germany
2Tübingen AI Center, University of Tübingen, Tübingen, Germany
3Machine Learning in Science, University of Tübingen, Tübingen, Germany
4Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany

*Email: kyra.kadhim@uni-tuebingen.de
Introduction

The outer retina is a complex system of neurons that processes visual information, and it is experimentally accessible for the collection of multimodal data. What makes this system complex and nonlinear are mechanisms such as the phototransduction cascade, specialized ion channels, ephaptic feedback mechanisms, and the ribbon synapse [1]. These mechanisms are not typically included in network models that either fit neural data or perform tasks. In particular, optimizing the parameters of such models is computationally challenging with current modelling approaches, which do not include gradient-based optimization methods. However, ignoring such mechanisms limits the ability to capture the computations performed by the retina.
Methods
We developed a fully differentiable, biophysically detailed model of the outer plexiform layer of the mouse retina and optimized its parameters with gradient descent. We implemented our model using the new software library Jaxley [2], which inherits functionality from the state-of-the-art machine learning library JAX. In our model, we have so far implemented the phototransduction cascade [3], ion channels [4], and ribbon synapse [5], and we fit their parameters to electrophysiology and neurotransmitter release data. We then optimized the synaptic conductances of the model to classify images with different contrast and global luminance levels, and analyzed the trained parameter distributions.
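A minimal stand-in for this differentiable-fitting workflow (using finite-difference gradients on a toy first-order photocurrent model rather than Jaxley/JAX autodiff) looks like this; all parameter values are illustrative:

import numpy as np

def response(params, stim, dt=1e-3):
    # toy first-order photocurrent: r' = (-r + gain * stim) / tau
    gain, tau = params
    r, out = 0.0, np.empty(len(stim))
    for i, s in enumerate(stim):
        r += dt / tau * (-r + gain * s)
        out[i] = r
    return out

def loss(log_params, stim, target):
    resp = response(np.exp(log_params), stim)
    return np.mean((resp - target) ** 2) / np.mean(target ** 2)

rng = np.random.default_rng(0)
stim = (rng.random(500) > 0.9).astype(float)               # sparse light flashes
target = response(np.array([2.0, 0.05]), stim)             # "data" from known parameters

log_p, eps, lr = np.log(np.array([1.0, 0.1])), 1e-4, 0.3   # fit in log space (keeps params > 0)
for _ in range(200):
    grad = np.array([(loss(log_p + eps * e, stim, target)
                      - loss(log_p - eps * e, stim, target)) / (2 * eps)
                     for e in np.eye(2)])
    log_p -= lr * grad
gain, tau = np.exp(log_p)
print(f"recovered gain={gain:.2f} (true 2.0), tau={tau * 1e3:.0f} ms (true 50 ms)")

Autodiff replaces the finite-difference loop in the real pipeline, which is what makes fitting hundreds of conductances at once tractable.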
Results
We successfully trained our model of a single photoreceptor with gradient descent and found phototransduction cascade parameters that fit the electrophysiology data from Chen and colleagues [3], as well as parameters of the ribbon synapse model that fit glutamate release data from Szatko, Korympidou, and colleagues [6]. We then built a network of photoreceptors with these trained parameters and a horizontal cell, and we trained the network’s 200 synaptic conductances to classify images in which contrast and global luminance levels were distorted. The model was able to classify these images despite these distortions, providing further evidence that the structure of the outer retina facilitates contrast normalization.
Discussion
Biophysical models are capable of implementing a variety of computations that are often attributed to larger neural networks higher in the sensory processing hierarchy. For instance, the fitted model of the phototransduction cascade enables a layer of photoreceptors to adapt to drastically different global luminance levels [3] while at the same time regulating glutamate release consistent with data. Our model, fit to multimodal data, can also classify images with different contrasts using very few trainable parameters. This small but biophysically-inspired network may support many other computations as well, broadening our appreciation of the outer retina.



Acknowledgements
Hertie Stiftung, DFG, ERC Starting Grants NextMechMod and DeepCoMechTome.
References
1. https://doi.org/10.1016/C2019-0-00467-0
2. https://doi.org/10.1101/2024.08.21.608979
3. https://doi.org/10.7554/eLife.93795.1
4. https://doi.org/10.1016/j.visres.2009.03.003
5. https://doi.org/10.7554/eLife.67851
6. https://doi.org/10.1038/s41467-020-17113-8


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P126: Predictive Coding in the Drosophila Optic Lobe
Monday July 7, 2025 16:20 - 18:20 CEST
P126 Predictive Coding in the Drosophila Optic Lobe

Rintaro Kai*1, Naoya Nishiura*1, Keisuke Toyoda*1, Masataka Watanabe*1
1The University of Tokyo
Introduction

In recent years, the complete connectome of the fruit fly has been revealed [1], and estimation of its synaptic efficacies via backpropagation training has led to the reconstruction of T4/T5 motion-selective cells [2]. However, in that study [2], optical flow, which is not biologically available, was provided as the vector teaching signal. Here, we used the complete connectome of the fruit fly and implemented Predictive Coding [3] by calculating the error between two tightly coupled cells, namely L1 and C3. The results demonstrate the potential of training the full connectome neural circuitry using only biologically available teaching signals, such as the sensory input itself.
Methods
From the FlyWire dataset, we extracted the connectivity of the neurons and 2,700,000 synapses in the right optic lobe, together with the neurotransmitters present at each synapse, and created a single-layer RNN. The output function of each neuron was clipped, and the weights were normalized per postsynaptic neuron. Photoreceptor neurons received simulated natural video stimuli based on the shape of the fruit fly's eyes; the stimuli then propagated to downstream neurons at each timestep. The network was trained using the mean squared error between the outputs of the anatomically close L1 and C3 neurons, creating a simple autoencoder using Predictive Coding [3]. Additionally, the activity of neurons at each timestep was visualized to ensure appropriate behavior.
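A drastically scaled-down sketch of the setup (fixed connectome mask and synapse signs, clipped single-layer RNN, L1/C3-style mismatch loss); the random-search weight updates below are only a stand-in for the gradient training used in the study, and all sizes and constants are illustrative:

import numpy as np
rng = np.random.default_rng(3)

# toy connectome-constrained RNN: fixed wiring mask and synapse signs, clipped outputs
n = 200
mask = rng.random((n, n)) < 0.05                        # stand-in for connectome wiring
sign = np.where(rng.random((n, n)) < 0.7, 1.0, -1.0)    # stand-in for transmitter signs
W = 0.5 * rng.random((n, n)) * mask * sign              # trainable synaptic efficacies
l1, c3 = np.arange(0, 10), np.arange(10, 20)            # stand-ins for the L1 / C3 cells

def l1_c3_error(W, stim_seq):
    r, err = np.zeros(n), 0.0
    for s in stim_seq:
        inp = np.zeros(n); inp[:50] = s                 # first 50 cells act as photoreceptors
        r = np.clip(W @ r + inp, 0.0, 1.0)              # clipped output function
        err += np.mean((r[l1] - r[c3]) ** 2)            # predictive-coding-style mismatch
    return err / len(stim_seq)

stim = rng.random((20, 50))                             # a short "video"
for _ in range(100):                                    # random search stands in for BPTT
    dW = 1e-2 * rng.standard_normal((n, n)) * mask
    if l1_c3_error(W + dW, stim) < l1_c3_error(W, stim):
        W = W + dW
print(f"final L1/C3 mismatch: {l1_c3_error(W, stim):.4f}")

The key constraint is that only efficacies are trained: the wiring mask and signs stay fixed to the connectome, so no artificial neurons or circuits are introduced.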
Results
The learning of the task was successful, and the error converged to a very low value. Neurons other than those used for error calculation also showed appropriate activity, indicating that the network functioned effectively as a whole. Parameter tuning mattered for the modeling settings: depending on the parameters, we also observed regimes in which neuron outputs became constant regardless of input.

Discussion
The results of this study show that it is possible to perform unsupervised learning on the full connectome by taking errors between pairs of neurons, without incorporating artificial neurons or circuits. Future prospects include verifying whether the neuronal activity of the obtained model is biologically valid, examining the biological significance of the hyperparameters, and testing whether network behavior and the distribution of neuron roles can be robustly replicated compared to random initialization.



Acknowledgements
This work has been supported by the Mohammed bin Salman Center for Future Science and Technology for Saudi-Japan Vision 2030 at The University of Tokyo (MbSC2030) and JSPS KAKENHI Grant Number 23K25257.
References
[1] Dorkenwald, Sven et al. (2024). Neuronal wiring diagram of an adult brain. Nature, 634(8032), 124-138.
[2] Lappalainen, Janne K. et al. (2024). Connectome-constrained networks predict neural activity across the fly visual system. Nature, 634(8036), 1132-1140.
[3] Rao, Rajesh P. N. & Ballard, Dana H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79-87.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P127: Disentangling Temporal and Amplitude-Driven Contributions to Signal Complexity
Monday July 7, 2025 16:20 - 18:20 CEST
P127 Disentangling Temporal and Amplitude-Driven Contributions to Signal Complexity

Sara Kamali¹, Fabiano Baroni¹, Pablo Varona¹


¹ Department of Computer Engineering, Autonomous University of Madrid, Madrid, Spain


*Email: sara.kamali@uam.es
Introduction

Quantifying complexity in biomedical signals is crucial for physiological and pathological analysis. Entropy-based methods, like Shannon entropy [1], approximate entropy [2], and sample entropy (SampEn) [3], quantify unpredictability. Some approaches, including increment-based methods [4,5], capture entropies from amplitude variations. Existing methods, however, do not distinguish complexity derived from temporal dynamics from that derived from amplitude fluctuations. This limitation restricts insights into the dynamical evolution of signals. We introduce Extrema-Segmented Entropy (ExSEnt), an entropy-based framework that independently analyzes temporal and amplitude components, enhancing understanding of the underlying dynamics.
Methods
We segmented the time series based on extrema; each segment starts at the data point after the current extremum and ends at the next extremum. Two key features were extracted per segment: duration, representing the temporal length, and net amplitude, reflecting the overall signal variation. We then computed SampEn for each feature separately, as well as their joint bivariate entropy, to assess whether they provide independent or correlated information. This approach helps determine whether complexity arises primarily from temporal dynamics or amplitude variations. Our method enhances the understanding of how different factors drive signal complexity.
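A compact sketch of the ExSEnt pipeline under the stated definitions (extrema segmentation, then SampEn of durations and net amplitudes); the SampEn implementation and tolerance below are standard textbook choices, not necessarily the authors':

import numpy as np

def sampen(x, m=2, r_frac=0.2):
    # sample entropy: -ln(A/B) with template length m and tolerance r_frac * std
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(t[:, None] - t[None, :]).max(axis=2)    # Chebyshev distances
        return ((d <= r).sum() - len(t)) / 2               # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def exsent(signal):
    # segment at extrema; return SampEn of segment durations and of net amplitudes
    s = np.asarray(signal, float)
    d = np.diff(s)
    extrema = np.where(d[:-1] * d[1:] < 0)[0] + 1          # sign changes of the slope
    durations = np.diff(extrema)
    amplitudes = s[extrema][1:] - s[extrema][:-1]
    return sampen(durations), sampen(amplitudes)

rng = np.random.default_rng(4)
white = rng.standard_normal(1000)
brown = np.cumsum(rng.standard_normal(1000))
for name, sig in (("white noise", white), ("Brownian motion", brown)):
    t_en, a_en = exsent(sig)
    print(f"{name}: temporal ExSEnt={t_en:.2f}, amplitude ExSEnt={a_en:.2f}")

Because durations and amplitudes are analyzed as separate series, a signal can score high on one axis and low on the other, which is exactly the decomposition the framework is built to expose.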
Results
Application of ExSEnt to synthetic data revealed the ability of the metrics to distinguish between different random signals, i.e., Gaussian noise, pink noise, and Brownian motion. We also evaluated the complexity of well-known dynamical systems, such as the Rulkov neuron model, where ExSEnt successfully differentiated between different dynamical regimes. Evaluation of electromyography (EMG) signals during a motor task revealed that movement intervals exhibit lower amplitude complexity but relatively stable temporal entropy compared to the baseline. A strong linear correlation was observed between amplitude ExSEnt and joint ExSEnt, suggesting that amplitude variations are the primary contributors to the joint amplitude-temporal EMG complexity.
Discussion
The ExSEnt framework offers a precise and systematic approach to quantifying temporal and amplitude-driven contributions to complexity, providing a novel perspective for biomedical and neuronal signal analysis. Applying ExSEnt to neural data demonstrates its potential to reveal hidden dependencies between duration and amplitude fluctuations, providing a detailed complexity profile. This approach aids in quantifying dynamic changes and identifying complexity sources in neural disorders and physiological states.




Acknowledgements
Work funded by PID2024-155923NB-I00, CPP2023-010818, and PID2021-122347NB-I00.
References
[1] https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
[2] https://doi.org/10.1073/pnas.88.6.2297
[3] https://doi.org/10.1016/S0076-6879(04)84011-4
[4] https://doi.org/10.3390/e20030210
[5] https://doi.org/10.3390/e18010022
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P128: Electrodiffusion and voltage dynamics in the periaxonal space with spatially detailed finite-element simulations
Monday July 7, 2025 16:20 - 18:20 CEST
P128 Electrodiffusion and voltage dynamics in the periaxonal space with spatially detailed finite-element simulations

Tim M. Kamsma*1,2, R. van Roij1, Maarten H.P. Kole3,4

1Institute for Theoretical Physics, Utrecht University, Utrecht, The Netherlands
2Mathematical Institute, Utrecht University, Utrecht, The Netherlands
3Department of Axonal Signalling, Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences, Amsterdam, The Netherlands
4Cell biology, Neurobiology and Biophysics, Department of Biology, Faculty of Science, Utrecht University, Utrecht, The Netherlands


*Email: t.m.kamsma@uu.nl
Introduction

The submyelin, or periaxonal, space was long considered to be an inert region of the internode. This view has been revised over recent years, as evidence accumulated that the periaxonal region plays important roles in both the electrical saltatory conduction of the action potential [1] and in chemical axo-myelinic cell signalling [2]. The nanoscale dimensions of the periaxonal space make experimental investigations into its electrochemical dynamics extremely challenging. Traditional cable-theory models, though informative for electrical properties [1], provide neither the spatial resolution nor the appropriate ionic transport equations to resolve the complex electrodiffusion profiles inherent to such highly confined geometries.


Methods
To investigate the electrochemical dynamics of axon-myelin spaces, we developed a computational model that employs detailed finite-element simulations to numerically resolve first-principles ion transport equations within a biologically accurate geometry of a myelinated axon. Membrane currents were implemented through standard Hodgkin-Huxley-like voltage-dependent ion channel equations, while outside of the membrane all concentrations and voltages were fully governed by the Poisson-Nernst-Planck equations. These coupled physical equations were numerically solved with the software package COMSOL. The results were compared to traditional simulations using a double-cable model of the NEURON software package.
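For intuition, a one-dimensional Nernst-Planck update for periaxonal K+ with a prescribed potential profile can be written in a few lines; the full model instead solves the coupled Poisson-Nernst-Planck system self-consistently in a realistic geometry, and the domain size, potential drop, and initial concentration bump below are illustrative:

import numpy as np

nx, L = 100, 10e-6                    # grid points, domain length 10 um
dx = L / nx
D = 1.96e-9                           # K+ diffusion coefficient (m^2/s)
VT = 25.7e-3                          # thermal voltage kT/e (V)
dt = 0.2 * dx**2 / D                  # explicit stability limit
c = np.full(nx, 3.0)                  # mM, baseline periaxonal [K+]
c[40:60] += 10.0                      # local K+ released by an action potential
phi = np.linspace(0.0, -5e-3, nx)     # assumed 5 mV drop along the space

for _ in range(2000):
    # flux at cell faces: diffusion + electromigration (valence z = +1 for K+)
    dcdx = np.diff(c) / dx
    dphidx = np.diff(phi) / dx
    c_face = 0.5 * (c[1:] + c[:-1])
    flux = -D * (dcdx + c_face * dphidx / VT)
    c[1:-1] -= dt * np.diff(flux) / dx    # conservative update; sealed ends
print(f"peak [K+] after relaxation: {c.max():.2f} mM")

The electromigration term is what cable theory omits: in a space only nanometres thick, the drift of K+ along the local field can rival diffusion, which is why the full electrodiffusion treatment changes the predicted clearance dynamics.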

Results
Our computational model autonomously generated biophysically accurate action potentials and spatially resolved all ionic and voltage dynamics. Without clearance mechanisms, periaxonal potassium accumulation of up to ~10 mM was predicted for a single action potential. Consequently, we investigated and revealed possible potassium clearance pathways via the oligodendrocyte-myelin complex. More generally, as all physical quantities are fully resolved with high spatial resolution, this model can flexibly provide other desired insights within the entire modelled geometry. Furthermore, molecular transport, chemical reactions, and fluid flow can be coupled to the same model, which therefore can serve as a versatile platform for future expansions.

Discussion
Although our simulations can probe regions that are experimentally difficult to access, the model still required parameter inputs and is therefore also constrained by the limited experimental data. Future simulations and biological 3D EM data will need to advance in tandem to fully investigate the dynamics in this region. The geometry of the model assumed a rotational symmetry, which considerably simplified the model, but is not entirely biologically accurate. Lastly, we did not resolve the physics within membranes, as this requires molecular scale simulations. However, since phenomenological Hodgkin-Huxley-like membrane current equations are well-tested, we expect that the modelled ionic fluxes are quantitatively accurate.





Acknowledgements
This work was supported by the Science for Sustainability Graduate Programme of Utrecht University.
References
1. Cohen, C. C., Popovic, M. A., Klooster, J., Weil, M. T., Möbius, W., Nave, K. A., & Kole, M. H. (2020). Saltatory conduction along myelinated axons involves a periaxonal nanocircuit. Cell, 180(2), 311-322. https://doi.org/10.1016/j.cell.2019.11.039
2. Micu, I., Plemel, J. R., Caprariello, A. V., Nave, K. A., & Stys, P. K. (2018). Axo-myelinic neurotransmission: a novel mode of cell signalling in the central nervous system. Nature Reviews Neuroscience, 19(1), 49-58. https://doi.org/10.1038/nrn.2017.128
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P129: Of mice and men: Dendritic architecture differentiates human from mice neuronal networks
Monday July 7, 2025 16:20 - 18:20 CEST
P129 Of mice and men: Dendritic architecture differentiates human from mice neuronal networks

Lida Kanari∗1, Ying Shi1,5, Alexis Arnaudon1, Natalí Barros-Zulaica1, Ruth Benavides-Piccione2, Jay S. Coggan1, Javier DeFelipe2, Kathryn Hess3, Huib D. Mansvelder4, Eline J. Mertens4, Julie Meystre5, Rodrigo de Campos Perin5, Maurizio Pezzoli5, Roy T. Daniel6, Ron Stoop7, Idan Segev8, Henry Markram1 and Christiaan P.J. de Kock4

1Blue Brain Project, Ecole Polytechnique Federale de Lausanne (EPFL), Geneva, Switzerland.
2Laboratorio Cajal de Circuitos Corticales, Universidad Politecnica de Madrid and Instituto Cajal (CSIC), Madrid, Spain
3Laboratory for Topology and Neuroscience, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
4Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
5Laboratory of Neural Microcircuitry, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
6Department of Clinical Neurosciences, Neurosurgery Unit, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
7Center for Psychiatric Neurosciences, Department of Psychiatry, Lausanne University Hospital Center, Lausanne, Switzerland
8Department of Neurobiology and Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel


* Email: lida.kanari@gmail.com

Introduction
The organizational principles that distinguish the human brain from other species have been a long-standing enigma in neuroscience. Numerous studies have investigated the correlations between intelligence and neuronal density [1], cortical thickness [2], gyrification [3], and dendritic architecture [4]. However, despite extensive endeavors to unravel its mysteries, numerous aspects of our unique characteristics remain elusive. Alongside the several other factors that contribute to human intelligence, in this study [5] we demonstrate that the shapes of dendrites are an important indicator of network complexity that cannot be disregarded in our quest to identify what makes us human.


Results
Using experimental pyramidal cell reconstructions [6], we built representative mouse and human cortical networks (Fig. 1). We integrated experimental data, taking into account the lower cell density in human cortical layers 2 and 3 [7,8] and the greater interneuron percentage in the human cortex [9]. Human pyramidal cells form highly complex networks (Fig. 1C), demonstrated by the increased number and dimension of simplices compared to mice. Simple dendritic scaling cannot explain the species-specific connectivity differences. Topological comparison of dendritic structure reveals much higher perisomatic (basal and oblique) branching density in human pyramidal cells (Fig. 1D), impacting network complexity.

Methods
The Topological Morphology Descriptor [10] represents the neuronal morphology as a persistence barcode, using topological data analysis to characterize the shapes of neurons. Scaling transformations were analyzed to compare mouse and human neurons, with optimization via gradient descent. The connectivity was computed using computational modeling of cortical layers 2 and 3 [11], approximating the set of potential connections in mouse and human cortex. Memory capacity was analyzed based on dendritic processing models [12].
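A simplified version of the TMD-style barcode computation on a rooted tree (radial distance from the soma as the filtration function; at each branch point the longer-reaching branch survives and the others die) can be sketched as follows; the toy tree is hypothetical:

import numpy as np

def tmd_barcode(parent, radial):
    # parent[i] = parent index (parent[root] = -1); radial[i] = distance of node i from the soma
    n = len(parent)
    children = [[] for _ in range(n)]
    for i, p in enumerate(parent):
        if p >= 0:
            children[p].append(i)
    bars = []
    def descend(i):
        # return the largest radial value carried up from the subtree below node i
        if not children[i]:
            return radial[i]
        carried = sorted(descend(c) for c in children[i])
        for v in carried[:-1]:               # all but the strongest branch die at this merge
            bars.append((v, radial[i]))
        return max(carried[-1], radial[i])
    bars.append((descend(0), radial[0]))     # the longest branch persists to the soma
    return bars

# a toy neuron: soma -> bifurcation -> two terminals at different radial distances
parent = [-1, 0, 1, 1]
radial = [0.0, 50.0, 120.0, 80.0]
print(tmd_barcode(parent, radial))           # [(80.0, 50.0), (120.0, 0.0)]

Each (birth, death) bar records one branch's radial extent, so barcodes from human and mouse dendrites can be compared directly, e.g., revealing the denser perisomatic branching reported in the Results.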

Discussion
Despite lower neuronal density, human pyramidal cells establish higher-order interactions via their distinct dendritic topology, forming complex networks. This enhanced connectivity is supported by interneurons, which maintain excitation-inhibition balance. The increased dendritic complexity of human pyramidal cells correlates with increased memory capacity, suggesting its role in computational efficiency. Rather than increasing neuron count, human brains prioritize single-neuron complexity to optimize network function. Our findings highlight dendritic morphology as a key determinant of network performance, shaping cognition and future research directions.



Figure 1. Fig1: Multiscale comparison of mouse and human brains, from brain regions to single neurons (A). Greater network complexity (C) emerges in human networks despite the lower neuron density (B), correlating with the higher dendritic complexity of human pyramidal cells. Our findings suggest that dendritic complexity (D) is more substantial for network complexity than neuron density.
Acknowledgements
BBP, EPFL, by ETH Board by SFIT. H.D.M. and C.d.K. by grant awards U01MH114812, UM1MH130981-01 from NIMH, grant no. 945539 (HBP SGA3) Horizon 2020 Framework, NWO 024.004.012, ENW-M2, OCENW.M20.285. R.S. by SNSF (IZLSZ3\_148803, IZLIZ3\_200297, IZLCZ0_206045, 31003A_138526) and Synapsis Foundation (2020-PI02). J.D.F. and R.B.P. by PID2021-127924NB-I00(MCIN/AEI/10.13039/501100011033).


References
[1]https://doi.org/10.3389/neuro.09.031.2009
[2]https://doi.org/10.1016/j.intell.2013.07.010
[3]http://dx.doi.org/10.1016/j.cub.2016.03.021
[4]https://doi.org/10.1016/j.tics.2022.08.012
[5]https://doi.org/10.1101/2023.09.11.557170
[6]https://doi.org/10.1093/cercor/bhv188
[7]https://doi.org/10.1023/A:1024130211265
[8]https://doi.org/10.1023/a:1024134312173
[9]https://doi.org/10.1126/science.abo0924
[10]https://doi.org/10.1007/s12021-017-9341-1
[11]https://doi.org/10.1016/j.cell.2015.09.029
[12]https://doi.org/10.1016/S0896-6273(01)00252-5


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P130: Manifold Inference by Maximising Information: Hypothesis-driven extraction of CA1 neural manifolds via information theory
Monday July 7, 2025 16:20 - 18:20 CEST
P130 Manifold Inference by Maximising Information: Hypothesis-driven extraction of CA1 neural manifolds via information theory

Michael G. Kareithi*1, Mary Ann Go1, Pier Luigi Dragotti2, Simon R. Schultz1

1 Department of Bioengineering, Imperial College London, London, United Kingdom
2Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom

*Email: m.kareithi21@imperial.ac.uk

Introduction
Neural manifolds have been a useful concept for understanding cognition, with recent work showing the importance of "hypothesis-driven" analyses: linking behaviour with manifolds via supervised manifold learning [1]. Linear dimensionality reduction methods are easier to interpret than their nonlinear counterparts, but often can only detect linear correlations in neural activity. From an information-theory perspective, a natural approach to supervised manifolds is to maximise Mutual Information between the embedding and the target variable. We use simple linear embeddings with an information-theoretic objective function: Quadratic Mutual Information [2], and apply it as a tool for hypothesis-driven manifold learning in mouse hippocampus.
Methods
Quadratic Mutual Information (QMI) is derived from Renyi entropy and divergence, a broader family of measures than Shannon entropy and mutual information (MI); the latter are special cases of the former. Like MI, QMI has the desirable property of being zero if and only if the variables are independent. Its advantage is that it can be estimated from high-dimensional data and is differentiable. We fit a linear projection from activity to a lower-dimensional subspace by maximising the QMI between the projection and a target variable. We call our framework Manifold Inference by Maximising Information (MIMI). We apply MIMI to two-photon calcium recordings in mouse CA1 during a 1D running task.
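A sketch of the QMI estimator (the Euclidean-distance form from information-theoretic learning, with Parzen Gaussian kernels) and a MIMI-style linear projection fitted by gradient ascent; finite differences stand in for analytic/autodiff gradients, and the ring-coded toy data are illustrative:

import numpy as np
rng = np.random.default_rng(5)

def qmi(x, y, sigma=1.0):
    # quadratic mutual information (Euclidean-distance form) with Parzen Gaussian kernels
    kx = np.exp(-((x[:, None] - x[None, :]) ** 2).sum(-1) / (2 * sigma**2))
    ky = np.exp(-((y[:, None] - y[None, :]) ** 2).sum(-1) / (2 * sigma**2))
    v_joint = (kx * ky).mean()
    v_marginal = kx.mean() * ky.mean()
    v_cross = (kx.mean(axis=1) * ky.mean(axis=1)).mean()
    return v_joint + v_marginal - 2.0 * v_cross        # zero iff x and y independent

# toy CA1-like data: 20 cells carrying a noisy ring code for track position
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
y = np.c_[np.cos(theta), np.sin(theta)]                # target: position on the circular track
pop = y @ rng.standard_normal((2, 20)) + 0.3 * rng.standard_normal((200, 20))

# MIMI-style fit: linear projection A maximising QMI(pop @ A, position)
A = 0.1 * rng.standard_normal((20, 2))
eps, lr = 1e-3, 5.0
for _ in range(40):
    grad = np.zeros_like(A)
    for idx in np.ndindex(*A.shape):
        dA = np.zeros_like(A); dA[idx] = eps
        grad[idx] = (qmi(pop @ (A + dA), y) - qmi(pop @ (A - dA), y)) / (2 * eps)
    A += lr * grad
print(f"QMI of the fitted 2-D projection: {qmi(pop @ A, y):.4f}")

Because the embedding stays linear, the fitted weights in A remain directly interpretable as cell-assembly loadings, which is the point made in the Discussion.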
Results
In our dataset, mice run continuously along a circular track [3]. We fit MIMI on calcium fluorescence activity, with the animal's angular position as the target variable, cross-validating with a 75%-25% train-test split. In four out of eight mice we find the majority of position information in a 2-3 dimensional subspace (Fig 1.a). In the sessions without informative subspaces, the linear decodability of position is low even from the full population activity, indicating the absence of a population code (Fig 1.e). The informative subspaces contain ring-shaped manifolds mapping continuously onto the animal's physical coordinates (Fig 1.f).
Discussion
Combining information-theoretic measures with linear embeddings is a useful idea for analysing populations, where our aim is not only to find manifold structure, but to understand how cell assemblies coordinate to sculpt it. MIMI shows that we can find behaviourally-informative manifolds without nonlinear embeddings: only a nonlinear measure of dependence. Downstream analysis can then pose questions about representation by examining the linear transformation weights: for example, asking if two variables are represented orthogonally. We believe MIMI will be a useful framework for interpretable, hypothesis-driven manifold analysis.




Figure 1. A) Explained position variance (R-squared of ridge-regressor, left) and Mutual Information (right) between position and MIMI projection at different dimensionalities. Each line is an individual mouse. B) Position-variance explained by full population vs by MIMI subspace. C) Activity in MIMI subspace for four mice with informative subspaces, coloured by associated position of mouse.
Acknowledgements
References
1.https://doi.org/10.1038/s41586-023-06031-6
2.https://doi.org/10.1007/978-1-4419-1570-2_2
3.https://doi.org/10.3389/fncel.2021.618658
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P131: Personalized Computational Models for Selective and Impact-Driven Brain Stimulation
Monday July 7, 2025 16:20 - 18:20 CEST
P131 Personalized Computational Models for Selective and Impact-Driven Brain Stimulation

Fariba Karimi*1,2, Taylor Newton1, Melanie Steiner1, Antonino Cassara1, Niels Kuster1,2, Esra Neufeld1



1IT’IS Foundation, Zürich, Switzerland
2Swiss Federal Institute of Technology (ETH Zurich), Zürich, Switzerland
3Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva and Sion, Switzerland
4Clinical Neuroscience, University Medical School of Geneva, Geneva, Switzerland


*Email: karimi@itis.swiss

Introduction

Non-invasive brain stimulation (NIBS) offers promising therapeutic avenues for a range of neurological conditions. However, inter-subject response variability remains an important challenge, often limiting its widespread clinical adoption. Here, we present a computational pipeline designed to optimize NIBS by harnessing personalized brain network dynamics modeling, towards enhancing both the efficacy and predictability of therapeutic outcomes.

Methods
We developed a comprehensive pipeline on the o2S2PARC platform (see Fig. 1). The pipeline utilizes MRI and diffusion-weighted imaging (DWI) data to construct detailed head models (>40 distinct tissue types) through AI segmentation, performs electromagnetic (EM) simulations to determine exposure-induced electric fields and personalized lead field matrices, and predicts the impact of diverse stimulation conditions on brain network dynamics using personalized neural mass models (NMMs; derived from DWI structural connectivity data and simulated using The Virtual Brain (TVB) [1] framework). Combined with the personalized lead fields, the brain network models permit the synthesis of virtual EEG signals that can be compared with measured data.
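As a toy illustration of this last step (ours, not the pipeline's code): node dynamics driven by a stimulation waveform are projected to virtual sensors through a lead field. The connectivity matrix, lead field, node model, and all parameters below are stand-ins; the actual pipeline uses TVB neural mass models and EM-derived lead fields.

```python
# Toy sketch: stimulated rate network -> virtual EEG via a lead field.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_sensors, dt, n_steps = 10, 32, 1e-3, 2000
tau, stim_amp, stim_freq = 0.05, 0.5, 10.0        # illustrative values

C = rng.random((n_nodes, n_nodes)) * 0.1          # stand-in for DWI connectivity
L = rng.standard_normal((n_sensors, n_nodes))     # stand-in for EM lead field
target = np.zeros(n_nodes); target[3] = 1.0       # stimulated node

x = np.zeros(n_nodes)
eeg = np.zeros((n_steps, n_sensors))
for t in range(n_steps):
    stim = stim_amp * np.sin(2 * np.pi * stim_freq * t * dt) * target
    x += dt / tau * (-x + C @ np.tanh(x) + stim)  # leaky rate dynamics
    eeg[t] = L @ x                                # virtual EEG signal
```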
Results
Using the developed pipeline, we implemented a temporal interference stimulation planning (TIP) tool for optimizing electrode locations for temporal interference stimulation (TIS, a recently introduced transcranial electric stimulation method capable of targeted stimulation at depth). Demonstration applications of our pipeline predicted shifts in EEG spectral responses following transcranial alternating current stimulation (tACS) in accordance with theoretical and empirical data. Additionally, our simulations revealed dynamic fluctuations of inter-hemispheric synchronization in accordance with experimental observations. These results underscore our pipeline's potential in modeling real-world brain responses to NIBS [3].

Discussion
We established a fully automated computational pipeline for personalized NIBS modeling and the optimization of dynamic brain network response predictions. This pipeline underscores the shift from generic exposure-targeting approaches to a personalized, impact-driven (network dynamics) approach, towards improving the efficacy and precision of NIBS therapies. Current research focuses on the continuous inference of improved model parameters based on measurement feedback and model-predictive control. This work lays the groundwork for adaptive and effective brain dynamics modulation for the treatment of complex neurological disorders, marking a significant advance in the personalized medicine landscape [3].






Figure 1. Schematic representation of the developed pipeline on the o2S2PARC platform
Acknowledgements
--
References
1. https://doi.org/10.1016/j.neuroimage.2015.01.002
2. https://doi.org/10.1109/TNSRE.2012.2200046
3. https://doi.org/10.1088/1741-2552/adb88f
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P132: Brain Symphony: A Transformer-Driven Fusion of fMRI Time Series and Structural Connectivity
Monday July 7, 2025 16:20 - 18:20 CEST
P132 Brain Symphony: A Transformer-Driven Fusion of fMRI Time Series and Structural Connectivity

Moein Khajehnejad*1,2, Adeel Razi1,2,3

1Turner Institute for Brain & Mental Health, Monash University, Melbourne, Australia
2Monash Data Futures Institute, Monash University, Melbourne, Australia
3Wellcome Centre for Human Neuroimaging, University College London, United Kingdom

*Email: moein.khajehnejad@monash.edu


Introduction
Understanding brain function requires integrating multimodal neuroimaging data to capture temporal dynamics and pairwise interactions. We propose a novel foundation model fusing fMRI time series, structural connectivity, and effective connectivity graphs derived using Dynamic Causal Modeling (DCM) [1] to obtain robust, interpretable region-of-interest (ROI) embeddings. Our approach enables robust representation learning that generalizes across datasets and supports downstream tasks such as disease classification or detecting neural alterations induced by psychedelics. Additionally, our model identifies the most influential brain regions and time intervals, facilitating interpretability in neuroscience applications.
Methods


Our framework employs two self-supervised encoders. The fMRI encoder utilizes a Spatio-Temporal Transformer to model dynamic ROI embeddings. The connectivity encoder incorporates a Graph Transformer [2] and systematically evaluates multiple advanced graph-based approaches—signed Graph Neural Networks [3], Graph Attention Networks with edge sign awareness [4] and Message Passing Neural Networks with edge-type features [5]—to determine the most effective strategy for capturing excitatory and inhibitory connections for the DCM-derived graphs. To preserve causal semantics, we compare and adapt sign-aware attention and positional encodings using signed Laplacian, random walk differences, and global relational encodings, selecting the most suitable method based on empirical performance. Cross-modal attention integrates the learned embeddings from both encoders, ensuring seamless fusion across modalities. The model is pretrained on the HCP dataset, utilizing both fMRI time series and structural connectivity, and remains adaptable for other datasets incorporating different connectivity measures.
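A minimal sketch of the cross-modal fusion step (our illustration in PyTorch; the module name, dimensions, and residual-plus-norm layout are our assumptions, not the actual model code): ROI embeddings from the fMRI encoder attend to embeddings from the connectivity encoder.

```python
# Cross-modal attention sketch: fMRI ROI embeddings query connectivity embeddings.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, fmri_emb, conn_emb):
        # fmri_emb, conn_emb: (batch, n_rois, d_model)
        fused, _ = self.attn(query=fmri_emb, key=conn_emb, value=conn_emb)
        return self.norm(fmri_emb + fused)   # residual connection + norm

fusion = CrossModalFusion()
out = fusion(torch.randn(8, 200, 64), torch.randn(8, 200, 64))
```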
Results
We pretrained the model on 900 HCP participants, testing it on 67 held-out subjects and an independent psilocybin dataset (54 participants) [6]. Fig. 1a shows accurately reconstructed fMRI time series for a test subject. Fig. 1b presents reconstructed functional and structural connectivity maps, capturing both dynamic and anatomical relationships. Fig. 1c visualizes low-dimensional ROI embeddings before and after psilocybin administration, revealing clear shifts only in subjects with strong subjective effects (i.e., high MEQ scores), indicating the model's ability to capture neural alterations. This dataset was not part of pretraining, emphasizing strong transferability and generalizability.



Discussion
This scalable, interpretable framework advances multimodal integration of fMRI and distinct connectivity representations, enhancing classification and causal insight. Future work will compare diffusion-based structural connectivity with DCM-derived effective connectivity to assess the impact of causal representations on robustness in noisy datasets with latent confounders.






Figure 1. Reconstruction and representation capabilities of the multimodal foundation model. (a) Reconstructed fMRI time series for a test subject, demonstrating model accuracy. (b) Reconstructed functional and structural connectivity maps, capturing dynamic and anatomical relationships. (c) Low-dimensional ROI representations before and after psilocybin with greater shifts in high MEQ subjects.
Acknowledgements
A.R. is affiliated with The Wellcome Centre for Human Neuroimaging, supported by core funding from Wellcome [203147/Z/16/Z]. A.R. is a CIFAR Azrieli Global Scholar in the Brain, Mind & Consciousness Program.
References
[1] https://doi.org/10.1016/S1053-8119(03)00202-7
[2] https://doi.org/10.48550/arXiv.2106.05234
[3] https://doi.org/10.1109/ICDM.2018.00113
[4] https://doi.org/10.48550/arXiv.1710.10903
[5] https://doi.org/10.48550/arXiv.1704.01212
[6] https://doi.org/10.1101/2025.03.09.642197
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P133: A Recursive Stability Model of Qualia: Philosophical Self-reference, Neural Attractor Structures, and Experimental Exploration in LLMs
Monday July 7, 2025 16:20 - 18:20 CEST
P133 A Recursive Stability Model of Qualia: Philosophical Self-reference, Neural Attractor Structures, and Experimental Exploration in LLMs

Chang-Eop Kim
Department of Physiology, College of Korean Medicine, Gachon University, 1342 Seongnam-daero, Seongnam 13120, Republic of Korea

Email: eopchang@gachon.ac.kr

Introduction

Qualia represent a fundamental challenge in consciousness research, defined as inherently subjective experiences that resist objective characterization. Philosophically, qualia have been proposed to possess self-referential characteristics, aligning conceptually with Douglas Hofstadter’s "strange loop" theory, which suggests subjective experience might arise from recursive structures [1]. However, explicit mathematical and empirical formulations of this concept remain scarce.

Methods
We developed a mathematical formalization of qualia using recursive stability, identifying fixed-point states reflecting neural circuits recursively referencing their outputs. Neuroscientific literature was reviewed to identify biological phenomena potentially implementing recursive stability. Additionally, analogous candidate structures were explored within artificial neural networks, particularly focusing on attention mechanisms in large language models (LLMs).
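A minimal numerical illustration of the fixed-point notion (our sketch, not the paper's formalism): a network that recursively feeds its output back to itself settles into a state s* = tanh(W s*), i.e., a state defined by reference to itself. The damped update and all parameters are our assumptions.

```python
# Recursive stability sketch: iterate a self-referential map to a fixed point.
import numpy as np

rng = np.random.default_rng(1)
n = 50
W = rng.standard_normal((n, n)) / np.sqrt(n)
W = (W + W.T) / 2.0                       # symmetric, Hopfield-like weights

s = rng.standard_normal(n)
alpha = 0.2                               # damped update for stable convergence
for _ in range(2000):
    s_next = (1 - alpha) * s + alpha * np.tanh(W @ s)
    if np.linalg.norm(s_next - s) < 1e-10:
        break                             # fixed point reached
    s = s_next
# at convergence, s satisfies s = tanh(W s): a self-referential state
```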
Results
The mathematical formulation effectively captured essential characteristics of subjective conscious experiences, including their inherent immediacy and the necessary equivalence between existence and self-awareness. Neuroscientific literature suggested candidate biological structures, such as hippocampal CA3 attractor networks indirectly supporting self-referential episodic memory, and sustained-activity circuits in prefrontal cortex known for roles in conscious cognition [2,3]. At the cellular level, basic biological feedback loops provided foundational examples of recursive mechanisms. Computationally, Hopfield network-like structures, explicitly self-referential and analogous to Hofstadter's "strange loop," were identified in the attention mechanisms of LLMs, indicating potential attractor-like behaviors and recursive self-reference within these models.

Discussion
This research supports recursive stability as a robust mathematical framework bridging philosophical, neuroscientific, and computational perspectives on qualia. Computational findings suggest LLMs as practical platforms for experimentally exploring self-referential consciousness models. Future research should empirically validate these recursive structures within biological systems and further refine computational implementations to deepen our understanding of consciousness.





Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2024-00339889).

References
[1] Hofstadter, D. R. (2007). I Am a Strange Loop. Basic Books. ISBN: 978-0465030798.
[2] Dehaene, S., Lau, H., & Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486-492.https://doi.org/10.1126/science.aan8871
[3] Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776-798. https://doi.org/10.1016/j.neuron.2020.01.026
Speakers
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P134: Sensory Data Observation is not an Instant Mapping
Monday July 7, 2025 16:20 - 18:20 CEST
P134 Sensory Data Observation is not an Instant Mapping

Chang Sub Kim*

Department of Physics, Chonnam National University, Gwangju 61186, Republic of Korea

*Email: cskim@jnu.ac.kr

Introduction

The brain self-supervises its embodied agent's behavior via perception, learning, and action planning. Researchers have lately adopted computational algorithms such as error backpropagation [1] and graphical models [2] to enhance our understanding of how the brain works. These approaches suit reverse-engineering problems but may not account for real brains. This study aims to provide a biologically plausible theory describing sensory generation, synaptic efficacy, and neural activity as dynamical processes within a physics-grounded framework. We argue that sensory observations are generally continuous in time and must therefore be handled as such, not as the instant mappings prevalent in Kalman filters [3].


Methods
We formulate a neurophysical theory for the brain's working under the free energy principle (FEP), advocating that the brain minimizes informational free energy (IFE) for autopoietic reasons [4]. We derive the Bayesian mechanics (BM) that actuates IFE minimization and numerically show how the BM performs the minimization. To this end, we must determine the likelihood and prior probabilities in the IFE, which are nonequilibrium physical densities in the biological brain. Using stochastic-thermodynamic methods, we specify them as path probabilities and identify variational IFE as a classical action in analytical mechanics [5]. Subsequently, we apply the principle of least action and obtain the brain's neural equations of functional motion.
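A toy contrast between continuous observation and discrete Kalman-style emission (our illustration only, not the paper's Bayesian mechanics): a neural state mu and its momentum p, playing the role of the prediction error, co-evolve in continuous time while the sensory stream arrives. The constants are illustrative.

```python
# Toy state/momentum dynamics tracking a continuously observed stream.
import numpy as np

dt, T, k, gamma = 1e-3, 5.0, 40.0, 5.0
ts = np.arange(0.0, T, dt)
y = np.sin(2 * np.pi * ts) + 0.1 * np.random.randn(ts.size)  # sensory stream

mu, p = 0.0, 0.0
trace = np.empty(ts.size)
for i, yt in enumerate(y):
    eps = yt - mu                       # sensory prediction error
    mu += dt * p                        # state driven by its momentum
    p += dt * (k * eps - gamma * p)     # momentum tracks the error, damped
    trace[i] = mu                       # mu retrodicts the cause continuously
```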

Results
Our resulting BM governs the co-evolution of the neural state and momentum variables; the momentum variable represents prediction error in the predictive coding framework [6]. Figure 1 depicts a sensory stream observed in continuous time, contrasting with discrete Kalman emission. We have numerically explored static and time-dependent sensory inputs for various cognitive operations such as passive perception, active inference, and learning synaptic weights. Our results reveal optimal trajectories, manifesting the brain's minimization of the IFE in neural phase space. In addition, we will present the neural circuitries implied by the BM, reflecting a network of neural nodes in the generic cortical column.

Discussion
We argued that sensory data generation is a dynamical process, which we incorporated into our formulation for IFE minimization. Our minimization procedure does not invoke the gradient descent (GD) methods in conventional neural networks but arises naturally from the Hamilton principle. In contrast to quasistatic GD updating, our approach can handle fast, time-varying sensory inputs and provides continuous trajectories of least action, optimizing noisy neuronal dynamics. Furthermore, our theory resolved the issue of the lack of local error representation by revealing the momentum variable as representing local prediction error; we also uncovered its neural equations of motion.





Figure 1. Schematic of sensory data observation. The sensory stream is generally continuous, as depicted in the blue noise curve; the neural response is drawn as the red trajectory, retrodicting the sensory causes in continuous time. In contrast, the prevailing Bayesian filtering in the literature handles sensory observation as a discrete mapping delineated by vertical dashed arrows.
Acknowledgements
Not applicable.
References
1. https://doi.org/10.1016/j.tics.2018.12.005
2. https://doi.org/10.1016/j.jmp.2021.102632
3. http://dx.doi.org/10.1115/1.3662552
4. https://doi.org/10.1038/nrn2787
5. Landau, L. D., & Lifshitz, E. M. (1976). Mechanics: Course of Theoretical Physics, Volume 1 (3rd ed.). Amsterdam: Elsevier.
6. https://doi.org/10.1038/4580


Speakers
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P135: Neural dynamics of reversal learning in the prefrontal cortex and recurrent neural networks
Monday July 7, 2025 16:20 - 18:20 CEST
P135 Neural dynamics of reversal learning in the prefrontal cortex and recurrent neural networks

Christopher M. Kim*1, Carson C. Chow1, Bruno B. Averbeck2

1Laboratory of Biological Modeling, NIDDK/NIH, Bethesda, MD
2Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD
3Current address: Department of Mathematics, Howard University, Washington, DC

*Email: christopher.kim@howard.edu

Introduction

In a probabilistic reversal learning task, a subject learns from initial trials that one of the two options yields reward with higher probability than the other (for instance, the high-value and the low-value options are rewarded 70% and 30% of the time, respectively). When the reward probabilities of two options are reversed at a random trial, the agent must switch its choice preference to maximize reward. Such reversal learning has been used for assessing one’s ability to adapt in a dynamically changing environment with uncertain rewards [1]. In this task, reward outcomes must be integrated over multiple trials before reversing the preferred choice, as the less favorable option yields rewards stochastically.


Methods
We investigated how cortical neurons represent integration of decision-related evidence across trials in the reversal learning task. Previous works considered attractor dynamics along a line in the state space as a neural mechanism for evidence integration [2]. However, when integrating evidence across trials, the subject must perform task-related behaviors within each trial, which could induce non-stationary neural activity. To understand the neural representation of multi-trial evidence accumulation, we analyzed the activity of neurons in the prefrontal cortex of monkeys and recurrent neural networks trained to perform a reversal learning task.
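A toy illustration of cross-trial integration on a line attractor (ours, not the trained RNN): a scalar state persists between trials and drifts with each reward outcome, with reward probabilities reversing halfway through. Learning rate, bounds, and the fixed-choice policy are our simplifying assumptions.

```python
# Line-attractor sketch: integrate stochastic reward outcomes across trials.
import numpy as np

rng = np.random.default_rng(2)
n_trials, lr = 60, 0.25
p_reward_A = np.where(np.arange(n_trials) < 30, 0.7, 0.3)  # reversal at trial 30

x = 1.0                      # position along the line attractor (belief in A)
xs = np.empty(n_trials)
for t in range(n_trials):
    reward = rng.random() < p_reward_A[t]       # outcome of always choosing A
    x += lr * (1.0 if reward else -1.0)         # integrate the outcome
    x = np.clip(x, -3.0, 3.0)                   # bounded attractor range
    xs[t] = x   # trial-start state: initial condition for within-trial dynamics
```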
Results
We found that, in a neural subspace encoding reversal probability, its activity represented integration of reward outcomes as in a line attractor. The reversal probability activity at the start of a trial was stationary, stable and consistent with the attractor dynamics. However, during the trial, the activity was associated with task-related behavior and became non-stationary, thus deviating from the line attractor. Fitting a predictive model to neural data showed that the stationary state at the trial start served as an initial condition for launching the non-stationary activity. This suggested an extension of the line attractor model with behavior-induced non-stationary dynamics.
Discussion
Our findings show that, when performing a reversal learning task, a cortical circuit represents reversal probability not only in stable stationary states, as in a line attractor model, but also in dynamic neural trajectories that can accommodate the non-stationary task-related behaviors necessary for the task. Such a neural mechanism demonstrates the temporal flexibility of cortical computation and opens opportunities for extending existing neural models of evidence accumulation by augmenting their temporal dynamics.




Acknowledgements
This research was supported by the Intramural Research Program of the National Institutes of Health: the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) and the National Institute of Mental Health (NIMH). This work utilized the computational resources of the NIH HPC Biowulf cluster (https://hpc.nih.gov).
References
[1] Bartolo, R., & Averbeck, B. B. (2020). Prefrontal cortex predicts state switches during reversal learning. Neuron, 106(6), 1044-1054.
[2] Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474), 78-84.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P136: Computational Modeling of Ca2+ Blocker Effect in the Thalamocortical Network of Epilepsy: A Dynamic Causal Modeling Study
Monday July 7, 2025 16:20 - 18:20 CEST
P136 Computational Modeling of Ca2+ Blocker Effect in the Thalamocortical Network of Epilepsy: A Dynamic Causal Modeling Study

Euisun Kim1, Jiyoung Kang2, Jinseok Eo3, Hae-Jeong Park*1,3,4,5

¹Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul, Republic of Korea
²Department of Scientific Computing, Pukyong National University, Busan, Republic of Korea
³Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea
4Department of Cognitive Science, Yonsei University, Seoul, Republic of Korea
5Department of Nuclear Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
Authors 1 and 2 contributed equally.

*Email: parkhj@yonsei.ac.kr
Introduction

Childhood Absence Epilepsy (CAE) is characterized by excessive thalamocortical synchronization, leading to recurrent loss of consciousness [1]. This phenomenon is linked to T-type calcium channel hyperactivity, a key driver of seizure generation [2]. Ethosuximide (ETX), a T-type Ca²⁺ blocker and first-line CAE treatment, is expected to influence both intrinsic neural properties and interregional connectivity, but its mechanism of action on the thalamocortical network hierarchy remains unclear. This study employs Dynamic Causal Modeling (DCM) to analyze ETX-induced network changes from a neuropharmacological perspective [3].


Methods
To examine ETX-induced changes in thalamocortical dynamics, we incorporated voltage-dependent calcium channels into a thalamocortical model (TCM) [4]. The model included six cortical populations (pyramidal, interneuron, and stellate cells) and two thalamic populations (reticular and relay neurons) for a thalamocortical system. Their temporal evolution is governed by coupled differential equations describing membrane potential and conductance changes mediated by AMPA, NMDA, and GABA-A receptors and by T-type calcium channels, the latter capturing ETX effects. Resting-state EEG data were collected before and after ETX administration in CAE patients. Using DCM of longitudinal EEG, we analyzed hierarchical thalamocortical connectivity changes and modeled nonlinear interactions influencing EEG cross-spectral density (CSD) within the Default Mode Network (DMN), including the mPFC, precuneus, and lateral parietal cortices, a network that is often aberrantly deactivated during CAE seizures, potentially due to subcortical inhibition [5].
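As a toy sketch of the channel-level idea (ours; the kinetics and parameters below are illustrative textbook-style T-current forms, not the study's fitted TCM), the ETX effect can be caricatured as a scaling factor on the T-type conductance g_T:

```python
# Toy thalamic relay unit with a T-type Ca2+ current; ETX ~ reduced g_T.
import numpy as np

def simulate(g_T_scale, dt=0.1, n_steps=20000):
    V, h = -70.0, 0.6                    # membrane potential, T-inactivation
    g_T, g_L, E_L, E_Ca = 2.0 * g_T_scale, 0.05, -70.0, 120.0
    trace = np.empty(n_steps)
    for i in range(n_steps):
        m_inf = 1.0 / (1.0 + np.exp(-(V + 57.0) / 6.2))  # activation (fast)
        h_inf = 1.0 / (1.0 + np.exp((V + 81.0) / 4.0))   # inactivation target
        I_T = g_T * m_inf ** 2 * h * (V - E_Ca)
        V += dt * (-g_L * (V - E_L) - I_T + 0.3)         # constant drive
        h += dt * (h_inf - h) / 30.0                     # slow inactivation
        trace[i] = V
    return trace

control = simulate(g_T_scale=1.0)    # baseline low-threshold depolarization
under_etx = simulate(g_T_scale=0.4)  # blunted T-current, as with a Ca2+ blocker
```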
Results
ETX significantly altered both thalamocortical and cortical network dynamics. We observed changes in intrinsic neural properties as well as interregional connectivity when comparing pre- and post-ETX conditions. These findings indicate that ETX modulates local neural excitability and large-scale network interactions, thereby contributing to seizure suppression in CAE.

Discussion
By incorporating voltage-dependent Ca²⁺ channels into a thalamocortical model, this study provides preliminary computational evidence that calcium channel blockers help restore large-scale network stability in CAE. The results underscore the therapeutic mechanism by which these agents modify pathological thalamocortical interactions. Further validation and refinement of the computational model may enhance clinical approaches to treating CAE and related epileptic disorders.





Acknowledgements
This research was supported by the Bio&Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. RS-2024-00401794).
References
1. https://doi.org/10.1016/j.nbd.2023.106094
2. https://doi.org/10.1111/epi.13962
3. https://doi.org/10.1016/j.neuroimage.2023.120161
4. https://doi.org/10.1016/j.neuroimage.2020.117189
5. https://doi.org/10.3233/BEN-2011-0310


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P137: Coordinated Multi-frequency Oscillatory Bursts Enable Time-structured Dynamic Information Transfer
Monday July 7, 2025 16:20 - 18:20 CEST
P137 Coordinated Multi-frequency Oscillatory Bursts Enable Time-structured Dynamic Information Transfer

Jung Young Kim*1, Jee Hyun Choi1, Demian Battaglia*2

1Korea Institute of Science and Technology (KIST), Seoul, South Korea
2Functional System Dynamics / LNCA UMR 7364, University of Strasbourg, France

*Email: jungyoungk51@kist.re.kr; dbattaglia@unistra.fr


Introduction
Slower (e.g., beta) and faster (e.g., gamma) oscillatory bursts have been linked to multiplexed neural communication, respectively relaying top-down expectations and bottom-up prediction errors [1,2]. These signals target distinct cortical layers with different dominant frequencies [3]. However, this theory faces challenges: multiplexed routing might not require distinct frequencies [4], and phasic enhancement from slow oscillations may be too sluggish to modulate faster oscillatory processes. What fundamental functional advantage, then, could multi-frequency oscillatory bursting offer?



Methods
We investigate information transfer between two neural circuits (e.g., different cortical layers or regions) generating sparsely synchronized, transient oscillatory bursts with distinct intrinsic frequencies in spiking neural networks [5]. Through a systematic parameter space exploration, guided by unsupervised classification, we uncover a diverse range of Multi-Frequency Oscillatory Patterns (MFOPs). These include configurations in which the populations emit bursts at their natural frequencies, deviating from them, or even at more than one frequency simultaneously or sequentially. We then use transfer entropy [6] between simulated multi-unit activity and analyses of single unit spike transmission to assess functional interactions.
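For reference, a minimal plug-in estimator of transfer entropy with one-step histories (our sketch; the study's analyses use more elaborate estimators on simulated multi-unit activity):

```python
# Plug-in transfer entropy TE(x -> y) on quantile-binned signals.
import numpy as np

def transfer_entropy(x, y, bins=4):
    qbin = lambda s: np.digitize(s, np.quantile(s, np.linspace(0, 1, bins + 1))[1:-1])
    xb, yb = qbin(x), qbin(y)
    samples = np.stack([yb[1:], yb[:-1], xb[:-1]], axis=1)  # (y_next, y_past, x_past)
    p_xyz, _ = np.histogramdd(samples, bins=(bins, bins, bins))
    p_xyz /= p_xyz.sum()
    p_yy = p_xyz.sum(axis=2, keepdims=True)      # p(y_next, y_past)
    p_yx = p_xyz.sum(axis=0, keepdims=True)      # p(y_past, x_past)
    p_y = p_xyz.sum(axis=(0, 2), keepdims=True)  # p(y_past)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.log2(p_xyz * p_y / (p_yy * p_yx))
    return np.nansum(p_xyz * log_ratio)

# Directionality check on a toy driven pair: TE is larger x -> y than y -> x.
x = np.random.randn(10000)
y = np.roll(x, 1) + 0.5 * np.random.randn(10000)  # y driven by past x
print(transfer_entropy(x, y), transfer_entropy(y, x))
```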

Results
We demonstrate that distinct MFOPs correspond to different Information Routing Patterns (IRPs), dynamically boosting or suppressing transfer in different directions at precise times, thus forming specific temporal graph motifs. Notably, the "slow" population can send information with latencies shorter than a fast oscillation period and can also affect multiple faster cycles within a single slow cycle. Supported by precise analyses of the spiking dynamics of synaptically coupled single neurons, we propose that MFOPs act as complex "attention mechanisms" (in the sense of ANNs), as they provide a controllable way to selectively weight the relevance of different incoming inputs as a function of their latencies relative to currently emitted spikes.

Discussion
Our findings show that the coexistence and coordination of oscillatory bursts at different frequencies enables rich, temporally-structured choreographies of information exchange, moving well beyond simple multiplexing (one direction = one frequency). The presence of multiple frequencies considerably expands the repertoire of possible space-time information transfer patterns, providing a resource that could be harnessed to support distinct functional computations. Notably, multi-frequency oscillatory bursting could provide a self-organized manner to tag spiking activity with sequential context information, reminiscent of attention masks in transformers or other ANNs.




Figure 1. A) Networks of spiking neurons with "hardwired" slow and fast oscillatory frequencies. B) Because of network interactions, these networks develop MFOPs with different frequency properties bypassing frequency hardwiring. We extract these bursting events (C) and show that they systematically correspond to spatiotemporal motifs of information transfer (D), aka Information Routing Patterns (IRPs)
Acknowledgements
STEAM Global (Korea Global Cooperative Convergence Research Program)
References
[1] Bastos, A.M., et al. (2015). Neuron, 85, 390. [2] Bastos, A.M., et al. (2020). Proc Natl Acad Sci, 117, 31459. [3] Mendoza-Halliday, D., et al. (2024). Nature Neurosci, 27, 547. [4] Battaglia, D., et al. (2012). PLoS Comput Biol, 8, e1002438. [5] Wang, X.J., & Buzsáki, G. (1996). J Neurosci, 16, 6402-6413. [6] Palmigiano, A., et al. (2017). Nat Neurosci, 20, 1014.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P138: Quantifying harmony between direct and indirect pathways in a spiking neural network of the basal ganglia; healthy and Parkinsonian states
Monday July 7, 2025 16:20 - 18:20 CEST
P138 Quantifying harmony between direct and indirect pathways in a spiking neural network of the basal ganglia; healthy and Parkinsonian states

Sang-Yoon Kim and Woochang Lim*
Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Korea
*Email: wclim@icn.re.kr

The basal ganglia (BG) show a variety of functions for motor control and cognition. There are two competing pathways in the BG: the direct pathway (DP), which facilitates movement, and the indirect pathway (IP), which suppresses movement. It is well known that the diverse functions of the BG may arise through "balance" between DP and IP. But, to the best of our knowledge, no quantitative analysis of such balance has been done so far. In this paper, for the first time, we introduce the competition degree Cd between DP and IP. Then, by employing Cd, we quantify their competitive harmony (i.e., competition and cooperative interplay), which could improve our understanding of the traditional "balance" clearly and quantitatively. We first consider the case of a normal dopamine (DA) level, phi* = 0.3. In the case of phasic cortical input (10 Hz), a healthy state with Cd* = 2.82 (i.e., DP is 2.82 times stronger than IP) appears. In this case, normal movement occurs via harmony between DP and IP. Next, we consider the case of a decreased DA level, phi = phi* (= 0.3) x_DA (1 > x_DA > 0). With decreasing x_DA from 1, the competition degree Cd between DP and IP decreases monotonically from Cd*, resulting in the appearance of a pathological Parkinsonian state with reduced Cd. In this Parkinsonian state, the strength of IP is much increased relative to the normal healthy state, leading to disharmony between DP and IP. Due to such a break-up of harmony between DP and IP, impaired movement occurs. Finally, we also study treatment of the pathological Parkinsonian state via recovery of harmony between DP and IP.
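In its simplest form, the competition degree is just the ratio of pathway strengths; a minimal sketch (ours, with stand-in values):

```python
# Competition degree Cd = S_DP / S_IP; values are stand-ins, not model outputs.
S_DP, S_IP = 28.2, 10.0          # direct/indirect pathway strengths (arb. units)
Cd = S_DP / S_IP                 # Cd* = 2.82 marks the healthy state in [1]
print(f"competition degree Cd = {Cd:.2f}")
```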



Acknowledgements

References
[1] Kim, S.-Y., & Lim, W. (2024). Quantifying harmony between direct and indirect pathways in the basal ganglia; healthy and Parkinsonian states. Cognitive Neurodynamics, 18, 2809-2829. https://doi.org/10.1007/s11571-024-10119-8
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P139: Break-up and recovery of harmony between direct and indirect pathways in a spiking neural network of the basal ganglia; Huntington's disease and treatment
Monday July 7, 2025 16:20 - 18:20 CEST
P139 Break-up and recovery of harmony between direct and indirect pathways in a spiking neural network of the basal ganglia; Huntington's disease and treatment

Sang-Yoon Kim and Woochang Lim*
Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education, Daegu 42411, Korea
*Email: wclim@icn.re.kr

The basal ganglia (BG) in the brain exhibit diverse functions for motor control, cognition, and emotion. Such BG functions could arise via competitive harmony between the two competing pathways, the direct pathway (DP) (facilitating movement) and the indirect pathway (IP) (suppressing movement). As a result of a break-up of harmony between DP and IP, pathological states appear with movement, cognitive, and psychiatric disorders. In this paper, we are concerned with Huntington's disease (HD), a genetic neurodegenerative disorder causing involuntary movement and severe cognitive and psychiatric symptoms. In HD, the number of D2 SPNs (N_D2) is decreased due to degenerative loss; hence, by decreasing x_D2 (the fraction of N_D2), we investigate the break-up of harmony between DP and IP in terms of their competition degree Cd, given by the ratio of the strength of DP (S_DP) to the strength of IP (S_IP) (i.e., Cd = S_DP / S_IP). In the case of HD, the IP is under-active, in contrast to the case of Parkinson's disease with an over-active IP, which results in an increase in Cd from its normal value. Thus, hyperkinetic dyskinesia such as chorea (involuntary jerky movement) occurs. We also investigate treatment of HD, based on optogenetics and GP ablation, by increasing the strength of IP, resulting in recovery of harmony between DP and IP. Finally, we study the effect of loss of healthy synapses of all the BG cells on HD. Due to the loss of healthy synapses, disharmony between DP and IP increases, worsening the symptoms of HD.



Acknowledgements

References
[1] Kim, S.-Y., & Lim, W. (2024). Break-up and recovery of harmony between direct and indirect pathways in the basal ganglia; Huntington's disease and treatment. Cognitive Neurodynamics, 18, 2909-2924. https://doi.org/10.1007/s11571-024-10125-w
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P140: Single-unit responses to dynamic salient visual stimuli in the human medial temporal lobe
Monday July 7, 2025 16:20 - 18:20 CEST
P140 Single-unit responses to dynamic salient visual stimuli in the human medial temporal lobe

Alina Kiseleva1, Eva van Gelder1, Hennric Jokeit1, Johannes Sarnthein2, Lukas Imbach1, Tena Dubcek1 & Debora Ledergerber*1

1Swiss Epilepsy Clinic, Clinical Neurophysiology, Zürich, Switzerland
2Universitätsspital Zürich, Klinik für Neurochirurgie, Zürich, Switzerland

*Email: Debora.Ledergerber@kliniklengg.ch
Introduction

The medial temporal lobe (MTL) is critical for mnemonic functions, navigation and social cognition. For many of these higher-order cognitive processes, correlates of single-neuron responses have been found in different regions of human MTL. Amygdala neurons respond to emotional stimuli [1], while hippocampus (HC) and entorhinal cortex (EC) neurons encode memory [2] and navigation [3]. Efficient encoding of task covariates depends on neurons with mixed selectivity, found in rodent subiculum and EC [4]. While this coding scheme has been described in human MTL [5], it remains elusive whether it is applied differentially in different contexts.

Methods
We investigated the activity of 500 neurons in human MTL while participants watched a movie with alternating neutral and emotionally charged clips [6]. To model neuronal firing rates, we applied a Generalized Linear Model, using three covariates: trial type (Face/Landscape), size of the dominant object, and its movement across frames. We then implemented a model selection procedure to identify neurons specifically tuned to each covariate.
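A sketch of this modeling step (ours, using statsmodels; the synthetic data and AIC-based subset selection are our assumptions for illustration): a Poisson GLM of spike counts on the three covariates, with selection of the best covariate subset per neuron.

```python
# Poisson GLM of spike counts with covariate-subset model selection (AIC).
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
trial_type = rng.integers(0, 2, n)      # Face vs Landscape trials
size = rng.random(n)                    # dominant object size
motion = rng.random(n)                  # object movement across frames
counts = rng.poisson(np.exp(0.5 + 0.8 * trial_type))  # toy "face" neuron

covars = {"type": trial_type, "size": size, "motion": motion}
best = (np.inf, None)
for k in range(1, 4):
    for subset in itertools.combinations(covars, k):
        X = sm.add_constant(np.column_stack([covars[c] for c in subset]))
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        if fit.aic < best[0]:
            best = (fit.aic, subset)
print("selected covariates:", best[1])
```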
Results
We found the highest number of neurons encoding the difference between trials of landscapes versus emotional faces (14%). A smaller but substantial population of neurons showed specificity for the main object size and degree of movement (5% and 6%). Additionally, 3% of neurons demonstrated mixed selectivity, responding to the combination of at least two visual features.
Despite the amygdala's established role in processing of emotional stimuli, we found only a slightly increased number of neurons specific to emotional trials in the amygdala compared to HC and EC, and the difference in the proportion of emotionally responsive neurons across the MTL was not statistically significant (P > 0.9, χ² test).
Discussion
Overall, this suggests that emotional stimulus processing is distributed across MTL regions and that neurons encoding emotional stimuli may additionally show selectivity for other task features. The presence of mixed selectivity further highlights the integrative role of MTL neurons in processing complex visual and emotional information, potentially supporting flexible cognitive functions.




Acknowledgements
We sincerely appreciate the time and contribution of all patients who participated in this study. We are also grateful to our colleagues and collaborators for their insightful discussions and support. We extend our deep gratitude to the clinical staff for their invaluable assistance in data collection.
References
1. https://doi.org/10.1073/pnas.1323342111
2. https://doi.org/10.1523/jneurosci.1648-20.2020
3. https://doi.org/10.1038/nn.3466
4. https://doi.org/10.1016/j.celrep.2021.109175
5. https://doi.org/10.1016/j.celrep.2024.114071
6. https://doi.org/10.1038/s41597-020-00790-x
Speakers
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P142: Finite-sampling bias correction for discrete Partial Information Decomposition
Monday July 7, 2025 16:20 - 18:20 CEST
P142 Finite-sampling bias correction for discrete Partial Information Decomposition

Loren Koçillari*1,2, Gabriel M. Lorenz1,4, Nicola M. Engel1, Marco Celotto1,5, Sebastiano Curreli3, Simone B. Malerba1, Andreas K. Engel2, Tommaso Fellin3, and Stefano Panzeri1
1Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
2Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
3Istituto Italiano di Tecnologia, Genova, Italy
4Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
5Department of Brain and Cognitive Sciences, Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
*Email: l.kocillari@uke.de
Introduction

A major question in neuroscience is how groups of neurons interact to generate behavior. Shannon Information Theory has been widely used to quantify dependencies among neural units and cognitive variables [1]. Partial Information Decomposition (PID) [2,3] has extended Shannon theory to decompose neural information into synergy, redundancy, and unique information. Discrete versions of PID are suitable for spike train analysis. However, estimating information measures from real data is subject to a systematic upward bias due to limited sampling [4], an issue that has been largely overlooked in PID analyses of neural data.

Methods
Here, we first studied the bias of discrete PID through simulations of neuron pairs with varying degrees of synergy and redundancy, using sums of Poisson processes with individual and shared terms modulated by stimuli. We assumed that the bias of union information (the sum of unique information and redundancy) equals that of the information obtained from stimulus-uncorrelated neurons. We found that this assumption accurately matched simulated data, allowing us to derive analytical approximations of PID biases in large sample sizes. We used this knowledge to develop efficient bias-correction methods, validating them on empirical recordings from 53,113 neuron pairs in the auditory cortex, posterior parietal cortex, and hippocampus of mice.
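The key assumption can be made concrete with a generic shuffle-based recipe (our sketch; the study derives analytical corrections, but the underlying idea is the same): information computed after destroying stimulus correlations estimates the limited-sampling bias, which is then subtracted.

```python
# Shuffle-based bias estimate for a plug-in information measure.
import numpy as np

def plugin_mi(resp, stim):
    # Plug-in MI for discrete data (upwardly biased for small samples).
    pxy = np.zeros((resp.max() + 1, stim.max() + 1))
    np.add.at(pxy, (resp, stim), 1)
    pxy /= pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()

def shuffle_bias(info_fn, responses, stimuli, n_shuffles=200, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([info_fn(responses, rng.permutation(stimuli))
                    for _ in range(n_shuffles)])

# Independent responses/stimuli: the raw MI is positive purely through bias.
resp = np.random.randint(0, 8, 100)
stim = np.random.randint(0, 2, 100)
corrected = plugin_mi(resp, stim) - shuffle_bias(plugin_mi, resp, stim)
```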
Results
Our results show that limited sampling bias affects all terms in discrete PIDs, with synergy exhibiting the largest upward bias. The bias of synergy grows quadratically with the number of possible discrete responses of individual neurons, whereas the bias of unique information scales linearly and has intermediate values, while redundancy remains almost unbiased. Thus, neglecting or failing to correct for this bias leads to substantially inflated synergy estimates. Simulations and real data analyses showed that our bias-correction procedures can mitigate this problem, leading to much more precise estimates of all PID components.

Discussion
Our study highlights the systematic overestimation of synergy in both simulated and empirical datasets, underscoring the need for bias-correction methods, and offers empirically validated ways to correct for this problem. These findings provide a computational and theoretical basis for enhancing the reliability of PID analyses in neuroscience and related fields. Our work informs experimental design by providing guidelines on the sample sizes required for unbiased PID estimates and supports computational neuroscientists in selecting efficient PID bias-correction methods.





Acknowledgements
This work was supported by the NIH Brain Initiative grant U19 NS107464 (to SP and TF), the NIH Brain Initiative grant R01 NS109961 and R01 NS108410, the Simons Foundation for Autism Research Initiative (SFARI) grant 982347 (to SP), the European Union’s European Research Council grants NEUROPATTERNS 647725 (to TF) and cICMs ERC-2022-AdG-101097402 (to AKE).
References
1. Quian Quiroga, R., & Panzeri, S. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nature Reviews Neuroscience, 10, 173-185.
2. Williams, P. L., & Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
3. Bertschinger, N., Rauh, J., Olbrich, E., Jost, J., & Ay, N. (2014). Quantifying unique information. Entropy, 16, 2161-2183.
4. Panzeri, S., & Treves, A. (1996). Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7, 87-107.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P143: Redundant stimulus encoding in ferret cortex during a lateralized detection task
Monday July 7, 2025 16:20 - 18:20 CEST
P143 Redundant stimulus encoding in ferret cortex during a lateralized detection task

Loren Koçillari*1,2, Edgar Galindo-Leon2, Florian Pieper2, Stefano Panzeri1, Andreas K. Engel2
1Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany

2Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), 20246 Hamburg, Germany

*Email: l.kocillari@uke.de
Introduction

The brain’s ability to integrate diverse sources of information is crucial for perception and decision-making. It can combine inputs synergistically to increase information capacity or redundantly to enhance signal reliability and robustness. Previous research has shown that redundant information between mouse auditory neurons increases during correct compared to incorrect trials in a tone discrimination task [1]. However, it remains unclear how redundancy’s behavioral role generalizes at larger scales, across frequency bands, and between unimodal and multimodal sensory stimuli. Using Partial Information Decomposition (PID) [2], we analyze redundant and synergistic information in ferret cortical activity during an audiovisual task.

Methods
We studied information processing in behaving ferrets during a visual or audiovisual stimulus detection task [3]. Brain activity from auditory, visual, and parietal areas of the left hemisphere was recorded using a 64-channel ECoG array [3]. We quantified task-related changes in single-channel local field potential (LFP) power and phase across time and frequency bands. We assessed stimulus encoding in individual channels by computing time-resolved Shannon mutual information between stimulus location and LFP power or phase. Finally, using PID, we quantified behaviorally relevant synergistic and redundant stimulus-related information conveyed by channel pairs at information peaks, in relation to correct choices and faster reaction times.
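A sketch of the time-resolved information step (ours; binning, bin count, and data shapes are our assumptions): plug-in mutual information between the binary stimulus location and the binned LFP power, computed per time bin across trials.

```python
# Time-resolved plug-in MI between stimulus location and LFP power.
import numpy as np

def plugin_mi(stim, resp, bins=4):
    rb = np.digitize(resp, np.quantile(resp, np.linspace(0, 1, bins + 1))[1:-1])
    pxy, _, _ = np.histogram2d(stim, rb, bins=(2, bins))
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

# stim_loc: (n_trials,) in {0, 1}; power: (n_trials, n_times) LFP power.
stim_loc = np.random.randint(0, 2, 300)
power = np.random.randn(300, 50) + 0.5 * stim_loc[:, None]   # toy data
mi_over_time = [plugin_mi(stim_loc, power[:, t]) for t in range(power.shape[1])]
```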

Results
We found that stimulus information, for both LFP power and phase, was primarily present in the peri-stimulus interval at lower frequency bands (theta and alpha), while beta and gamma bands contained less information. Stimulus information in the theta band was greater in hit trials than in miss trials and in fast-hit trials than in slow-hit trials, suggesting that the information content of theta activity is behaviorally relevant. Redundancy across channel pairs in the theta-band was higher in hit than in miss trials and in fast-hit trials than in slow-hit trials, whereas synergy was greater in miss and slow-hit trials.

Discussion
Our results suggest that the amount of information encoded in the theta band is behaviorally relevant for perceptual discrimination. They also indicate that redundancy is more beneficial than synergy for correct or rapid perceptual judgements during both visual and audiovisual stimulus detection. This supports the notion that the advantages of redundancy for downstream signal propagation and robustness outweigh the limit it places on the total information that can be encoded across areas.





Acknowledgements
This work was supported by the cICMs ERC-2022-AdG-101097402 (to AKE).
References
1. Koçillari, L., et al. (2023). Behavioural relevance of redundant and synergistic stimulus information between functionally connected neurons in mouse auditory cortex. Brain Informatics, 10(1), 34.
2. Williams, P. L., & Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
3. Galindo-Leon, E. E., et al. (2025). Dynamic changes in large-scale functional connectivity prior to stimulation determine performance in a multisensory task. Frontiers in Systems Neuroscience, 19, 1524547.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P144: Event-driven eligibility propagation: combining efficiency with biological realism
Monday July 7, 2025 16:20 - 18:20 CEST
P144 Event-driven eligibility propagation: combining efficiency with biological realism

Agnes Korcsak-Gorzo*1,2, Jesús A. Espinoza Valverde3, Jonas Stapmanns4, Hans Ekkehard Plesser5,1,6, David Dahmen1, Matthias Bolten3, Sacha J. van Albada1,7, Markus Diesmann1,2,8,9
1Institute for Advanced Simulation (IAS-6), Computational and Systems Neuroscience, Forschungszentrum Jülich, Jülich, Germany
2Fakultät 1, RWTH Aachen University, Aachen, Germany
3Department of Mathematics and Science, University of Wuppertal, Wuppertal, Germany
4Department of Physiology, University of Bern, Bern, Switzerland
5Department of Data Science, Faculty of Science and Technology, Norwegian University of Life Sciences, Aas, Norway
6Käte Hamburger Kolleg, RWTH Aachen University, Aachen, Germany
7Institute of Zoology, University of Cologne, Cologne, Germany
8JARA-Institute Brain Structure-Function Relationships (INM-10), Forschungszentrum Jülich, Jülich, Germany
9Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany


*Email: a.korcsak-gorzo@fz-juelich.de
Introduction

Understanding the neurobiological computations underlying learning is enhanced by simulations, which serve as a critical bridge between experimental findings and theoretical models. Recently, several biologically plausible learning algorithms have been proposed for simulating spiking recurrent neural networks, achieving performance comparable to backpropagation through time (BPTT) [1]. In this work, we adapt one such learning rule, eligibility propagation (e-prop) [2], to NEST, a spiking neural network simulator optimized for large-scale simulations.

Methods
To improve computational efficiency and enable large-scale simulations, we replace the original time-driven synaptic updates - executed at every time step - with an event-driven approach, where synapses are updated only when activated by a spike. This requires storing the e-prop history between weight updates, and with optimized history management, we significantly reduce computational overhead. Additionally, we replace components inspired by machine learning with biologically plausible mechanisms and extend the model with features such as continuous dynamics, strict locality, sparse connectivity, and approximations that eliminate vanishing terms, further enhancing computational efficiency.
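The core trick can be sketched as follows (our illustration, not the NEST implementation): an exponentially decaying eligibility trace need not be updated every time step; it can be brought up to date analytically whenever a presynaptic spike arrives, so sparse activity means few updates.

```python
# Event-driven update of an exponentially decaying eligibility trace.
import math

class EventDrivenTrace:
    def __init__(self, tau=20.0):          # trace time constant (ms)
        self.tau = tau
        self.value = 0.0
        self.t_last = 0.0

    def on_spike(self, t, increment):
        # Apply all decay accumulated since the last event in one closed-form step.
        self.value *= math.exp(-(t - self.t_last) / self.tau)
        self.value += increment
        self.t_last = t
        return self.value

trace = EventDrivenTrace()
for spike_time in (5.0, 12.0, 60.0):       # sparse spikes: only three updates
    trace.on_spike(spike_time, 1.0)
```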
Results
We demonstrate that our event-driven weight update scheme accurately reproduces the behavior of the original time-driven e-prop model (see Fig. 1) while significantly reducing computational costs, particularly in biologically realistic settings with sparse activity. We validate this approach on various biologically motivated regression and classification tasks, including neuromorphic MNIST [3]. Furthermore, we show that learning performance and computational efficiency remain comparable to those of the original model, despite the incorporation of biologically inspired features. Strong and weak scaling experiments confirm the robust scalability of our implementation, supporting networks with up to millions of neurons.
Discussion
By integrating biologically enhanced e-prop plasticity into an established open-source spiking neural network simulator with a broad and active user base, we aim to facilitate large-scale learning experiments. Additionally, this work provides a foundation for implementing other three-factor learning rules from the extensive literature in an event-driven manner. By bridging AI and computational neuroscience, our approach has the potential to enable large-scale AI networks to leverage energy-efficient biological mechanisms.




Figure 1. Implementation of event-driven e-prop demonstrated on a temporal pattern generation task. Learning occurs through updates to input, recurrent, and output synapses. The upper middle plot illustrates the correspondence between the event-driven and time-driven e-prop models.
Acknowledgements
This work was supported by Joint Lab SMBH; HiRSE_PS; NeuroSys (Clusters4Future, BMBF, 03ZU1106CB); EU Horizon 2020 Framework Programme for Research and Innovation (945539, Human Brain Project SGA3) and Europe Programme (101147319, EBRAINS 2.0); computing time on JURECA (JINB33) via JARA Vergabegremium at FZJ; and Käte Hamburger Kolleg: Cultures of Research (c:o/re), RWTH Aachen (BMBF, 01UK2104).
References
1. Werbos, P. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10), 1550–1560.
2. Bellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., & Maass, W. (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1), 3625.
3. Orchard, G., Jayawant, A., Cohen, G., & Thakor, N. (2015). Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades. Frontiers in Neuroscience, 9, 437.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P145: Biophysical thalamic neuron models to probe the impact of ultrasound induced heating in the brain
Monday July 7, 2025 16:20 - 18:20 CEST
P145 Biophysical thalamic neuron models to probe the impact of ultrasound induced heating in the brain

Rikinder Kour1, Ayesha Jameel2,3, Joely Smith3,4, Peter Bain5,6, Dipankar Nandi5,6, Brynmor Jones3, Rebecca Quest3,4, Wladyslaw Gedroyc2,3, Roman Borisyuk7,Nada Yousif1*

1School of Physics Engineering and Computer Science, University of Hertfordshire, UK
2Department of Surgery and Cancer, Imperial College London, UK
3Department of Imaging, Imperial College Healthcare NHS Trust, London, UK
4Department of Bioengineering, Imperial College London, UK
5Division of Brain Sciences, Imperial College London, UK
6Department of Neurosciences, Imperial College Healthcare NHS Trust, London, UK
7Department of Mathematics and Statistics, University of Exeter, Exeter, UK


* Email: n.yousif@herts.ac.uk
Introduction

High intensity focussed ultrasound (HIFU) is used for ablating thalamic neurons to treat tremor [1]. Low intensity focussed ultrasound (LIFU) can be used for neuromodulation [2], and previous modelling suggests that LIFU induces neuronal excitation via mechanical modulation of the cell membrane [3,4]. Although modelling of the neural effects of HIFU is limited, understanding the effects of heating during HIFU at sub-ablative temperatures is important, as this regime is used for monitoring side effects and clinical improvement during tremor treatment [5]. Here we modified biophysical thalamocortical neuron models [6,7] to examine the change in firing patterns as HIFU-induced heating approaches ablative temperatures.


Methods
First, we used data from magnetic resonance thermography performed during a HIFU treatment to select the temperature value for the ‘celsius’ parameter in NEURON [8]. We then examined the effect of temperature on the neuronal firing, as mediated by the parameters of gating equations [9]. Next, we added temperature dependence for the membrane capacitance, as shown experimentally [10] and in a previous modelling study [11]. We compared the effect of temperature in single neurons with one, three and 200 compartments under current clamp conditions with different input current levels [6]. Finally, we considered the impact of increasing temperature on a small network of two excitatory thalamic neurons [7] and two inhibitory reticular neurons.
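A minimal sketch of this manipulation in NEURON's Python interface (our illustration with the built-in hh mechanism as a stand-in for the published TC-cell models; stimulus parameters are placeholders): h.celsius scales the gating rate constants via their Q10 factors.

```python
# Probe firing at increasing temperatures in NEURON (toy single compartment).
from neuron import h
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.insert("hh")                          # stand-in for thalamocortical channels
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 100, 500, 0.1

v = h.Vector().record(soma(0.5)._ref_v)
t = h.Vector().record(h._ref_t)
for temp in (37, 40, 45, 62):              # sub-ablative to ablative range
    h.celsius = temp                       # temperature from MR thermography
    h.finitialize(-65)
    h.continuerun(700)
    # ...count spikes in v here to track firing rate vs. temperature...
```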
Results
The thermography data (Fig. 1A) shows that at the HIFU target site, the temperature increased up to 62°C for a treatment sonication. With temperature dependent parameters of the gating equations, increasing temperatures lead to inhibition of the neuron (Fig. 1B). Interestingly, when including a temperature dependent membrane capacitance, we observed a similar pattern of results. Furthermore, we also saw the same effect of temperature on firing rate regardless of the number of compartments modelled. Finally, the network model showed that although with changing temperature the firing of the individual neurons both increased and decreased, we still observe an overall termination of firing in all neurons as the temperature exceeds 40°C.

Discussion
HIFU is commonly used to thermally ablate the thalamus and suppress tremor, via application of ultrasound energy called sonications. Test sonications are used to heat the tissue to sub-ablative temperatures to confirm targeting and test for adverse effects. This study looked at the impact of such sub-ablative heating on single neuron models and a small network representative of the target region. Our results indicate that once temperatures exceed 40°C neuronal firing is completely inhibited. Future work will extend the network model to look at downstream effects of heating. Such work will allow us to better understand the link between subablative temperature increases, suppression of tremor and adverse effects for optimising treatment.



Figure 1. (A) The heating induced by a HIFU treatment sonication. The target is at the centre of the image, and the temperature reaches 62°C. (B) The results from simulating a single-compartment thalamocortical neuron at different temperatures, when the neuron has only temperature-dependent gating equations (black) and when the membrane capacitance also has temperature dependence (red).
Acknowledgements
NY is funded by the Royal Academy of Engineering and the Leverhulme Trust and AJ is partially funded by Funding Neuro.
References
[1] 10.3389/fneur.2021.654711 [2] 10.1016/j.cub.2013.10.029 [3] 10.1523/ENEURO.0136-15.2016 [4] 10.1088/1741-2552/ab1685 [5] https://doi.org/10.1002/ana.26945 [6] 10.1523/JNEUROSCI.18-10-03574.1998 [7] 10.1152/jn.1996.76.3.2049 [8] 10.1007/978-1-4614-7320-6_795-2 [9] 10.1007/978-1-4614-7320-6_236-1 [10] 10.1016/0301-4622(94)00103-Q [11] 10.3389/fncom.2022.933818
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P146: Fast Visual Reorientation in Postsubicular Head-Direction Cells Conditional on Cue Visibility
Monday July 7, 2025 16:20 - 18:20 CEST
P146 Fast Visual Reorientation in Postsubicular Head-Direction Cells Conditional on Cue Visibility

Sven Krausse1,2, Emre Neftci1,2,Alpha Renner*1
1Forschungszentrum Jülich, Aachen, Germany
2RWTH Aachen, Aachen, Germany

*Email: a.renner@fz-juelich.de
Introduction

Accurate spatial navigation relies on head-direction (HD) cells, which encode orientation in allocentric coordinates, like a neural compass [1,2]. Found, e.g., in the postsubiculum (PoSub) and thalamus, HD cells integrate angular velocity signals from vestibular, proprioceptive, and optic flow inputs, recalibrating via visual cues [2] to avoid drift. Reorientation speed after cue absence is key to understanding the HD system's dynamics and to bio-inspired models. [3] reported rapid reorientation, while [4] suggested that an internal gain factor modulates it, though its mechanism remains unclear. Using a new dataset [5], we examine reorientation dynamics, finding that reorientation is fast but contingent on cue visibility.
Methods
We analyzed a dataset [5] containing head tracking and PoSub spike trains from six mice. Internal HD was decoded from spikes using a Bayesian approach [5]. Mice navigated a circular platform with dim LED cues (Fig. 1a) alternating between adjacent walls in 16 trials. Trials were excluded if >20% of the first minute after a cue switch had unreliable tracking, if movement ceased for >5 s, or if HD failed to reorient. Using head tracking data, we reconstructed each mouse's visual field (FOV = 180°) to estimate cue visibility. Reorientation speed was quantified via exponential fits (scipy.optimize.curve_fit). Time constants (τ) were constrained to 0.1-3 s, with magnitude limits of 0-90°. Fits to the aligned mean error (Figs. 1b,c) used unconstrained τ and magnitude, with no delay term.
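A sketch of this fitting step (ours, on synthetic data; the delay bound and noise level are our assumptions): an exponential decay of the HD decoding error with a delay term, fitted with the bounds described above.

```python
# Exponential reorientation fit with scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def reorient(t, tau, mag, delay):
    # Error stays at `mag` until `delay`, then decays with time constant `tau`.
    return mag * np.exp(-np.clip(t - delay, 0.0, None) / tau)

t = np.linspace(0, 10, 200)                            # time since cue switch (s)
err = reorient(t, 0.8, 80.0, 1.2) + 5.0 * np.random.randn(t.size)  # synthetic (deg)

popt, _ = curve_fit(reorient, t, err, p0=(1.0, 60.0, 0.5),
                    bounds=([0.1, 0.0, 0.0], [3.0, 90.0, 10.0]))
tau_fit, mag_fit, delay_fit = popt
```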
Results
In Fig. 1d, after a cue switch, decoding error decreases from 90° as HD reorients. Reorientation does not always occur immediately but around when the cue becomes visible. Comparing error aligned in time by cue switch (Fig. 1b) vs. fitted delay (1c), the latter improves alignment and yields faster τ. Fig. 1e suggests that fitted switching times can be predicted from the mouse’s FOV, but only for “reorientation” trials (blue) where the cue appeared outside the FOV. Cues appearing within the FOV may cause a conflict between reanchoring and reorientation due to the lack of a dark phase between trials. Prediction cannot be perfect as pupil orientation and blinking are unknown. Based on these preliminary results we develop a model of reorientation dynamics to capture additional effects.
Discussion
Consistent with [3], we confirm that reorientation occurs in abrupt jumps, but alignment must take the visual FOV into account rather than assuming omnidirectional vision. While in [3] mice were trained to fixate cues, the FOV's role may seem trivial but is often ignored. Our findings offer a better mechanistic understanding of the gain factor mediating reorientation speed found by [4] in the thalamus, which is not yet mechanistically explained. More broadly, our results contribute to an integrative model of HD reorientation and reanchoring, advancing both neuroscientific understanding and bio-inspired navigation systems (which we plan to build in the future [6]).



Figure 1. Fig. 1 a. Arena, platform, cues and FOV b. Decoding error aligned by cue switch c. Error aligned by fitted internal HD switch d. Single trial where cue switch occurs roughly as the cue enters FOV. Difference between red and black curves is decoding error (blue). e. Estimated time until cue becomes visible vs. fitted delay. Diagonal in black, points where cue appears within FOV in grey.
Acknowledgements
This research was funded by VolkswagenStiftung [CLAM 9C854]. For this work, the data from Duszkiewicz et al. (2024) [5] was used, and we thank the authors for making this data available. We especially thank Adrian Duszkiewicz for answering our questions and providing additional advice on the data. We thank Johannes Leugering, Friedrich Sommer and Paxon Frady for their feedback.
References
[1] Ranck, J. B. (1984). Head-direction cells in the deep layers of dorsal presubiculum of freely moving rats. In Soc. Neuroscience Abstr. (Vol. 10, p. 599).
[2] Taube et al. (1990). https://doi.org/10.1523/JNEUROSCI.10-02-00420.1990
[3] Zugaro et al. (2003). https://doi.org/10.1523/JNEUROSCI.23-08-03478.2003
[4] Ajabi et al. (2023). https://doi.org/10.1038/s41586-023-05813-2
[5] Duszkiewicz et al. (2024). https://doi.org/10.1038/s41593-024-01588-5
[6] Krausse et al. (2025). https://doi.org/10.48550/arXiv.2503.08608
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P147: Latency correction in sparse neuronal spike trains with overlapping global events
Monday July 7, 2025 16:20 - 18:20 CEST
P147 Latency correction in sparse neuronal spike trains with overlapping global events

Arturo Mariani1, Federico Senocrate1, Jason Mikiel-Hunter2, David McAlpine2, Barbara Beiderbeck3, Michael Pecka4, Kevin Lin5, Thomas Kreuz6,7*

1Department of Physics and Astronomy, University of Florence, Sesto Fiorentino, Italy
2Department of Linguistics, Macquarie University, Sydney, Australia
3Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität, Munich, Germany
4Division of Neurobiology, Faculty of Biology, Ludwig-Maximilians-Universität, Munich, Germany
5École Nationale Supérieure de l'Électronique et de ses Applications, Cergy, France
6Institute for Complex Systems (ISC), National Research Council (CNR), Sesto Fiorentino, Italy
7National Institute of Nuclear Physics (INFN), Florence Section, Sesto Fiorentino, Italy

*Email: thomas.kreuz@cnr.it


Introduction
In Kreuz et al., J Neurosci Methods 381, 109703 (2022) [1], two methods were proposed that perform latency correction, i.e., optimise the spike time alignment of sparse neuronal spike trains with well-defined global spiking events. The first, based on direct shifts, is fast but uses only partial latency information, while the other makes use of the full information but relies on computationally costly simulated annealing. Both methods reach their limits and can become unreliable when successive global events are not sufficiently separated or even overlap.





Methods
Here [2] we propose an iterative scheme that combines the advantages of the two original methods by using, in each step, as much of the latency information as possible and by employing a very fast extrapolation direct-shift method instead of the much slower simulated annealing.
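For illustration only, here is a minimal sketch of a single direct-shift alignment step in the spirit of the approach, not the published algorithm [1,2]: each train's latency is estimated from its spikes' offsets to the nearest global event and then removed. The function name and outlier window are our assumptions.

```python
import numpy as np

def direct_shift_step(spike_trains, event_times, window=0.05):
    """One simplified direct-shift iteration: estimate each train's latency
    as the mean offset of its spikes from the nearest global event, then
    shift the whole train to cancel that latency."""
    shifted = []
    for train in spike_trains:
        # offset of each spike from its closest global event
        offsets = [s - event_times[np.argmin(np.abs(event_times - s))]
                   for s in train]
        offsets = [o for o in offsets if abs(o) <= window]  # ignore outliers
        latency = np.mean(offsets) if offsets else 0.0
        shifted.append(np.asarray(train) - latency)
    return shifted

events = np.array([0.1, 0.3, 0.5])
trains = [events + 0.012, events - 0.008, events + 0.003]  # latencies to remove
aligned = direct_shift_step(trains, events)
```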




Results
We illustrate the effectiveness and the improved performance, measured in terms of the relative shift error, of the new iterative scheme not only on simulated data with known ground truths but also on single-unit recordings from two medial superior olive neurons of a gerbil. The iterative scheme outperforms the existing approaches on both the simulated and the experimental data. Due to its low computational demands, and in contrast to simulated annealing, it can also be applied to very large datasets.

Discussion
The new method generalises and improves on the original methods in terms of both accuracy and speed. Importantly, it is the only method that allows disentangling overlapping global events.





Acknowledgements
J.M.H. and B.B. were supported in this study by an Australian Research Council Laureate Fellowship (FL 160100108) awarded to D.M.
References
[1] Kreuz, T., Senocrate, F., Cecchini, G., Checcucci, C., Mascaro, A.L.A., Conti, E., Scaglione, A., & Pavone, F.S. (2022). Latency correction in sparse neuronal spike trains. J. Neurosci. Methods, 381, 109703. http://dx.doi.org/10.1016/j.jneumeth.2022.109703
[2] Mariani, A., Senocrate, F., Mikiel-Hunter, J., McAlpine, D., Beiderbeck, B., Pecka, M., Lin, K., & Kreuz, T. (2025). Latency correction in sparse neuronal spike trains with overlapping global events. J. Neurosci. Methods, 110378. https://doi.org/10.1016/j.jneumeth.2025.110378
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P148: ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks
Monday July 7, 2025 16:20 - 18:20 CEST
P148 ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks

Laura Kriener1,2, Ben von Hünerbein*1, Kristin Völk3, Timo Gierlich1, Federico Benitez1, Walter Senn1, Mihai A. Petrovici1

1Department of Physiology, University of Bern, 3012 Bern, Switzerland
2Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
3Catlab Engineering GmbH, Grafrath, Germany

*Email: ben.vonhuenerbein@unibe.ch



Introduction
To learn complex action sequences, neural networks must maintain memories of past states. Typically, the required transients are produced by strong network recurrence. The biological plausibility of existing solutions for recurrent weight learning suffers from issues with locality (BPTT [1]), resource scaling (RTRL [2]), or parameter scales (FORCE [3]). To alleviate these, we introduce dendritic computation and a static structural scaffold to our recurrent networks. Leveraging this, our always-on local plasticity rule carves out strong attractors which generate the target activation sequences. We show that with few neurons, our model learns to reproduce complex non-Markovian sequences robustly despite external disturbances.
Methods
Our network contains two populations of structured neurons with somatic and dendritic compartments and leaky-integrator dynamics that integrate presynaptic inputs (Fig. 1a1). Output rates are computed as non-linear functions of the voltage. During development, a sparse scaffold of static somato-somatic connections with random delays is formed (Fig. 1a2,3). A teacher nudges output neurons towards a target pattern, and the somato-somatic scaffold transports this signal throughout the network. The dense, plastic, and randomly delayed somato-dendritic weights (Fig. 1a4) use these signals to adapt based on a local error-correcting learning rule [4]. This gives rise to a robust dynamical attractor which generates the correct output pattern in the absence of a teacher.
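A minimal sketch of the two-compartment leaky-integrator dynamics described above, assuming simple first-order voltage equations and a sigmoidal rate function; parameter values, input statistics, and all names are illustrative assumptions, not the model's actual equations.

```python
import numpy as np

def simulate(T=200.0, dt=0.1, g_l=0.1, g_d=0.3):
    """Minimal two-compartment leaky integrator: the dendritic voltage
    integrates (plastic) synaptic input and couples into the soma, whose
    output rate is a sigmoid of the somatic voltage."""
    n = int(T / dt)
    v_som = np.zeros(n)
    v_den = np.zeros(n)
    rng = np.random.default_rng(1)
    I_syn = rng.normal(0.5, 0.2, n)        # stand-in for somato-dendritic input
    for t in range(1, n):
        dv_den = -g_l * v_den[t-1] + I_syn[t-1]
        dv_som = -g_l * v_som[t-1] + g_d * (v_den[t-1] - v_som[t-1])
        v_den[t] = v_den[t-1] + dt * dv_den
        v_som[t] = v_som[t-1] + dt * dv_som
    rate = 1.0 / (1.0 + np.exp(-v_som))    # non-linear function of the voltage
    return v_som, v_den, rate

v_som, v_den, rate = simulate()
```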
Results
We demonstrate our model's ability to learn complex, non-Markovian sequences by exposing it repeatedly to a sample of Beethoven's "Für Elise" (Fig. 1b). We find that learning the recurrent weights is critical: our model outperforms a same-size reservoir, both in its ability to learn a pattern and to sustain it during replay (Fig. 1c). Next, we demonstrate robust learning across large ranges of the network parameter space. Further, despite severe temporary disruptions of the output population activity during pattern replay, the network is able to recover a correct replay of the learned pattern. Finally, we show that our network is able to extract the denoised signal from noisy target activities.
Discussion
Compared to other models of sequence learning in cortex, we suggest that ours is more resource-efficient, more biologically plausible, and, in general, more robust. It starts with only a sparse, random connection scaffold generating weak and unstructured activity. We show that this is enough for local plasticity to extract useful information in order to imprint strong attractor dynamics, in a manner that is robust to parameter variability and external disturbance. Unlike other approaches, learning in our networks is phaseless and is not switched off during validation and replay.




Figure 1. (a) Development and learning in ELiSe. (a1) Sparse somato-somatic scaffold based on p and q (a2) with interneuron driven inhibition (a3). Dense somato-dendritic synapses (green) adapted during learning (a4). (b) Learning in early, intermediate and final stages (teacher removal at red line). (c) Learning accuracy and stability during learning and replay compared to an equivalent reservoir.
Acknowledgements
We thank Richard Hahnloser and his lab for valuable feedback on learning in songbirds. We gratefully acknowledge funding from the European Union for the Human Brain Project (grant #945539) and Fenix Infrastructure resources (grant #800858), the Swiss National Science Foundation (grants #310030L_156863 and #CRSII5_180316) and the Manfred Stärk Foundation.


References
[1] Werbos, Paul J. "Backpropagation through time: what it does and how to do it." Proceedings of the IEEE 78.10 (1990): 1550-1560.
[2] Marschall, Owen, Kyunghyun Cho, and Cristina Savin. "A unified framework of online learning algorithms for training recurrent neural networks." Journal of Machine Learning Research 21.135 (2020): 1-34.
[3] Sussillo, David, and Larry F. Abbott. "Generating coherent patterns of activity from chaotic neural networks." Neuron 63.4 (2009): 544-557.
[4] Urbanczik, Robert, and Walter Senn. "Learning by the dendritic prediction of somatic spiking." Neuron 81.3 (2014): 521-528.


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P149: Ion Channel Contributions to Spike Timing Precision in Computational Models of CA1 Pyramidal Neurons: Implications for Channelopathies
Monday July 7, 2025 16:20 - 18:20 CEST
P149 Ion Channel Contributions to Spike Timing Precision in Computational Models of CA1 Pyramidal Neurons: Implications for Channelopathies


Anal Kumar*1, Upinder S. Bhalla1

1National Centre for Biological Science, Tata Institute of Fundamental Research, Bangalore, India

*Email: analkumar@ncbs.res.in
Introduction

Precise neuronal spike timing is essential for encoding [1,2], phase coding [3,4], and spike-timing-dependent plasticity (STDP) [5,6]. Disruptions in spike timing precision (SpTP) are linked to disorders such as auditory processing disorder [7] and autism spectrum disorder (ASD) [8,9]. These conditions are also associated with channelopathies [8], yet the specific contributions of different ion channels to SpTP remain unclear. In this study, we use computational models of CA1 pyramidal neurons to systematically examine how ion channel overexpression and underexpression affect SpTP, providing insights into disease mechanisms and potential therapeutic targets.


Methods
We constructed data-driven, conductance-based models of CA1 pyramidal neurons, incorporating realistic electrotonic, passive, and active features based on experimental recordings. Twelve ion channel subtypes were included, with kinetics derived from prior studies. To evaluate SpTP, we analyzed the coefficient of variation of inter-spike intervals and the jitter slope across multiple trials of tonic 150 pA current injections. Gaussian noise was added to these current injections to simulate physiological noise. To determine the impact of early- vs. late-activating ion channels on SpTP, we assessed SpTP separately for initial and later spikes in the spike train.
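A small sketch of the two precision metrics named above, under our own reading of "jitter slope" (the across-trial SD of the k-th spike time regressed on spike index); this is not the authors' analysis code.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals."""
    isis = np.diff(np.sort(spike_times))
    return isis.std() / isis.mean()

def jitter_slope(trials):
    """SD of the k-th spike time across trials, regressed against spike
    index; the slope tracks how timing precision degrades along the train."""
    n = min(len(t) for t in trials)
    mat = np.array([t[:n] for t in trials])   # trials x spikes
    jitter = mat.std(axis=0)
    return np.polyfit(np.arange(n), jitter, 1)[0]

# synthetic spike trains whose jitter grows along the train
rng = np.random.default_rng(0)
trials = [np.sort(rng.normal(np.arange(1, 21) * 0.05, 0.002 * np.arange(1, 21)))
          for _ in range(30)]
print(isi_cv(trials[0]), jitter_slope(trials))
```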


Results
Due to heterogeneity in the Gbar of ion channels across models, individual models exhibited variable effects of Gbar on SpTP. However, some global trends emerged:
● Initial spikes in the action potential train: SpTP negatively correlated with HCN and persistent sodium (Na_P) channels, while Kv3.1 showed a positive correlation. Transient sodium (Na_T) channels exhibited a non-monotonic relationship.
● Later spikes in the action potential train: SpTP negatively correlated with Na_P, whereas Kv3.1, K_SK, K_BK, and K_P showed a positive correlation.


Other channels, including K_P, K_T, K_M, K_D, and calcium channels (LVA, HVA), showed no significant impact on SpTP across trials.


Discussion

Previous studies have reported increased K_SK currents and reduced SpTP of later spikes in Fragile X Syndrome (FXS) [8]. Our findings corroborate this by demonstrating a positive correlation between K_SK Gbar and SpTP of later spikes, suggesting that K_SK upregulation may contribute to impaired temporal precision in FXS. Additionally, our study identifies potential therapeutic targets, such as Na_P channel blockade, which may help counteract the SpTP deficits observed in FXS. Further analysis of these models will help uncover the underlying mechanisms driving these correlations, shedding light on the role of ion channel dysfunction in neurodevelopmental disorders.





Acknowledgements
We thank NCBS, TIFR and Department of Atomic Energy, Government of India, under project identification No. RTI 4006 for funding. Special thanks to Dr. Deepanjali Dwivedi and Anzal KS for the raw experimental recordings. Thanks to NCBS animal house, Imaging facility, super computing facility at NCBS and members of Bhalla Lab.
References
[1] https://doi.org/10.1126/science.1149639
[2] https://doi.org/10.1103/PhysRevLett.80.197
[3] https://doi.org/10.1038/nature02058
[4] https://doi.org/10.1002/hipo.450030307
[5] https://doi.org/10.1523/JNEUROSCI.18-24-10464.1998
[6] https://doi.org/10.1126/science.275.5297.213
[7] https://doi.org/10.1016/j.heares.2015.06.014
[8] https://doi.org/10.1523/ENEURO.0217-19.2019
[9] https://doi.org/10.1016/j.neuron.2017.12.043


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P150: Network vs ROI Perspectives: Brain Connectivity Analysis using Complex Principal Component Analysis
Monday July 7, 2025 16:20 - 18:20 CEST
P150 Network vs ROI Perspectives: Brain Connectivity Analysis using Complex Principal Component Analysis

Puneet Kumar*1†, Alakhsimar Singh2†, Xiaobai Li1,3, Shella Keilholz4, Eric H. Schumacher5


1University of Oulu, Finland
2National Institute of Technology Jalandhar, India
3Zhejiang University, China
4Emory University, USA
5Georgia Institute of Technology, USA

†Equal Contribution
*Email: puneet.kumar@oulu.fi

Introduction: We implement Complex Principal Component Analysis (CPCA) [1] for brain connectivity analysis. It largely reproduces traditional Quasi-Periodic Patterns (QPP)-like activity [2] and handles tasks of various lengths, while QPP struggles with shorter tasks. We present network- and ROI-level observations for the Human Connectome Project (HCP) data, which includes four 15-min rest scans (TR = 0.72 s) and seven tasks (1 hour total) [3]. Our focus is on the Task-Positive Network (TPN), defined as the Dorsal Attention Network (DAN) plus the Fronto-Parietal Network (FPN), and the Default Mode Network (DMN). Our contributions are the CPCA implementation and a dual-level (network and ROI) analysis. The implementation code is at github.com/MIntelligence-Group/DBCATS.
Methods: The data was preprocessed using the Configurable Pipeline for the Analysis of Connectomes (C-PAC) [4], including motion and slice-timing correction, normalization to MNI space, and band-pass filtering (0.01–0.1 Hz). We focus on the working memory (0-back/2-back) task with 405 frames/run. Each run has eight 42.5 s task blocks (10 trials of 2.5 s), four 15 s fixation blocks, and 2 s stimuli followed by a 500 ms ITI. The DMN (36 ROIs), DAN (33 ROIs), and FPN (30 ROIs) were defined using the 7-network parcellation [5]. We adapted CPCA for fMRI by applying the Hilbert transform to introduce a 90° phase shift, capturing amplitude and phase. Seven principal components (PCs) were extracted to reconstruct the dominant activity patterns.
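A minimal numpy sketch of the CPCA step described above, assuming CPCA here means PCA on the Hilbert-transformed (analytic) signal; array shapes and names are illustrative, and the released code at github.com/MIntelligence-Group/DBCATS should be treated as the reference implementation.

```python
import numpy as np
from scipy.signal import hilbert

def cpca(X, n_components=7):
    """Complex PCA sketch: analytic signal per ROI via the Hilbert transform
    (90-degree phase shift), then eigendecomposition of the complex
    covariance. X is time x ROI, already band-passed."""
    Z = hilbert(X, axis=0)                 # complex analytic signal
    Z = Z - Z.mean(axis=0)
    C = (Z.conj().T @ Z) / Z.shape[0]      # Hermitian covariance
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1][:n_components]
    pcs = Z @ V[:, order]                  # complex PC time courses
    return pcs, V[:, order], w[order]

X = np.random.randn(405, 99)               # e.g., 405 frames, 99 ROIs
pcs, loadings, eigvals = cpca(X)
amplitude, phase = np.abs(pcs), np.angle(pcs)   # amplitude and phase patterns
```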
Results: Fig. 1(a) and 1(e) display Blood Oxygenation Level Dependent (BOLD) activation at the global network level for rest and task states. Correlation values between DMN and DAN are -0.99, and between DMN and FPN -0.91, as per Fig. 1(i) and 1(j). Fig. 1(b–d) depicts local ROI-level BOLD activation (from both left and right hemispheres of the brain) during rest, and Fig. 1(f–h) during task. In the rest state, FPN shows 440 positive and 618 negative correlations with DMN, and DAN shows 531 and 629. For the task state, FPN has 439 positive and 620 negative correlations with DMN, and DAN has 532 and 629. Comparing Fig. 1(k–n) indicates slightly shifted connectivity patterns from rest to task, reflecting changes in DMN, DAN, and FPN signals.
Discussion: At the network level, DMN shows anticorrelation with both DAN (-0.99) and FPN (-0.91), as depicted in Fig. 1(i,j). At the ROI level, 44% (1972) of DMN-TPN pairs are positively correlated, while 56% (2496) are negative, indicating local differences. Correlations become more negative from rest to task, though the changes are modest. Fig. 1(k–n) shows these changes, highlighting how brain connections adapt at the ROI level and exhibit task-dependent shifts. To our knowledge, we are the first to implement CPCA as a brain connectivity analysis method comparing rest and task. We aim to extend our implementation to other datasets and welcome feedback to refine our approach and drive further advancements.



Figure 1. Network-level and ROI-level BOLD time series for DMN, DAN, and FPN during rest (a–d) and task (e–h). Network-level correlation connectivity matrices (CCM) (i, j). ROI-level CCMs for DMN–DAN regions (k, l) and DMN-FPN regions (m, n) for rest and task. (a, e) show average PC1 activity at network level, while (b–d, f–h) show PC1 activity at ROI level, with different colors denoting different ROIs.
Acknowledgements
The authors gratefully acknowledge the collaboration with the CoNTRoL Lab and GSU/GT Center for Advanced Brain Imaging at Georgia Institute of Technology, USA, and the Keilholz Mind Lab at Emory University, USA. We thank the CMVS International Research Visit Program 2024 for funding and the University of Oulu, Eudaimonia Institute, and CSC Finland for support and computational resources.
References
[1] Bolt, T.,... (2022). A Parsimonious Description of Global Functional Brain Organization in Three Spatiotemporal Patterns. Nature Neuroscience, 25(8), 1093-1103.
[2] Abbas, A.,... (2019). Quasi-Periodic Patterns Contribute to Brain Functional Connectivity. Neuroimage, 191, 193-204.
[3] Van Essen, D. C.,... (2012). The Human Connectome Project. Neuroimage, 62(4), 2222-2231.
[4] Craddock, C.,... (2013). Towards Automated Analysis of Connectomes. Front Neuroinform, 42 (10).
[5] Yeo, B. T.,... & Buckner, R. L. (2011). The Organization of Human Cerebral Cortex. Journal of Neurophysiology.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P151: Between near and far-fields: influence of neuronal morphology and channel density on EEG-like signals
Monday July 7, 2025 16:20 - 18:20 CEST
P151 Between near and far-fields: influence of neuronal morphology and channel density on EEG-like signals

Paula T. Kuokkanen*1, Richard Kempter1,2,3, Catherine E. Carr4, Christine Köppl5

1Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany
2Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
3Einstein Center for Neurosciences Berlin, 10115 Berlin, Germany
4Department of Biology, University of Maryland College Park, College Park, MD 20742
5Department of Neuroscience, School of Medicine and Health Sciences, Research Center for Neurosensory Sciences and Cluster of Excellence “Hearing4all” Carl von Ossietzky University, 26129 Oldenburg, Germany

*Email: paula.kuokkanen@hu-berlin.de
Introduction

Both the near and far fields of extracellular neural recordings are well understood. The near field can be explained by models of ion channel activity in nearby compartments [1]. The far field can be approximated by current dipoles produced by the membrane currents of multicompartmental cells [2]. The dipole spanned between the dendrites and soma is typically assumed to be the basis of the electro-encephalography (EEG) signals of cortical pyramidal neurons [e.g. 3]; yet their somatic spikes can also be observed in the EEG [4]. Such potentials, measured relatively far from the source but not strictly in the far field, depend strongly on the morphology of the cell, its ion channel concentrations, and the electrodes' positions [5].

Methods
We simulate single multi-compartment cells with the NEURON and LFPy packages to study their 'mid-field' potentials. We systematically vary the neurons' simplified morphologies and use combinations of channel densities to compare the mid-field potentials with the dipole moments of the cells. In particular, we study the spatial limits of the far-field approximation as a function of cell properties. We verify our results using experimental data [6]: EEG-like single-cell recordings from the auditory nerve and the auditory brainstem Nucleus Magnocellularis in the barn owl.
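To make the comparison concrete, here is a small sketch (our own, not the study's code) of the two quantities at stake: the current-dipole moment computed from compartmental membrane currents, its far-field potential, and the direct point-source sum that remains valid in the 'mid field'.

```python
import numpy as np

def dipole_moment(positions, I_mem):
    """Current-dipole moment p(t) = sum_i r_i * I_i(t), from compartment
    positions (n x 3, in m) and transmembrane currents (n x T, in A)."""
    return positions.T @ I_mem             # 3 x T

def far_field_potential(p, r_vec, sigma=0.3):
    """Far-field (dipole) approximation: phi = (p . r_hat) / (4 pi sigma r^2)."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (r_hat @ p) / (4 * np.pi * sigma * r**2)

def point_source_potential(positions, I_mem, r_vec, sigma=0.3):
    """Direct summed point-source potential, valid at any distance; comparing
    it to the dipole approximation probes where 'far field' breaks down."""
    d = np.linalg.norm(positions - r_vec, axis=1)
    return (I_mem / d[:, None]).sum(axis=0) / (4 * np.pi * sigma)
```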

Results
We observe that, as expected, the dendritic-somatic dipole can determine the far and mid-fields in pyramidal cell-like morphologies. Unexpectedly, we observe that a dipole moment caused by branching axons can have an amplitude similar to the dendritic dipole in the mid and far fields. Furthermore, we show that under certain conditions a somatic spike — not necessarily related to any current dipole — can contribute to fields even at a distance of 10 mm from the soma. These results match our experimental results from the owl.
Discussion
Common assumptions about the distances from a neuronal source at which far-field conditions predominate may not hold. Depending on the neuron type, both the morphology and the differential densities of active ion channels across cell compartments can play a large role in shaping the fields at varying distances. Axonal arborizations, because they are activated nearly simultaneously by a single spike, can create a dipole [7] with a surprisingly large contribution to the far fields compared to the dendritic-somatic dipole. Furthermore, large somata with high densities of active currents can contribute to the extracellular field at distances of even 1 cm, violating the usual far-field assumption.



Acknowledgements
We thank Ghadi ElHasbani for helpful discussions, and Hannah Schultheiss for preliminary modeling.
This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) grant nr. 502188599.
References
[1] https://doi.org/10.1152/jn.00979.2005
[2] https://doi.org/10.1016/j.neuroimage.2020.117467
[3] https://doi.org/10.7554/eLife.51214
[4] https://doi.org/10.1016/j.neuroimage.2014.12.057
[5] http://doi.org/10.1097/00004691-199709000-00009
[6] https://doi.org/10.1101/2024.05.29.596509
[7] https://doi.org/10.7554/eLife.26106


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P152: Differing Strategies and Neural Representations in the Same Long-Term Information Encoding Task
Monday July 7, 2025 16:20 - 18:20 CEST
P152 Differing Strategies and Neural Representations in the Same Long-Term Information Encoding Task

Tomoki Kurikawa*1

1Department of Complex and Intelligence Systems, Future University Hakodate, Hakodate, Japan
*Email: kurikawa@fun.ac.jp

Introduction

Many cognitive tasks require maintaining information across trials, such as deterministic or probabilistic reversal learning tasks. In the deterministic reversal learning task [1], for instance, the pairing between sensory cues and behavioral outcomes reverses after a fixed number of trials. To perform such a task successfully, subjects have to track the number of elapsed trials to predict reversals accurately. However, the neural representation underlying such sustained memory processes remains poorly understood.



Methods
To uncover the representations underlying task performance, we built a simple recurrent neural network (RNN) model trained on a deterministic reversal learning task using machine learning techniques. We analyzed what representations emerged and how they were formed. In this task, there were two types of blocks, and depending on the block type, the network had to alternate between two outputs (Left and Right outputs). Each block consisted of 10 trials, and the block type switched every 10 blocks iteratively. Notably, no explicit contextual cues were provided—the network had to track trial counts internally. The model was trained to produce correct outputs across 10 consecutive blocks.


Results
We found that two distinct strategies emerged after learning 10 blocks: generalization and specification. In the generalization strategy, the network discovered the underlying rule of the task. Despite being trained on only 10 blocks, it could generalize and perform correctly beyond this limit. In contrast, in the specification strategy, the network was specifically trained to complete the 10-block task but was unable to extend its performance to a larger number of blocks, such as a 20-block task.
What representations underlie these different behaviors? Our analysis revealed that different neural representations support these distinct strategies. In the generalization strategy, certain neurons specifically encoded the number of trials within a block. Their activity gradually increased across trials, and when a threshold was reached, the network switched from one output to the other before resetting, indicating that these neurons tracked the number of trials within a block.
In contrast, in the specification strategy, no individual neurons encoded trial counts explicitly. Instead, this information was distributed across the neural population, implying a different mechanism for task execution.




Discussion
Our findings suggest that even when performing the same task, different strategies can emerge across subjects or animals. Depending on the adopted strategy, the way long-term information is encoded across trials also varies. This computational result provides new insights into how long-term information is represented in neural systems.





Acknowledgements
The present work is supported by Special Research Expenses at Future University Hakodate.
References
[1] https://doi.org/10.1523/ENEURO.0172-24.2024
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P153: Efficient estimation of mutual-information rates from spiking data by maximum-entropy models
Monday July 7, 2025 16:20 - 18:20 CEST
P153 Efficient estimation of mutual-information rates from spiking data by maximum-entropy models

Tobias Kühn*1, Gabriel Mahuas1, Ulisse Ferrari1

1Institut de la Vision, Sorbonne Université, CNRS, INSERM, Paris, France

*Email: tkuehn@posteo.de
Introduction

Neurons in sensory systems encode stimulus information in their stochastic spiking responses. This is quantified by the mutual-information rate (MIR), the mutual information between the activity of a spiking neuron and a (dynamical) stimulus divided by time. The computation of the MIR is challenging because it requires the estimation of entropies, in particular those conditional on the stimulus. This is difficult in the realm of correlated, poorly sampled data, for which estimates are prone to biases.
Methods
We here present the moment-based mutual-information-rate approximation (Moba-MIRA), a computational method to estimate the MIR. It is based on the idea of taking into account the statistics of the activity in single time bins exactly, and of capturing the correlations of the activity between bins by a statistical model featuring pairwise interactions, similar to the Ising model of statistical physics. This is similar to other maximum-entropy approaches employed in neuroscience; however, we do not restrict our spike counts to be binary, allowing the use of relatively large time bins. To estimate the entropies, we use a (Feynman) diagrammatic expansion in the covariances between the activities of all time bins [1,2,3].
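For orientation, here is a naive plug-in estimate of the MIR from repeated presentations of a frozen stimulus; this is the kind of baseline Moba-MIRA improves upon (it ignores the between-bin correlations that the pairwise model captures), and all names are ours.

```python
import numpy as np

def plugin_entropy(samples):
    """Plug-in (naive) entropy in bits from discrete samples."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def naive_mir(spike_counts, dt):
    """Naive single-bin MIR from repeats x time-bins spike counts under a
    frozen stimulus: total entropy minus the mean entropy across repeats at
    each bin (the stimulus-conditional 'noise' entropy), per unit time.
    Ignores temporal correlations, hence serves only as a baseline."""
    h_total = plugin_entropy(spike_counts.ravel())
    h_noise = np.mean([plugin_entropy(spike_counts[:, t])
                       for t in range(spike_counts.shape[1])])
    return (h_total - h_noise) / dt

rng = np.random.default_rng(0)
rate = rng.uniform(0.5, 5.0, 50)               # time-varying 'stimulus' rate
counts = rng.poisson(rate, size=(80, 50))      # 80 repeats x 50 bins
print(naive_mir(counts, dt=0.02), "bits/s")
```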

Results
We test our method on artificial data from a generalized linear model mimicking the activity of retinal ganglion cells and demonstrate that we approximate the exact result satisfactorily in the well-sampled regime. Importantly, our method introduces only a limited bias even for the number of samples attainable in experiments, about 60 to 100, allowing us to apply it to real data. Applying it to ex-vivo electrophysiological recordings from rat retinal ganglion cells (on and off), stimulated by black-and-white checkerboards or randomly moving bars, we obtain information rates of about 2 to 20 bits/s for every neuron, consistent with values from the literature.

Discussion
Tested on artificial data, Moba-MIRA outperforms the state-of-the-art method [4]: depending on the variant, it is clearly faster at comparable precision, or more precise at comparable speed (see Figure 1). We therefore believe that it can serve as an efficient and simple tool for the analysis of spiking data. In particular, extending it to populations of neurons is straightforward, so that it will allow the study of collective effects in addition to the effects brought about by neuronal dynamics.




Figure 1. a) Estimate of the MIR for artificial, retina-like data with state-of-the-art method by Strong et al. (histogram) and our approach. In the latter, we estimate the entropy conditional on the stimulus by a maximum-entropy model, for which we show the compute time in panel b.
Acknowledgements
We acknowledge ANR for financial support.
References
[1] Tobias Kühn and Moritz Helias. Expansion of the effective action around non-gaussian theories. Journal of Physics A: Mathematical and Theoretical, 51(37):375004, Aug 2018.
[2] Tobias Kühn and Frédéric van Wijland. Diagrammatics for the inverse problem in spin systems and simple liquids. Journal of Physics A: Mathematical and Theoretical, 56(11):115001, Feb 2023.
[3] Gabriel Mahuas, Olivier Marre, Thierry Mora, and Ulisse Ferrari. Small-correlation expansion to quantify information in noisy sensory systems. Phys. Rev. E, 108:024406, Aug 2023.
[4] Steven P. Strong, Roland Koberle, Rob R. de Ruyter van Steveninck, and William Bialek. Entropy and information in neural spike trains. Physical Review Letters, 80(1):197-200, 1998.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P154: Comparison of derivative-based and correlation-based methods to estimate effective connectivity in neural networks
Monday July 7, 2025 16:20 - 18:20 CEST
P154 Comparison of derivative-based and correlation-based methods to estimate effective connectivity in neural networks


Niklas Laasch1, Wilhelm Braun1,2, Lisa Knoff1, Jan Bielecki2, Claus C. Hilgetag1,3


1Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany


2Faculty of Engineering, Department of Electrical and Information Engineering,Kiel University, Kaiserstrasse 2, 24143, Kiel, Germany

3Department of Health Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA, 02215, USA



*Email: niklas.laasch@posteo.de
Introduction
Inferring effective connectivity in neural systems from observed activity patterns remains a challenge in neuroscience. Despite numerous techniques being developed, no universally accepted method exists for determining how network nodes mechanistically affect one another. This limits our understanding of neural network structure and function. We focus on purely excitatory networks of small to intermediate size with continuous dynamics to systematically compare different connectivity estimation approaches, aiming to identify the most reliable methods for specific network characteristics.
Methods
We used the Hopf neuron model with known ground-truth structural connectivity to generate synthetic neural activity data. Multiple connectivity inference algorithms were applied to reconstruct the system's connectivity matrix, including lagged cross-correlation (LCC) [1], derivative-based covariance analysis (DDC) [2], and transfer entropy methods. We varied parameters controlling bifurcation, noise, and delay distribution to test method robustness. Forward simulations using the estimated connectivity matrices were performed to evaluate each method's ability to recreate the observed activity patterns. Finally, we applied promising methods to empirical data from C. elegans.
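A minimal sketch of the LCC idea, assuming the estimate for each directed pair is the peak correlation over positive lags; this is our simplified reading, not the pipeline used in the study.

```python
import numpy as np

def lagged_cross_correlation(X, max_lag=20):
    """Directed connectivity sketch: for each pair (i -> j), take the peak
    correlation between x_i(t) and x_j(t + lag) over positive lags.
    X is nodes x time; the diagonal is excluded."""
    n, T = X.shape
    Xz = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            best = 0.0
            for lag in range(1, max_lag + 1):
                c = np.dot(Xz[i, :-lag], Xz[j, lag:]) / (T - lag)
                if abs(c) > abs(best):
                    best = c
            W[i, j] = best
    return W

W = lagged_cross_correlation(np.random.randn(10, 2000))
```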
Results
In sparse non-linear networks with delays, combining LCC with DDC analysis provided the most reliable connectivity estimation. LCC performed comparably to transfer entropy in linear networks but at significantly lower computational cost. Performance was optimal in small sparse networks and decreased in larger, denser configurations. With the Hopf model, LCC-based connectivity estimates yielded higher trace-to-trace correlations than derivative-based methods for sparse noise-driven systems. When applied to C. elegans neural data, LCC outperformed more computationally expensive methods, including a reservoir computing approach.
Discussion

Our findings demonstrate that a comparatively simple method - lagged cross-correlation - can reliably estimate directed effective connectivity in sparse neural systems despite spatio-temporal delays and noise. This has significant implications for biological research scenarios where only neuronal activity, but not connectivity or single-neuron dynamics, is observable. We provide concrete suggestions for effective connectivity estimation in such common research scenarios. Our work contributes to bridging the gap between observed neural activity and underlying network structure in neuroscience.



Acknowledgements
The authors would like to thank Kayson Fakhar, Alexander Schaum, Fatemeh Hadaeghi, Arnaud Messé, Gorka Zamora-López and Heike Siebert for useful comments.
References
[1] 10.1038/s41598-025-88596-y
[2] 10.1073/pnas.2117234119
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P155: Bayesian Modelling of Explicit and Implicit Timing
Monday July 7, 2025 16:20 - 18:20 CEST
P155 Bayesian Modelling of Explicit and Implicit Timing

Gianvito Laera*1,2,3,4, Matthew Vowels5,6,7, Tasnim Daoudi1,2, Richard Andrè1,2, Sam Gilbert8, Sascha Zuber2,3, Matthias Kliegel1,2,3, Chiara Scarampi2,3

1Cognitive Aging Lab (CAL), Faculty of Psychology and Educational Sciences, University of Geneva, Switzerland
2Centre for the Interdisciplinary Study of Gerontology and Vulnerability, University of Geneva, Switzerland
3LIVES, Overcoming Vulnerability: Life Course Perspective, Swiss National Centre of Competence in Research, Switzerland
4University of Applied Sciences and Arts Western Switzerland HES-SO, Geneva School of Health Sciences, Geneva Musical Minds lab (GEMMI lab), Geneva, Switzerland
5Institute of Psychology, University of Lausanne, Switzerland
6The Sense Innovation and Research Center, CHUV, Switzerland
7Centre for Vision, Speech and Signal Processing, University of Surrey, Switzerland
8Institute of Cognitive Neuroscience, University College London, London, United Kingdom


*Email: gianvito.laera@unige.ch

Introduction
Time perception supports adaptive behavior by allowing anticipation of critical events [1]. Explicit timing involves conscious estimation of durations (e.g., interval reproduction) and is typically modeled by Bayesian frameworks combining noisy sensory evidence with prior expectations [2]. Implicit timing emerges indirectly through tasks like foreperiod paradigms, relying on neural or motor strategies without temporal awareness. The two have historically been treated separately: explicit tasks engage cortico-striatal circuits, whereas implicit tasks involve cerebellar or parietal regions. We hypothesized that a unified Bayesian model with a shared internal clock parameter (θ) could bridge explicit and implicit timing abilities.
Methods
Forty-five psychology students performed four within-participant tasks: Explicit Motor (spontaneous motor response), Implicit Motor (simple reaction time), Explicit Temporal (interval reproduction), and Implicit Temporal (stimulus prediction). A hierarchical Bayesian model estimated an internal clock rate parameter (θ), reflecting subjective timing (θ=1 accurate; θ>1 slower; θ<1 faster clock), alongside parameters modeling task-specific variability and individual learning effects. Explicit tasks involved duration reproduction without feedback; implicit tasks involved temporal anticipation of a stimulus. Markov Chain Monte Carlo (MCMC) sampling via Stan was used for parameter estimation.
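A toy generative sketch of the internal-clock assumption behind θ; this is our own simplification, not the hierarchical Stan model, which we do not reproduce here. With θ < 1, subjective time runs fast and short intervals are reproduced as too long.

```python
import numpy as np

def reproduce_interval(true_duration, theta, noise_sd, rng):
    """Generative sketch: subjective time accumulates at rate 1/theta, so a
    participant with theta < 1 (fast clock) reproduces intervals as longer
    than they are. Gaussian noise models trial-to-trial variability."""
    return true_duration / theta + rng.normal(0.0, noise_sd)

rng = np.random.default_rng(0)
theta = 0.80                       # group-level estimate reported in the abstract
reps = [reproduce_interval(2.0, theta, 0.15, rng) for _ in range(100)]
print(np.mean(reps))               # ~2.5 s: overestimation of a 2 s interval
```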
Results
The Bayesian model indicated that participants' internal clocks ran faster than objective time (μθ ≈ 0.80), explaining interval overestimation at short durations and confirming a regression-to-the-mean effect. Individual differences in θ were significant (τθ ≈ 0.20); participants with fewer practice trials had internal clocks closer to accuracy, indicating efficient learning. Explicit tasks had higher variability than implicit tasks, confirming greater cognitive uncertainty. Implicit tasks showed typical foreperiod effects (longer expected intervals slightly slowed reaction times, a ≈ 0.3). Explicit and implicit timing shared moderate variance (r ≈ 0.45), and network analysis suggested that θ centrally bridged both timing domains.
Discussion
The findings support a unified Bayesian model, highlighting a shared internal clock mechanism underlying explicit and implicit timing. The internal clock parameter (θ) explained significant individual differences across tasks, supporting recent integrative views proposing partially overlapping neural substrates [3, 4]: a common cognitive mechanism (possibly striatal-thalamo-cortical circuits) provides duration information that is utilized differently across explicit versus implicit tasks. Task-specific differences also reflect additional factors (e.g., cognitive strategies, attention, and memory load) that future versions of the model should include. The model is also promising for explaining timing difficulties in clinical and aging populations.



Acknowledgements
None
References
[1] https://doi.org/10.1016/j.neuropsychologia.2012.08.017
[2] https://doi.org/10.1016/j.tics.2013.09.009
[3] https://doi.org/10.1016/j.cobeha.2016.01.004
[4] https://doi.org/10.1016/j.tins.2004.10.007
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P156: Population decoding of visual motion direction in V1 marmoset monkey : effects of uncertainty
Monday July 7, 2025 16:20 - 18:20 CEST
P156 Population decoding of visual motion direction in V1 marmoset monkey : effects of uncertainty

Alexandre C. Lainé*1, Sophie Denève1, Nicholas J. Priebe2, Guillaume S. Masson1, Laurent U. Perrinet1

1Institut de Neurosciences de la Timone, UMR 7289, CNRS - Aix-Marseille University, Marseille, France.
2Section of Neurobiology, School of Biological Sciences, University of Texas at Austin, Austin, TX, USA.

*Email: alexandre.laine@univ-amu.fr
Introduction

Studying the internal representation of information in the primary visual cortex (V1) is crucial to understanding how we perceive the external world. Research on 2D motion direction in non-human primates [1,2,3], in particular with naturalistic stimuli like MotionClouds [4], reveals substantial diversity and multiple mechanisms within the neuronal population [5]. This project aims to examine how a large population of V1 neurons encodes stimulus direction by explicitly titrating the precision in the orientation and spatial frequency domains.

Methods
Activity of several hundred neurons was recorded using Neuropixels 2.0 technology [6] in area V1 of an anesthetized marmoset monkey while MotionClouds were presented for eight directions and two precision levels. We use a decoding method to analyze the representation of motion direction in marmoset V1, focusing on the effects of uncertainty on the population code. The decoding method optimizes the weights of a logistic regression to achieve optimal decoding accuracy on a training set. Training can be conducted (1) on a broad time window, (2) by applying temporal generalization [7], or (3) after reducing dimensionality with dPCA [8].
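A compact sketch of the temporal-generalization decoding described above, using scikit-learn's logistic regression; the train/test split and array shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def temporal_generalization(X, y):
    """Train a direction decoder at each time bin and test it at every other
    bin. X is trials x neurons x time bins, y holds direction labels.
    A diagonal-only score pattern implies a dynamic code; a square pattern
    implies a stable one."""
    n_t = X.shape[2]
    score = np.zeros((n_t, n_t))
    half = X.shape[0] // 2                 # simple train/test split
    for t_train in range(n_t):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[:half, :, t_train], y[:half])
        for t_test in range(n_t):
            score[t_train, t_test] = clf.score(X[half:, :, t_test], y[half:])
    return score

# toy data: 160 trials, 50 neurons, 12 time bins, 8 directions
rng = np.random.default_rng(0)
y = np.tile(np.arange(8), 20)
X = rng.standard_normal((160, 50, 12)) + y[:, None, None] * 0.1
print(temporal_generalization(X, y).round(2))
```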
Results
After training on broad windows, analysis of the optimised weights revealed two types of population representations: transient and sustained. These representations differ in their distributions across cortical layers, confirming earlier results obtained in another species [5], and are modulated by the level of orientation precision. The accuracy measured on the test set revealed, first, that a broad spatial frequency distribution leads to better decoding performance and, second, that the precision of the orientation is a critical factor in the representation of motion direction. Indeed, high precision in orientation leads to the aperture problem, and thus to ambiguity in the representation of motion direction. Temporal generalization confirms a stable representation. Projecting neuronal activity onto 10 dPCA components without affecting accuracy demonstrates that the information may be represented in a low-dimensional manifold.

Discussion
In summary, this decoding method clarifies how directional information is represented and modulated by precision in marmoset V1. The coexistence of transient and sustained representations indicates distinct functional roles across cortical layers. Temporal generalization confirms that the neuronal population maintains a stable encoding of direction. Reducing dimensionality while preserving precision implies that a small set of components can capture the essential features of neuronal activity, enabling the exploration of various projection methods to optimize decoding. Moreover, the results suggest that orientation precision could be a major factor in shaping the interplay between orientation and direction.




Acknowledgements
This work was supported by ANR-NSF CRCNS grant “PrioSens” N° ANR-20-NEUC-0002 attributed to G.S.M., N.J.P. and L.U.P., and by a doctoral grant from the French Ministry of Higher Education and Research, awarded by the Doctoral School 62 of Aix-Marseille University to A.L.
References
[1] https://doi.org/10.1113/jphysiol.1959.sp006308
[2] https://doi.org/10.1113/jphysiol.1968.sp008455
[3] https://doi.org/10.1523/JNEUROSCI.1335-12.2012
[4] https://doi.org/10.1152/jn.00737.2011
[5] https://doi.org/10.1038/s42003-023-05042-3
[6] https://doi.org/10.1126/science.abf4588
[7] https://doi.org/10.1016/j.tics.2014.01.002
[8] https://doi.org/10.7554/eLife.10989
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P157: Non-monotonic Subthreshold Information Filtering in a Coupled Resonator-Integrator System
Monday July 7, 2025 16:20 - 18:20 CEST
P157 Non-monotonic Subthreshold Information Filtering in a Coupled Resonator-Integrator System

Franquelin Lambert1

1Université de Moncton, Département de physique et d'astronomie
Introduction: Subthreshold dynamics play a key role in spike generation, and it is well known that some neurons exhibit a frequency preference when integrating subthreshold input – so-called resonators [1,2]. It has been shown, however, that despite the existence of subthreshold resonance, a single resonator neuron exhibits low-pass, i.e., monotonic, information filtering (as measured by the spectral coherence). In other words, in the subthreshold regime, band-pass impedance does not translate to band-pass information filtering. Instead, nonlinearities, such as spiking dynamics, are needed to create band-pass information transfer [3,4].



Methods: Here, we study a similar question in an electrically coupled pair of neurons. Our goal is to evaluate whether this resonance profile imparts non-trivial information filtering capabilities to the coupled system. We numerically simulate an electrically coupled integrate-and-fire/resonate-and-fire system in the subthreshold regime, and we investigate the stimulus-response spectral coherence function of the system under perturbation by coloured noise (an Ornstein-Uhlenbeck process).
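A minimal simulation sketch of the setup, assuming a leaky integrator electrically coupled to a subthreshold damped oscillator, an OU stimulus, and scipy's spectral coherence; all parameter values are illustrative, not the study's.

```python
import numpy as np
from scipy.signal import coherence

dt, T = 1e-4, 20.0
n = int(T / dt)
rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck stimulus (tau_s = 10 ms) driving the integrator
tau_s = 0.01
s = np.zeros(n)
for t in range(1, n):
    s[t] = s[t-1] - dt * s[t-1] / tau_s + np.sqrt(2 * dt / tau_s) * rng.normal()

# Subthreshold pair: leaky integrator v electrically coupled (conductance g)
# to a damped oscillator (u, q) with preferred frequency f0
v, u, q = np.zeros(n), np.zeros(n), np.zeros(n)
tau_v, b, f0, g = 0.02, -20.0, 10.0, 0.5
w0 = 2 * np.pi * f0
for t in range(1, n):
    v[t] = v[t-1] + dt * ((s[t-1] - v[t-1]) / tau_v + g * (u[t-1] - v[t-1]))
    u[t] = u[t-1] + dt * (b * u[t-1] - w0 * q[t-1] + g * (v[t-1] - u[t-1]))
    q[t] = q[t-1] + dt * (w0 * u[t-1] + b * q[t-1])

# Stimulus-response coherence of the integrator; a dip near f0 would mirror
# the non-monotonic information filtering reported in this abstract
f, C = coherence(s, v, fs=1/dt, nperseg=2**14)
```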

Results: For electrical coupling between a resonator and an integrator, we show that a Fano-like resonance profile appears in the impedance, i.e., a narrow, asymmetric peak with anti-resonance [5]. Moreover, we observe that the coherence function is non-monotonic, with a minimum around the preferred frequency of the other neuron.

Discussion: This challenges the claim that neurons require nonlinearities to exhibit band-pass information filtering properties. This new perspective places information filtering in the context of connection motifs in which a small number of resonators and integrators interact, rather than the context of individual neurons.





Acknowledgements
no acknowledgements
References
[1] Izhikevich, Eugene M. Dynamical Systems in Neuroscience. MIT Press, 2007.
[2] https://doi.org/10.1016/S0893-6080(01)00078-8
[3] https://doi.org/10.1109/TMBMC.2016.2618863
[4] https://doi.org/10.1007/s10827-015-0580-6
[5] https://doi.org/10.1088/0031-8949/74/2/020
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P158: A unified model for estimating short- and long-term synaptic plasticity from stimulation-induced spiking activity
Monday July 7, 2025 16:20 - 18:20 CEST
P158 A unified model for estimating short- and long-term synaptic plasticity from stimulation-induced spiking activity

Arash Rezaei1,2, Mojtaba Madadi Asl3,4, Milad Lankarany*1,2,5


1Krembil Brain Institute, University Health Network, Toronto, ON, Canada
2Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
3School of Biological Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
4Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran, Iran
5Center for Advancing Neurotechnological Innovation to Application (CRANIA), Toronto, ON, Canada



*Email: milad.lankarany@uhn.ca
Introduction

Abnormal brain activity is the hallmark of several brain disorders such as Parkinson’s disease, essential tremor, and epilepsy [1,2]. Stimulation-induced reshaping of the brain’s networks through neuroplasticity may disrupt neural activity as well as synaptic connectivity and potentially restore healthy brain dynamics. Synaptic plasticity has been the target of invasive therapies, such as deep brain stimulation [3-5]. Mathematical frameworks were able to estimate short-term [6,7] and long-term [8] synaptic dynamics separately. However, the characterization of both short and long-term synaptic plasticity from spiking activity is crucial for understanding the underlying mechanisms and optimization of spatio-temporal patterns of stimulation.


Methods
We developed a novel synapse model that integrates both short- and long-term plasticity into a unified framework, wherein the postsynaptic neuron behaves according to both plasticity mechanisms. In the proposed model, the postsynaptic neuron is driven at each step by both the STP synaptic current and the LTP synaptic weight. To induce short- and long-term synaptic responses, presynaptic spike trains were applied for durations of a few hundred milliseconds (STP experiment) and hundreds of seconds (LTP experiment), respectively, to a single postsynaptic neuron. For the STP experiment, a single presynaptic spike train was used, whereas the LTP experiment involved 1000 presynaptic inputs. For both experiments, depressing STP synapses were used.
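A toy sketch of one way to unify the two mechanisms, assuming Tsodyks-Markram-style short-term depression scaling the postsynaptic current and a pair-based STDP rule for the long-term weight. The abstract does not specify these exact rules, so every equation and value below is our assumption.

```python
import numpy as np

def run_unified_synapse(pre_times, post_times, dt=1e-3, T=1.2):
    """Unified synapse sketch: short-term depression (resources x, fixed
    utilization U) scales each presynaptic PSC, while pair-based STDP
    updates the long-term weight w; the delivered PSC is w * U * x."""
    n = int(T / dt)
    pre = {int(round(s / dt)) for s in pre_times}
    post = {int(round(s / dt)) for s in post_times}
    x = 1.0                                # STP resources (depressing synapse)
    U, tau_rec = 0.4, 0.5                  # utilization, recovery time constant
    w = 0.5                                # long-term (STDP) weight
    a_plus, a_minus, tau_stdp = 0.01, 0.012, 0.02
    last_pre, last_post = -1e9, -1e9
    psc = np.zeros(n)
    for t in range(n):
        now = t * dt
        x += dt * (1.0 - x) / tau_rec      # resource recovery
        if t in pre:
            psc[t] = w * U * x             # depressed, weighted PSC
            x -= U * x                     # deplete resources
            last_pre = now
            w -= a_minus * np.exp(-(now - last_post) / tau_stdp)  # post-before-pre
        if t in post:
            last_post = now
            w += a_plus * np.exp(-(now - last_pre) / tau_stdp)    # pre-before-post
        w = float(np.clip(w, 0.0, 1.0))
    return psc, w

# 20 Hz presynaptic train for 1.2 s, as in the STP experiment described above
psc, w_final = run_unified_synapse(np.arange(0.0, 1.2, 0.05), [0.26, 0.51])
```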
Results
Our results demonstrated that in the STP experiment, the unified model produced the same transient fluctuations in the membrane potential of the postsynaptic neuron as observed in the STP-only model. This is evident when comparing Fig. 1A and B, which show the same pattern of behavior in the postsynaptic membrane potential. In the LTP experiment, we observed a long-term distribution of synaptic weights similar to that of the model with only long-term synapses (Fig. 1C). However, depression was more pronounced in the unified model due to the concurrent influence of STP and LTP on the postsynaptic neuron. The number of synapses with lower weights increases with the addition of the depressive STP mechanism compared to the LTP-only model.
Discussion
These findings suggest that the integration of STP and STDP within a single synaptic framework can effectively capture both transient and long-lasting plasticity effects. Furthermore, such uniform modeling of STP and LTP enables the incorporation of various combinations of synaptic settings into a population of neurons. This can potentially enhance the biological plausibility and flexibility of the current stimulation-induced neural models.




Figure 1. Results of the STP and LTP experiments. A) Input spike train, neural and synaptic behavior of a model with only STP after stimulation. B) Behavior of the unified model with both STP and LTP after stimulation. The postsynaptic neuron was stimulated for 1200 ms with a 20 Hz firing rate and a depressing STP synapse (red lines: postsynaptic membrane potential; blue dotted lines: STP synaptic current).
Acknowledgements
NA
References
[1] https://doi.org/10.1016/j.neuron.2006.09.020
[2] https://doi.org/10.1371/journal.pcbi.1002124
[3] https://doi.org/10.1002/ana.23663
[4] https://doi.org/10.1002/mds.25923
[5] https://doi.org/10.1016/j.brs.2016.03.014
[6] https://doi.org/10.1371/journal.pcbi.1008013
[7] https://doi.org/10.1371/journal.pone.0273699
[8] https://doi.org/10.1162/neco_a_00883
[9] https://doi.org/10.7554/eLife.47314
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P159: Neun, an efficient and customizable open-source library for computational neural modeling and biohybrid circuit design.
Monday July 7, 2025 16:20 - 18:20 CEST
P159 Neun, an efficient and customizable open-source library for computational neural modeling and biohybrid circuit design.

Angel Lareo*1, Alicia Garrido-Peña1, Pablo Varona1, Francisco B. Rodriguez1

1Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Universidad Autónoma de Madrid, Madrid, Spain

*Email: angel.lareo@uam.es
Introduction

Computational models are an effective and convenient tool for theoretically complementing experimental results obtained from living systems and thus understanding the brain's complex functions. Computational simulation of neural behavior has expanded the potential of modeling studies, and a wide range of tools is available in the neuroscience community for this purpose [1–6]. These tools have enhanced the ability of theoreticians to explain neural dynamics. Neun is a new, highly customizable, fast open-source framework designed for theoretical studies of single neurons, small circuits, and biohybrid circuit design [7–9].

Methods
Neun (github.com/gnb-UAM/neun) is an object-oriented, heavily templated C++ library, which ensures high-level abstraction and encapsulation. Neun's main components are: (i) ModelConcept, which provides the foundation for synapse and neuron models (e.g., the Hodgkin-Huxley and Izhikevich paradigms); (ii) SystemWrapper, which defines general elements such as parameters, variables, and numerical precision; (iii) Integrator, which offers methods such as Euler and Runge-Kutta for numerical integration; and (iv) DifferentialNeuronWrapper, which combines models and integrators for simulation. Neun also uses a straightforward method for equation-to-code parsing to add new models and aims to provide compatibility with existing tools through a Python API.
Results
As a complement to existing tools and databases, Neun provides built-in samples of well-known neuron and synapse models that can be easily adapted by the user for effective implementations. It can be used as a template for fast prototyping, since it offers boilerplate code for novel modelers. Users can then move from a black-box approach to the inside of the code. Moreover, the fact that the library is written in C++ makes it an attractive option for real-time applications (such as RTXI or embedded systems), as it demonstrates strong single-threaded computing performance even without parallelization. Neun has already been used in previous modeling studies [7–9] and has been tested for use in real-time experiments.
Discussion
We present Neun, an open-source C++ library for computational neural modeling and simulation, as a user-friendly complement and alternative to existing tools. Among the numerous tools for simulating neuron dynamics, there is a tendency toward increasing complexity in the code base, which limits accessibility, especially for beginners. We believe Neun strikes a convenient compromise between usability and efficiency. This can be ideal for researchers in neuroscience who do not necessarily have a background in computer science but are willing to learn progressively, and also for experimentalists who want to build biohybrid circuits from interacting living and model neurons and synapses.




Acknowledgements
This research was supported by grants PID2024-155923NB-I00, CPP2023-010818, PID2023-149669NB-I00,
PID2021-122347NB-I00 (MCIN/AEI and ERDF – “A way of making Europe”).
References
[1] https://doi.org/10.1007/s10827-006-7949-5
[2] https://doi.org/10.3389/neuro.11.011.2008
[3] https://doi.org/10.1038/srep18854
[4] https://doi.org/10.7554/eLife.47314
[5] https://doi.org/10.1007/s10827-016-0623-7
[6] https://doi.org/10.1016/j.neuron.2019.05.019
[7] https://doi.org/10.3389/fninf.2022.912654
[8] https://doi.org/10.1007/978-3-031-34107-6_43
[9] https://doi.org/10.1117/1.NPh.11.2.024308
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P160: Preservation of neural dynamics across individuals during cognitive tasks
Monday July 7, 2025 16:20 - 18:20 CEST
P160 Preservation of neural dynamics across individuals during cognitive tasks

Ioana Lazar*1, Mostafa Safaie1, Juan Alvaro Gallego1


1Department of Bioengineering, Imperial College London, London, UK


*Email: ioana.lazar20@imperial.ac.uk
Introduction

Different individuals from the same species have brains with a similar organisation that nonetheless differ in the details of their cellular architecture. Yet, despite these idiosyncrasies, the way in which neurons from the same region co-modulate their activity during a given motor task is remarkably preserved across individuals [1]. Such preserved neural population “latent dynamics” likely arise from behavioural similarity as well as species-specific constraints on network connectivity. Here we asked whether cognitive tasks that can be solved using different covert strategies could lead to more individual-specific latent dynamics.


Methods
We investigated the preservation of latent dynamics in the prefrontal cortex across macaque monkeys performing an associative memory task in which they had to select the target associated with an initial cue following a “working memory period” in which no information was presented [2]. We computed session-specific latent dynamics using principal component analysis and tested their preservation across individuals using both canonical correlation analysis, which tests for similarity in the geometrical properties of neural population activity, and dynamical systems approaches. We interpreted the differences in the preservation of latent dynamics based on the differences in decoding accuracy of task variables.
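A small sketch of the geometry-based preservation analysis, assuming session-specific PCA followed by CCA between the two sets of latent time courses (in the spirit of [1]); shapes and names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

def latent_similarity(A, B, n_latent=10):
    """Project each session's time x neurons activity onto its own leading
    PCs, then use CCA to measure how well the two sets of latent dynamics
    can be aligned; high canonical correlations indicate preservation."""
    La = PCA(n_components=n_latent).fit_transform(A)
    Lb = PCA(n_components=n_latent).fit_transform(B)
    cca = CCA(n_components=n_latent, max_iter=1000)
    Ua, Ub = cca.fit_transform(La, Lb)
    return np.array([np.corrcoef(Ua[:, k], Ub[:, k])[0, 1]
                     for k in range(n_latent)])

rng = np.random.default_rng(0)
shared = rng.standard_normal((500, 10))        # common latent time courses
A = shared @ rng.standard_normal((10, 60))     # monkey 1: 60 neurons
B = shared @ rng.standard_normal((10, 80))     # monkey 2: 80 neurons
print(latent_similarity(A, B))                 # near 1 for shared dynamics
```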
Results
Prefrontal cortex latent dynamics were less preserved across individuals than in previous studies of the motor system, especially during the working memory period, in which correlations were lower than during cue presentation and target selection. The level of preservation was strongly associated with how well the upcoming target's identity could be decoded, which varied across animals, hinting at potentially different cognitive strategies as the cause of the lower preservation. Finally, monkeys developed idiosyncratic fidgets that reflected their cognitive processes: removing components of the latent dynamics related to movement decreased both within-monkey decoding of task variables and the preservation of latent dynamics across monkeys.
Discussion
This study builds on previous work on the motor system to show that different individuals from the same species also produce preserved latent dynamics when engaged in the same cognitive task. When the decoding analysis suggested that monkeys were employing different cognitive strategies to solve the task, relying more on retrospective or prospective memory, the preservation of latent dynamics decreased, as would be expected if the latent dynamics reflected the underlying computations. Neural population latent dynamics can thus capture fundamental differences and similarities in neural computation across individuals during both sensorimotor and cognitive processes.





Acknowledgements

References
1. Safaie, M., Chang, J., Park, J., Miller, L. E., Dudman, J. T., Perich, M. G., & Gallego, J. A. (2023). Preserved neural dynamics across animals performing similar behaviour. Nature, 623, 765–771. https://doi.org/10.1038/s41586-023-06714-0
2. Tremblay, S., Testard, C., DiTullio, R. W., Inchauspé, J., & Petrides, M. (2022). Neural cognitive signals during spontaneous movements in the macaque. Nature Neuroscience, 26, 295–305. https://doi.org/10.1038/s41593-022-01220-4
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P161: Core-Peripheral Network Topology Facilitates Dynamic State Transitions in the Computational Modeling of Zebrafish Brain
Monday July 7, 2025 16:20 - 18:20 CEST


Dongmyeong Lee*1,3,Yelim Lee1,2, Hae-Jeong Park1,2,3



1Yonsei University College of Medicine, Seoul, South Korea

2BK21 PLUS Project for Medical Science, Yonsei University College of Medicine, Seoul, South Korea

3Center for Systems and Translational Brain Science, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, South Korea


*Email: dmyeong@gmail.com





Introduction



Understanding how structural network topology shapes large-scale neural dynamics is a fundamental challenge in neuroscience. In particular, core-peripheral network topology is a crucial property, where highly connected "core" regions serve as hubs for integrating information across the brain, while sparsely connected "peripheral" regions support localized processing. Although many studies have explored the influence of core-peripheral topology on brain function at the macro-scale, the relationship between core-peripheral connectivity and dynamic information processing at the cellular level remains an open question. In this study, we investigate the impact of core-peripheral connectivity on whole-brain neural dynamics in zebrafish using computational modeling by integrating cellular-resolution structural connectivity data with a large-scale spiking neural network model.
Methods
To achieve this, we reconstructed a cellular-resolution structural connectivity network and extended it to develop a large-scale spiking neural network model consisting of 50,000 neurons across 72 distinct brain regions in the zebrafish brain. By systematically varying core-peripheral connection probabilities and coupling constants in the computational model, we examined their effects on neural activity fluctuations.
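A minimal sketch of how the four connection-type probabilities might parameterize the model's adjacency matrix (sizes and probability values are illustrative assumptions, not taken from the study):

    import numpy as np

    def core_peripheral_adjacency(n_core, n_periph, p, seed=0):
        # p: probabilities for the four block types, rows = presynaptic,
        # e.g. {"cc": 0.3, "cp": 0.1, "pc": 0.05, "pp": 0.01}
        rng = np.random.default_rng(seed)
        n = n_core + n_periph
        P = np.empty((n, n))
        P[:n_core, :n_core] = p["cc"]   # core -> core
        P[:n_core, n_core:] = p["cp"]   # core -> peripheral
        P[n_core:, :n_core] = p["pc"]   # peripheral -> core
        P[n_core:, n_core:] = p["pp"]   # peripheral -> peripheral
        return rng.random((n, n)) < P   # boolean adjacency matrix

    A = core_peripheral_adjacency(200, 800,
                                  {"cc": 0.3, "cp": 0.1, "pc": 0.05, "pp": 0.01})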
Results
Examining the cellular connectivity data, we demonstrate that the zebrafish brain exhibits a distinct core-peripheral network structure, in which core regions play a critical role in dynamic signal propagation and network reconfiguration. Analysis of calcium imaging data revealed that the zebrafish brain dynamically transitions between multiple states, enabling adaptive and efficient information processing. Among the four connection types, i.e., peripheral-peripheral, core-peripheral, peripheral-core, and core-core, core-to-peripheral connections exhibited the highest functional fluctuations, closely mirroring experimentally observed calcium imaging data.

Discussion

These findings highlight that core-peripheral connectivity serves as a key structural mechanism regulating state transitions, optimizing the balance between network modularity and integration. This suggests that large-scale brain networks leverage core-peripheral topology to dynamically regulate state transitions and maintain optimal neural computation. By integrating experimental data with computational modeling, this study provides novel insights into how structural connectivity underlies large-scale neural computations and functional flexibility in the brain.






Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C200621711).
References
Chen, X., Mu, Y., Hu, Y., Kuan, A. T., Nikitchenko, M., Randlett, O., ... & Ahrens, M. B. (2018). Brain-wide organization of neuronal activity and convergent sensorimotor transformations in larval zebrafish. Neuron, 100(4), 876-890.

He, B. J., Zempel, J. M., Snyder, A. Z., & Raichle, M. E. (2010). The temporal structures and functional significance of scale-free brain activity. Neuron, 66(3), 353-369.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P162: Pavlovian Conditioning of a Superburst Generating Neural Network for High-precision Perception of Spatiotemporal Sensory Information
Monday July 7, 2025 16:20 - 18:20 CEST

Kyoung J. Lee*¹, Jongmu Kim², Woojun Park¹, Inhoi Jeong¹

¹ Department of Physics, Korea University, Seoul, Korea
² Department of Mechanical Engineering, Korea University, Seoul, Korea


*Email: kyoung@korea.ac.kr
Introduction

How the brain perceives, learns, and distinguishes different spatiotemporal sensory information remains a fundamental yet largely unresolved question in neuroscience [1]. This study demonstrates how an initially random network of Izhikevich neurons can learn, encode, and differentiate time intervals ranging from milliseconds to tens of milliseconds with high temporal precision using a Pavlovian conditioning framework [2]. Notably, our findings highlight the potential role of superbursts in sensory perception, offering new insights into how neural circuits process temporal information.


Methods
Our network model comprises excitatory and inhibitory neurons with synaptic weights evolving through dopamine-modulated spike-timing-dependent plasticity. The conditioning protocol involves sequential electrical stimulation of, for example, three neuron subpopulations (S0, S1, S2) with specific time intervals (Δt_1^cond., Δt_2^cond.), referred to as “target triplet stimulation.” Despite the presence of various distracting stimuli with different time intervals, the network successfully encodes the target stimulation pattern and later responds to it by generating a distinctive population burst—a neuronal spiking avalanche—which acts as a test gauge for perception.
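For reference, the membrane dynamics underlying such a network follow the Izhikevich model; below is a minimal vectorized Euler step (regular-spiking parameters shown; the dopamine-modulated STDP and the axonal conduction delays of the full model are omitted):

    import numpy as np

    def izhikevich_step(v, u, I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
        # v: membrane potential (mV), u: recovery variable, I: input current
        v = v + dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I)
        u = u + dt * a * (b * v - u)
        fired = v >= 30.0               # spike condition of the model
        v = np.where(fired, c, v)       # reset fired neurons
        u = np.where(fired, u + d, u)
        return v, u, fired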

Results
During conditioning, the initially random network evolves into a feedforward structure [3] (Fig. 1A), where three subpopulations (S0, red; S1, blue; S2, green) self-organize according to the imposed time intervals (Δt_1^cond., Δt_2^cond.), effectively encoding temporal information into the network morphology. With axonal conduction delays, the network generates superbursts, featuring multiple sub-burst humps, lasting tens of milliseconds (Fig. 1B). In a perception test, stimuli with varying time intervals and subpopulations produce distinct neuronal avalanches: for example, a network conditioned for Δt_1^cond. = Δt_2^cond. = 11 ms exhibits systematically varying burst patterns upon receiving different stimuli (Fig. 1B and 1C).



Discussion
These findings provide insight into how seemingly simple neural circuits can encode and process temporal information through structured population spiking activity. Perception in this system can utilize the shape of stimulus-triggered population bursts, allowing for superb temporal resolution (< 1 ms). Furthermore, incorporating axonal conduction delays enables the network to generate superbursts lasting tens of milliseconds, with intricate internal temporal structures, significantly enhancing its perceptual dynamic range. This learning framework can be extended to distinguish much more complex spatiotemporal sequences beyond the simple triplet examples explored in this study.





Figure 1. Fig. 1 Encoding different sets of (Δt_1^cond., Δt_2^cond.) into network morphology (A) and perceptual testing with various (Δt_1^test, Δt_2^test) combinations (B) and subpopulations (C) for the case of Δt_1^cond. = Δt_2^cond. = 11 ms. In (A), the colored crossbars mark the centroids of S0, S1, and S2, reflecting the topographic encoding of temporal information (six different cases are shown).
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00335928).
References
1.https://doi.org/10.1016/j.neuron.2020.08.020
2.https://doi.org/10.1093/cercor/bhl152
3.https://doi.org/10.1371/journal.pcsy.0000035
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P163: Distinct disinhibitory circuits differentially modulate the multisensory integration in a feedforward network
Monday July 7, 2025 16:20 - 18:20 CEST

Seung-Youn Lee*1,2, Kyujin Kang1,3, Yebeen Yoon1,3, Jae-Ho Han2,3, Hyun Jae Jang1


1Korea Institute of Science and Technology, Seoul, Republic of Korea
2Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
3Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea


*Email: seungyounlee@korea.ac.kr

Introduction

Multisensory integration is a fundamental neural process that combines simultaneously presented unisensory inputs into a unified perception. For effective multisensory processing, cross-modal integration in neural networks must be dynamically modulated across cortical and subcortical regions in vivo [1,2]. One such mechanism is the disinhibitory circuit, which gates local information flow by inhibiting other inhibitory neurons. However, it is unclear whether disinhibitory circuits modulate multisensory integration locally or via long-range projections [3,4]. Therefore, we investigated how distinct disinhibition architectures differentially modulate long-range cross-modal integration, such as between the primary auditory cortex (A1) and the visual cortex (V1) [5].


Methods
To test this, we developed a computational feedforward network model incorporating in vivo-recorded spike trains from A1 and V1. The model consists of two four-layer columns, each representing a different sensory modality, converging onto an output layer (LOUT). Neurons were modeled as single-compartment Hodgkin-Huxley-type neurons, capturing the electrophysiological properties of pyramidal (PYR), somatostatin-positive (SST+), and vasoactive intestinal polypeptide-positive (VIP+) neurons. The disinhibitory circuit was modeled such that VIP+ neurons inhibit SST+ neurons, which in turn inhibit PYR neurons. The first layer of each column received as inputs spike trains recorded in vivo from A1 and V1 during pure-tone and grating stimulation.

Results
We first assessed the role of disinhibitory circuits in multisensory integration. A network with disinhibition exhibited higher mutual information (MI_rate) between stimulus variables and the firing rates of LOUT than one without, indicating enhanced transmission of integrated information. To investigate how different disinhibition-mediated inhibitory circuits modulate multisensory integration, we differentiated SST+ inhibitory circuits into intra-columnar feedback (intra-FBI), intra-columnar feedforward (intra-FFI), and cross-columnar feedforward inhibition (cross-FFI). When we fed in vivo spike patterns into these models, we found that MI_rate was highest with intra-FFI, whereas MI for spike timing was highest with intra-FBI, implying distinct roles in neural coding.
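The MI_rate measure could be estimated along these lines (a simple plug-in estimator over discretized spike counts; the binning choice and function names are illustrative assumptions, not necessarily the estimator used in the study):

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def mi_rate(stim_labels, spike_counts, n_bins=8):
        # Discretize output-layer spike counts and estimate their mutual
        # information (in bits) with the stimulus condition labels
        edges = np.histogram_bin_edges(spike_counts, bins=n_bins)[1:-1]
        binned = np.digitize(spike_counts, edges)
        return mutual_info_score(stim_labels, binned) / np.log(2)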

Discussion
Our results demonstrate that disinhibitory circuits facilitate multisensory integration through dynamic modulation of long-range cross-modal interactions between A1 and V1. Specifically, our findings reveal that the intra-FFI circuit was associated with information carried by firing rates, whereas the intra-FBI circuit enhanced information encoded in spike timing. This suggests that distinct disinhibitory circuits selectively integrate multisensory information through different neural coding strategies. Taken together, these findings indicate that distinct disinhibitory network motifs dynamically modulate multisensory integration and may serve as a key mechanism in in vivo multisensory processing.





Acknowledgements
This research was supported by the KIST Institutional Program (2E33561) and the National R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2021R1C1C2012843). J.-H. Han was supported by the MSIT, Korea, under the ITRC support program (IITP-2025-RS-2022-00156225) supervised by the IITP and by the NRF grant (No. RS-2024-00415812).
References
1. https://doi.org/10.1038/ncomms12815
2. https://doi.org/10.1016/j.conb.2018.01.002
3. https://doi.org/10.1016/j.tins.2021.04.009
4. https://doi.org/10.1007/s10827-017-0669-1

5. https://doi.org/10.1016/j.neuron.2016.01.027
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P164: Event-Driven Financial Decision Making via Spiking Neural Networks: Neuromorphic-Inspired Approach
Monday July 7, 2025 16:20 - 18:20 CEST

Tae-hoon Lee1, Hoon-hee Kim*2
1 Department of Data Engineering, Pukyong National University, Busan, South Korea
2 Department of Computer Engineering and Artificial Intelligence, Pukyong National University, Busan, South Korea
*Email: h2kim@pknu.ac.kr
Introduction

Spiking Neural Networks (SNNs) are well-suited for financial decision-making due to their ability to capture temporal dynamics and process information in an event-driven manner. In volatile markets, price movements can be sudden and irregular, making asynchronous event-based processing critical for timely responses. SNNs naturally handle such inputs, modeling temporal patterns more effectively than traditional neural networks. In this study, we integrate SNNs with a Genetic Algorithm (GA) for feature selection and parameter optimization, and a Support Vector Machine (SVM) for decision-making. This pipeline leverages the adaptive, event-driven processing of SNNs to improve stock market prediction and trading decisions.

Methods
For our experiments, we used historical data from the top 20 S&P 500 stocks, encompassing bull, bear, and volatile market conditions. Price data were transformed into multiple technical indicators (e.g., moving averages, RSI). A GA then optimized the indicator parameters and selected the most predictive features. Next, the time-series features were encoded into spike trains via rate coding with a fixed time window and fed into an SNN composed of Leaky Integrate-and-Fire neurons. The SNN processed the temporal patterns, and its spiking outputs were summarized (e.g., as spike counts over time). These features were then passed to an SVM for final classification of the trading action.
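A minimal sketch of the encoding and readout stages (window length, weights, and the spike-count summary feature are illustrative assumptions):

    import numpy as np
    from sklearn.svm import SVC

    def rate_encode(features, t_window=100, seed=0):
        # Poisson-style rate coding: each feature, scaled to [0, 1],
        # sets the firing probability per time bin of one input train
        rng = np.random.default_rng(seed)
        rates = np.clip(features, 0.0, 1.0)
        return rng.random((len(features), t_window)) < rates[:, None]

    def lif_spike_count(spikes, w, tau=20.0, v_th=1.0):
        # Leaky integrate-and-fire readout; the spike count summarizes
        # the SNN response and becomes one feature for the classifier
        v, count = 0.0, 0
        for t in range(spikes.shape[1]):
            v += -v / tau + float(w @ spikes[:, t])
            if v >= v_th:
                count, v = count + 1, 0.0
        return count

The spike counts of many such units, collected per trading window, would then be stacked into a feature matrix and passed to, e.g., SVC(kernel="rbf") for the final trading decision.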

Results
In backtesting, the SNN-based framework surpassed the buy-and-hold strategy across multiple market regimes, demonstrating higher predictive accuracy and stronger trading returns. This performance gap was especially evident during volatile market phases, where passive buy-and-hold approaches often struggled to adapt. By capitalizing on the event-driven nature of spiking neurons, our system reacted swiftly to abrupt price swings, refining its signals in real time and thus helping to mitigate slippage and transaction costs. Overall, these findings highlight the neuromorphic framework’s resilience and effectiveness, suggesting it can outperform simpler investment strategies under diverse market conditions.

Discussion
This work demonstrates the potential of neuromorphic computing in financial decision-making. The SNN-based approach offers adaptive, event-driven processing suited to volatile markets, while its reservoir-like architecture (with only the output classifier trained) reduces computational complexity. In addition, the model exhibits robustness to noisy market data and regime shifts. However, limitations remain: the approach relies on a predefined rate-coding scheme, and the hybrid design combining a spiking network with an external classifier is not end-to-end. Future research can explore improved encoding methods and end-to-end spiking models, as well as deployment on neuromorphic hardware for faster, energy-efficient execution.





Figure 1. Flowchart: Stock data and optimized technical indicators are converted into spike trains for the spiking neural network (SNN), whose outputs feed into a classifier for trading decisions
Acknowledgements
This study was supported by the National Police Agency and the Ministry of Science, ICT & Future Planning (2024-SCPO-B-0130), the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00242528), and the National Program for Excellence in SW, supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation) in 2025 (2024-0-00018).
References
[1] Maass, W. (1997). Networks of Spiking Neurons: The Third Generation of Neural Network Models. Neural Networks, 10(9), 1659–1671. https://doi.org/10.1016/S0893-6080(97)00011-7
[2] Holland, J. H. (1992). Genetic algorithms. Scientific American, 267(1), 66–73. http://www.jstor.org/stable/24939139
[3] Lin, X., Yang, Z., & Song, Y. (2011). Intelligent stock trading system based on improved technical analysis and Echo State Network. Expert Systems with Applications, 38(9), 11347–11354. https://doi.org/10.1016/j.eswa.2011.03.001


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P165: Incorporation of Neuromodulation into Predictive-Coding Spiking Neural Networks
Monday July 7, 2025 16:20 - 18:20 CEST

Yelim Lee1, Dongmyeong Lee1, Hae-Jeong Park*1,2,3,4


1Department of Nuclear Medicine, Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul, Republic of Korea
2Department of Nuclear Medicine, Severance Hospital, Seoul, Republic of Korea
3Department of Cognitive Science, Yonsei University, Seoul, Republic of Korea
4Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea


*Email: parkhj@yonsei.ac.kr

Introduction

Neuromodulation is often considered to enhance the selection mechanism in the brains of living organisms, prioritizing processing of inputs relevant to their goals. By modifying effective synaptic strength and altering firing properties, neuromodulators engage various cellular mechanisms, leading to a dynamic reconfiguration of neural circuits. Adjusting a target neuron’s excitability is one mechanism for enabling attentional effects. This research explores how this mechanism enhances predictive coding and learning in a spiking neural network (SNN) with two-compartment neurons, focusing on classification ability and internal representations in hidden layers with top-down signals.

Methods
The network includes one input layer, one output layer, and three fully connected hidden layers with feedback and feedforward connections. The dynamics of the hidden neurons are based on the Adaptive Leaky Integrate-and-Fire (ALIF) model from Zhang and Bohte's previous work [4]. The dendritic compartment of the hidden neurons integrates inputs from higher regions, and the somatic compartment integrates inputs from lower areas. To implement the neuromodulation effect on the hidden neurons, we introduced a new top-down attention connection from the higher layer to the lower layer. This adjustment enables modification of a target neuron's excitability by dynamically altering its baseline firing threshold. We used spiking MNIST images as input data, modifying the original MNIST dataset to provide spiking input over time. Additionally, we created multiple variations of the MNIST dataset, introducing noise or making occluded or overlapped images, to provide ambiguous contexts.
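A minimal sketch of an ALIF update with such a top-down excitability term (the modulation term gamma * top_down is our reading of the mechanism; all names and constants are illustrative assumptions):

    import numpy as np

    def alif_step(v, b, I_soma, top_down, dt=1.0, tau_m=20.0, tau_a=200.0,
                  v_th0=1.0, beta=0.5, gamma=0.3):
        # b: threshold adaptation variable; top_down lowers the effective
        # baseline threshold, increasing the neuron's excitability
        v = v + dt / tau_m * (-v + I_soma)
        v_th = v_th0 + beta * b - gamma * top_down
        spike = v >= v_th
        v = np.where(spike, 0.0, v)
        b = b + dt / tau_a * (-b) + spike.astype(float)
        return v, b, spike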
Results
We performed image classification tasks with the MNIST dataset, achieving high accuracy on both the original set and highly noisy test sets. We analyzed the uncertainty of output neurons by tracking their membrane potential for each digit class, noting increased firing for the correct class despite initial uncertainty. To assess predictive coding, we evaluated each hidden layer's internal representation by decoding its spiking activity. This involved presenting no inputs or half-occluded inputs while clamping the output neurons' membrane potential to a specific class. The results showed successful digit representation in the spiking activities, especially with the modulation weights applied, compared to the previous model.
Discussion
Clarifying important information in uncertain contexts improves with appropriate attention and prediction. This study suggests that neuromodulation enhances hierarchical encoding and learning in SNN during ambiguous scenarios. The model maintained high classification accuracy even in noisy and occluded conditions, and the internal representation, along with reduced uncertainty of output neurons, aligns with predictive coding principles, where top-down modulation refines internal representations.



Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C200621711).
References
1. Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1), 193-222.
2. Marder, E. (2012). Neuromodulation of neuronal circuits: back to the future. Neuron, 76(1), 1-11.
3. Thiele, A., & Bellgrove, M. A. (2018). Neuromodulation of attention. Neuron, 97(4), 769-785.
4. Zhang, M., & Bohte, S. M. (2024). Energy optimization induces predictive-coding properties in a multicompartment spiking neural network model. bioRxiv, 2024-01.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P166: Leveraging neural modeling of channelopathies to elucidate neural mechanisms underlying neurodevelopmental disorders
Monday July 7, 2025 16:20 - 18:20 CEST

Molly Leitner*1, Roman Baravalle1, James Chen1, Timothy Fenton3, Roy Ben-Shalom3, Salvador Dura-Bernal1,2


1Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
2Center for Biomedical Imaging & Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA
3Neurology Department, University of California Davis, Davis, CA, USA

*Email: molly.leitner@downstate.edu
Introduction

Neurodevelopmental disorders (NDDs), such as epilepsy, autism spectrum disorder, and developmental delays, present with considerable clinical variability and often impair social interactions, speech, and cognitive development. A key feature of these disorders is an imbalance in excitatory/inhibitory (E/I) input, which disrupts neuronal circuit function during development. Brain channelopathies, where neuronal ion channel activity is altered, provide an ideal model for studying E/I imbalance, as their effects can be directly linked to neuronal excitability. Ion channels are crucial in generating electrical activity in neurons, and disruptions to this activity are strongly associated with NDDs [1].

Methods
Studying channelopathies at the single-cell level is well established; however, investigating the impact of specific channel mutations on neuronal circuits requires more complex approaches. By utilizing a previously developed primary motor cortex (M1) model built using NetPyNE and NEURON, we employ large-scale, highly detailed biophysical neuronal simulations to examine how channel mutations influence individual and network neuronal activity [2].


Results
These simulations offer a mechanistic understanding of how channelopathies contribute to E/I imbalance and the pathology of NDDs. Through the M1 cortical column simulation, we measure the effects of biophysical changes in ion channels on network excitability and neuronal firing patterns, providing insights into the pathophysiology of simulated channelopathies.
Discussion
This model not only serves as a tool for investigating specific channelopathy cases but also enables the exploration of pharmacological agents aimed at restoring E/I balance. Ultimately, this approach will enhance our understanding of targeted therapeutic strategies for alleviating disease symptoms and may uncover novel treatments with clinical potential.




Acknowledgements
This work was supported by the Hartwell Foundation through an Individual Biomedical Research Award. The authors gratefully acknowledge the foundation’s commitment to innovative pediatric research and its generous support of our project.
References
1. Spratt PWE, Ben-Shalom R, Keeshen CM, Burke KJ Jr, Clarkson RL, Sanders SJ, Bender KJ. The Autism-Associated Gene Scn2a Contributes to Dendritic Excitability and Synaptic Function in the Prefrontal Cortex. Neuron. 2019 Aug 21;103(4):673-685.e5. doi: 10.1016/j.neuron.2019.05.037.
2. Dura-Bernal S, Neymotin SA, Suter BA, Dacre J, Moreira JVS, Urdapilleta E, Schiemann J, Duguid I, Shepherd GMG, Lytton WW. Multiscale model of primary motor cortex circuits predicts in vivo cell-type-specific, behavioral state-dependent dynamics. Cell Rep. 2023 Jun 27;42(6):112574. doi: 10.1016/j.celrep.2023.112574.








Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P167: Active NMDARs expand input rate sensitivity into high-conductance states
Monday July 7, 2025 16:20 - 18:20 CEST

Movitz Lenninger*1, Pawel Herman1, Mikael Skoglund1, Arvind Kumar1

1 School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
*Email: movitzle@kth.se

Introduction

A single cell has thousands of synapses distributed across its surface, predominantly along the dendritic tree [1]. Thus, in an active state, thousands of inputs can target a single cell, leading to what is known as the high-conductance state [2]. During such states, both the input resistance and the effective membrane time constant are markedly reduced [3]. Paradoxically, high-conductance states can also lead to a reduction of postsynaptic activity [4,5]. Here, we show, using single-cell simulations of thick-tuft layer 5 (TTL5) pyramidal cells, that the voltage dependence of NMDA receptors (NMDARs), a ubiquitous feature in the brain, can increase excitability in high-conductance states – providing sensitivity to a larger range of inputs.


Methods
We simulated a previously published reconstructed morphology of a rat TTL5 pyramidal cell [6]. We randomly distributed 5000 excitatory and 2500 inhibitory synapses uniformly according to the membrane surface areas of the dendritic segments (Figure 1a). Inputs were sampled from independent Poisson processes. In all cases, we optimized the inhibitory input rate to keep the somatic potential fluctuating around -60 mV. To study the role of active NMDARs, we considered three scenarios: synapses contain (1) only AMPA receptors (AMPARs), (2) both AMPARs and active NMDARs, or (3) both AMPARs and passive NMDARs. Unless otherwise stated, we used an NMDA-AMPA ratio of 1.6. In all cases, the integrated conductance per input was normalized to ~5.9 nS∙ms.
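The voltage dependence that distinguishes active from passive NMDARs is commonly modeled with the Jahr-Stevens magnesium-block factor; a minimal sketch (constants are the standard published values, not necessarily those used in this study):

    import numpy as np

    def nmda_gating(v, mg=1.0):
        # Voltage-dependent Mg2+ unblock (Jahr & Stevens 1990); v in mV
        return 1.0 / (1.0 + mg / 3.57 * np.exp(-0.062 * v))

    def synaptic_current(v, g_ampa, g_nmda, e_rev=0.0, active_nmda=True):
        # "Passive" NMDARs are approximated here by freezing the gating
        # factor at its value near the target somatic potential of -60 mV
        s = nmda_gating(v) if active_nmda else nmda_gating(-60.0)
        return (g_ampa + g_nmda * s) * (e_rev - v)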

Results
First, we compare the input resistances across three input conditions. The input resistance decreases with increasing inputs for all three synaptic types but is consistently larger with active NMDARs (Figure 1b). Second, we compare the output firing rates (FRs) across a large range of inputs. For low and intermediate inputs, the output FRs are similar across all synaptic types (Figure 1c). However, for high inputs, output gain is only maintained with active NMDARs. Furthermore, the coefficient of variation of the interspike intervals is typically higher for active NMDARs, indicating more irregular firing (Figure 1d). Third, varying the NMDA-AMPA ratio reveals that this is a graded property of active NMDARs (Figure 1e-f).

Discussion
A key property of dendrites is to integrate pre-synaptic inputs. Active conductances can significantly alter the summation compared to passive dendrites [1]. Previous studies have, for example, linked active NMDARs to increased sequence discrimination [7] and increased coupling between tuft and soma [8]. Our work suggests active NMDARs might also be crucial for maintaining large postsynaptic activity under high input conditions, expanding the range of input sensitivity. Our work does not exclude the possibility of intrinsic voltage-gated ion channels further contributing to increased excitability under presynaptic activity [9]. It remains to be studied how such intrinsic conductances might interact with active NMDARs.




Figure 1. a) Morphology of cell with 500 randomly distributed synapses. b) Estimated input resistances during three input conditions. c) Input-output transfer function of firing rates (lines). Shaded areas show the standard deviation (across bins of 1 second). d) CVs of the ISIs. Color codes in panels c-d) same as in b). e-f) Output firing rates and CVs for a range of NMDA-AMPA ratios with active NMDARs.
Acknowledgements
N/A
References
[1] https://doi.org/10.1146/annurev.neuro.28.061604.135703
[2] https://doi.org/10.1038/nrn1198
[3] https://doi.org/10.1073/pnas.88.24.11569
[4] https://doi.org/10.1523/JNEUROSCI.3349-03.2004
[5] https://doi.org/10.1103/PhysRevX.12.011044
[6] https://doi.org/10.1371/journal.pcbi.1002107
[7] https://doi.org/10.1126/science.1189664
[8] https://doi.org/10.1038/nn.3646
[9] https://doi.org/10.1016/J.NEURON.2020.04.001








Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P168: Evaluating Effective Connectivity and Control Theory to Understand rTMS-Induced Network Effects
Monday July 7, 2025 16:20 - 18:20 CEST

Riccardo Leone*1,2,3, Michele Allegra4, Xenia Kobeleva1,2
1 Computational Neurology Group, Ruhr University Bochum, 44801, Bochum, Germany.
2 Faculty of Medicine, University of Bonn, 53127, Bonn, Germany.
3 German Center for Neurodegenerative Diseases (DZNE), 53127, Bonn, Germany.
4 Padova Neuroscience Center, University of Padova, 35129, Padova, Italy

* Email: riccardoleone1991@gmail.com
Introduction

Computational neuroscience might contribute to a better understanding of neurostimulation by modeling its effects on brain networks. Effective connectivity (EC) and EC-based network control theory could provide a theory-driven framework for elucidating neurostimulation-induced network effects [1]. We thus tested whether EC and control energy could explain changes in resting-state fMRI (rs-fMRI) metrics induced by repetitive transcranial magnetic stimulation (rTMS). We hypothesized that EC and control energy would outperform functional connectivity (FC) and structural connectivity (SC) in explaining rTMS effects.


Methods
Twenty-one subjects received inhibitory 1 Hz rTMS (20 min) at frontal, occipital, or temporo-parietal sites, with rs-fMRI acquired pre- and post-stimulation. Whole-brain EC was estimated using regression Dynamic Causal Modeling. Control energy from the stimulated node (i.e., the driver node) to each downstream target node was computed from the EC model. We quantified rTMS effects at the node level as pre- vs. post-stimulation changes in: i) FC with the driver region, ii) the amplitude of low-frequency fluctuations (ALFF), and iii) nodal FC strength with the whole brain. We correlated these changes with a series of pre-stimulation predictors: SC, FC, and EC between each target and the driver node, and the energy needed to control each target from the driver node.
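One common way to obtain such node-level control energies from an EC matrix is via the finite-horizon controllability Gramian; a minimal sketch (a simplification for illustration; the horizon, target state, and use of a pseudoinverse are assumptions, not necessarily the study's exact formulation):

    import numpy as np
    from scipy.linalg import expm

    def control_energy(A, driver, target, T=1.0, n_steps=200):
        # A: (n, n) effective-connectivity matrix of the linear system
        # dx/dt = A x + B u, with input restricted to the driver node
        n = A.shape[0]
        B = np.zeros((n, 1)); B[driver, 0] = 1.0
        W = np.zeros((n, n))                    # controllability Gramian
        for t in np.linspace(0.0, T, n_steps):
            eAt = expm(A * t)
            W += eAt @ B @ B.T @ eAt.T * (T / n_steps)
        xf = np.zeros(n); xf[target] = 1.0      # unit displacement of target
        return float(xf @ np.linalg.pinv(W) @ xf)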

Results
rTMS generally reduced whole-brain FC with each stimulated driver node, as well as ALFF and nodal FC strength, with frontal stimulation yielding the most widespread effects. EC and control energy showed significant correlations with the changes in FC with the driver node and in nodal FC strength. Nonetheless, significant associations of similar or greater magnitude were also observed with simple FC, thus failing to demonstrate a clear advantage of EC and EC-based control energy in evaluating rTMS-induced effects. Changes in ALFF were not significantly correlated with any pre-TMS variable.
Discussion
Contrary to our main hypothesis, EC and EC-based control energy did not provide significantly better explanations of 1Hz rTMS-induced changes compared to model-agnostic FC. Our results question the current utility of EC and EC-based control theory models for understanding the effects of 1-Hz rTMS on brain networks. Given the complex interplay of neurobiological processes induced by rTMS that are not directly linked to the network spread of TMS pulses (e.g., synaptic plasticity), future work should implement EC and EC-based control energy to explain the effects of simpler protocols of neurostimulation.




Acknowledgements

References
1. Manjunatha KKH, Baron G, Benozzo D, Silvestri E, Corbetta M, Chiuso A, et al. (2024) Controlling target brain regions by optimal selection of input nodes. PLoS Comput Biol 20(1): e1011274. https://doi.org/10.1371/journal.pcbi.1011274
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P169: Anesthesia modulates the system-wide contributions of identified head neurons in C. elegans
Monday July 7, 2025 16:20 - 18:20 CEST

Avraham Lepsky*¹, Andrew Chang², Chris Connor³, Chris Gabel³


¹ Graduate Program for Neuroscience, Boston University, Boston, United States
² Graduate Program in Physiology, Boston University, Boston, United States
³ Department of Biophysics, Boston University, Boston, United States


*Email: avil@bu.edu
Introduction

While anesthesia has similar effects in the brains of animals ranging from the nematode C. elegans to humans, the mechanism by which various anesthetic agents work remains largely unknown. C. elegans has been identified as a tractable model for studying anesthesia due to behavioral deficits that progress with increasing anesthetic concentration, genetic susceptibility analogous to mammals, and an annotated neuroconnectome [1]. Isoflurane is a volatile anesthetic that induces general anesthesia; previous work found that isoflurane anesthesia in C. elegans caused marked dyssynchrony of neuron dynamics (as measured by a decrease in the cumulative variance explained by the top 3 principal components of neuronal activity) [2].

Methods
We employed C. elegans worms expressing the NeuroPAL transgene, providing a fluorescent color map for identification of neurons within the known connectome of the C. elegans nervous system [3]. Using light-sheet microscopy performed with a dual inverted selective plane illumination microscope (DISPIM), we measured the activity of 120 individual head neurons of the NeuroPAL worms via fluorescence imaging of the calcium-sensitive GCaMP reporter. We imaged for 20 minutes at 2 Hz. We performed principal component analysis (PCA) on the measured activity dynamics of the 120 neurons, following previous attempts at ascribing a neural manifold to C. elegans behavior [4], with the added information of neuron identification.
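For illustration, the dyssynchrony measure and per-neuron PCA contributions could be computed along these lines (a minimal sketch; the z-scoring and the loading-magnitude definition are our assumptions, not necessarily the study's exact analysis):

    import numpy as np
    from sklearn.decomposition import PCA

    def top3_cumulative_variance(activity):
        # activity: (time, neurons) GCaMP traces; lower values indicate
        # the dyssynchrony reported under isoflurane
        z = (activity - activity.mean(0)) / activity.std(0)
        return PCA().fit(z).explained_variance_ratio_[:3].sum()

    def pca_magnitude(activity, n_components=3):
        # Per-neuron loading magnitude on the leading components, the
        # quantity compared across isoflurane concentrations
        z = (activity - activity.mean(0)) / activity.std(0)
        pca = PCA(n_components=n_components).fit(z)
        return np.linalg.norm(pca.components_, axis=0)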
Results
Analysis of neuronal activity across worms at various isoflurane levels identified 10 neurons with a statistically significant change in PCA magnitude between 0 and 2% isoflurane and 17 neurons between 0 and 4%. No obvious receptor or functional identity marker was shared by all statistically significant neurons.
Discussion
We identified a list of neurons whose contributions to the system’s activity are most significantly modulated by changing isoflurane concentration. Because the connectome of C. elegans has been established, the anatomical properties of the neurons can be compared to their functional properties to establish a mechanistic understanding of the systemic changes induced by isoflurane. Connectomic spiking neuron models and other biophysical models can then be used to make predictions linking the molecular and behavioral properties of anesthetic agents.




Acknowledgements
Thank you to the Graduate Program of Neuroscience, under the direction of Dr. Shelly J. Russek and Sandi Grasso, for providing such a nurturing community.
Funding was generously awarded through a T32 grant.
References

1. Rajaram, S., … & Morgan, P. G. (1999). A stomatin and a degenerin interact to control anesthetic sensitivity in Caenorhabditis elegans. Genetics, 153(4), 1673–1682.
2. Awal, M. R., … & Connor, C. W. (2020). The collapse of global neuronal states in C. elegans under isoflurane anesthesia. Anesthesiology, 133(1), 133.
3. Yemini, E., ... & Hobert, O. (2021). NeuroPAL: a multicolor atlas for whole-brain neuronal identification in C. elegans. Cell, 184(1), 272-288.
4. Kato, S., ... & Zimmer, M. (2015). Global brain dynamics embed the motor command sequence of Caenorhabditis elegans. Cell, 163(3), 656-669.


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P170: Competition between symmetric and antisymmetric connections in cortical networks
Monday July 7, 2025 16:20 - 18:20 CEST

Dong Li*1, Claus C. Hilgetag1,2

1Institut für Computational Neuroscience, Universitätsklinikum Hamburg-Eppendorf (UKE), 20246 Hamburg, Germany

2Department of Health Sciences, Boston University, 02215 Boston, USA


*Email: d.li@uke.de

Introduction

The pairwise correlation of neural activity directly and significantly influences neural network performance across various cognitive tasks [1, 2]. While tasks such as working memory require low correlation levels [3], others, like motor actions, rely on higher correlation levels [1]. These correlation patterns are highly sensitive to network structure and neural plasticity [4-6]. However, understanding how neural networks dynamically balance tasks with differing correlation demands, and how distinct brain networks are structurally optimized for specific functions remains a major challenge.


Methods
We simulate linear and spiking models to investigate the impact of symmetric and antisymmetric connections on neural network dynamics. The linear model, equipped with a control parameter that adjusts the relative intensity of these connections, captures the fundamental mechanisms that shape pairwise correlations and influence network performance across cognitive tasks. To quantify the competition between symmetric and antisymmetric connections, we introduce two indices, from global and local perspectives. Using these indices, we further examine how synaptic plasticity modulates the relative intensity of these connections. Finally, we employ the spiking model to explore how bio-plausible neural networks implement this competition.
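The underlying decomposition, together with one possible global index (ours for illustration; the study's indices may be defined differently):

    import numpy as np

    def decompose(W):
        # Unique split of any connectivity matrix: W = W_s + W_a
        W_s = 0.5 * (W + W.T)   # symmetric part
        W_a = 0.5 * (W - W.T)   # antisymmetric part
        return W_s, W_a

    def symmetry_index(W):
        # Global index in [-1, 1]: +1 purely symmetric connectivity,
        # -1 purely antisymmetric
        W_s, W_a = decompose(W)
        ns2, na2 = np.sum(W_s ** 2), np.sum(W_a ** 2)
        return (ns2 - na2) / (ns2 + na2)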
Results
Antisymmetric connections naturally reduce pairwise correlations, facilitating cognitive tasks that require maximal information processing, such as working memory. In contrast, symmetric connections enhance pairwise correlations, supporting other functions, such as enabling the network to generate reliable responses to external inputs. The competition between antisymmetric and symmetric connections can be easily modulated by spike-timing-dependent plasticity (STDP) with antisymmetric and symmetric kernels, respectively. In bio-plausible networks, this competition is particularly shaped by the structured, non-random organization of excitatory and inhibitory connections.
Discussion
Every connection matrix can be decomposed into symmetric and antisymmetric components with varying relative intensities. This work reveals how the competition between these components modulates neural correlations and facilitates distinct functions. Temporally, this competition is dynamically regulated by synaptic plasticity. Spatially, when compared with indirect experimental evidence, our analysis also allows discussion of the layer-specific distribution of these relative intensities. These findings provide a new perspective on how brain functions are segregated across both time and space.




Acknowledgements
This work was in part funded by DFG TRR-169 (A2) and SFB 936 (A1/Z3).
References
[1]https://doi.org/10.1038/nrn1888
[2] Von Der Malsburg, C. (1994). The correlation theory of brain function. In Models of neural networks: Temporal aspects of coding and information processing in biological systems (pp. 95-119). New York, NY: Springer New York.
[3]https://doi.org/10.1016/j.dcn.2025.101541
[4]https://doi.org/10.1016/0893-6080(94)00108-X
[5]https://doi.org/10.1126/science.1211095
[6]https://doi.org/10.1162/NECO_a_00451
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P171: Partial Information Decomposition of amplitude and phase stimulus encoding in oscillator models
Monday July 7, 2025 16:20 - 18:20 CEST

V. Lima*1, D. Marinazzo2, A. Brovelli1

1. Institut de neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France.
2. Faculty of Psychology and Educational Sciences, Department of Data Analysis, University of Ghent, Ghent, Belgium


*Email: vinicius.lima.cordeiro@gmail.com

Introduction

Synchrony between oscillatory activities is thought to be the primary mechanism enabling widespread cortical regions to route information [1]. Such a mechanism would require oscillations in a sending and a target area either to increase their amplitude while maintaining a stable phase relationship, or to shift their phase difference when a stimulus is presented [2]. Nonetheless, whether the “communication” established between a pair of areas can be used to encode stimulus-specific information remains unclear.

Methods
To address this question, we construct a whole-brain model in which nodes are connected using macaque structural connectivity [3], and their dynamics are governed by the Stuart-Landau (SL) model [4]. The SL model describes nonlinear oscillators near a Hopf bifurcation and models the evolution of both their amplitude and phase terms. In addition to enabling the characterization of interactions in terms of phase and/or amplitude, the distance to the Hopf bifurcation is controlled by a single parameter, a, which determines the stability of the oscillations: a < 0 leads to transient oscillations, whereas a ≥ 0 results in stable oscillations, allowing us to explore the role of both types of activity in stimulus encoding [5].

To disentangle phase and amplitude encoding in the model, we use the framework of partial information decomposition (PID) [6] to estimate the information that the phase and amplitude components of simulated neuronal activity uniquely carry about the stimulus. Briefly, for two nodes indexed by j and k, we consider the product of their amplitude terms A_jk, their phase difference φ_jk, and the stimulus S. The three-variable PID allows us to decompose their total mutual information I(S; A_jk, φ_jk) into terms representing how they encode the stimulus redundantly or synergistically, as well as the unique information contained in the amplitude and phase interactions. Additionally, this framework could be extended to study non-dyadic interactions by operating at the edge rather than the node level [7]. In this case, the PID is performed between two edge time series, each given by E_jk = A_jk e^{iφ_jk}, allowing us either to decompose the mutual information terms I(S; A_jk, A_ml), I(S; A_jk, φ_ml), and I(S; φ_jk, φ_ml), or to perform a multivariate PID.
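A minimal sketch of the coupled Stuart-Landau simulation and the extraction of the amplitude and phase terms entering the PID (the coupling form, noise level, and parameter values are illustrative assumptions):

    import numpy as np

    def simulate_sl(C, a, omega, g=0.1, dt=1e-3, T=10.0, sigma=0.01, seed=0):
        # C: structural connectivity; a < 0 gives transient oscillations,
        # a >= 0 stable ones; returns amplitude and phase of each node
        rng = np.random.default_rng(seed)
        n = C.shape[0]
        z = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        Z = np.empty((int(T / dt), n), dtype=complex)
        for i in range(Z.shape[0]):
            coupling = g * (C @ z - C.sum(1) * z)   # diffusive coupling
            dz = (a + 1j * omega) * z - np.abs(z) ** 2 * z + coupling
            z = z + dt * dz + sigma * np.sqrt(dt) * (
                rng.standard_normal(n) + 1j * rng.standard_normal(n))
            Z[i] = z
        return np.abs(Z), np.angle(Z)

From these outputs, A_jk(t) and φ_jk(t) can be formed for any node pair and passed, together with the stimulus label S, to a PID estimator.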


Results


In the whole-brain model, we found that even though the stimulus is generally encoded by the signals’ amplitude, in areas that are hierarchically far apart, the initial encoding in amplitude is later found in the phase relation between the two areas in a weaker but more persistent form. These effects highly depend on the nodes’ dynamics and are most favorable when they exhibit transient oscillations (a < 0).



Discussion
Introducing a scaling of the natural oscillation frequency also appeared to enhance the effect, suggesting that different time scales across the cortex may promote the establishment of functional coupling through phase synchrony [8].



Acknowledgements
None
References

1. https://doi.org/10.1016/j.neuron.2015.09.034
2. https://doi.org/10.1016/j.neuron.2023.03.015
3. https://doi.org/10.1093/cercor/bhs270
4. https://doi.org/10.1038/s41598-024-53105-0
5. https://doi.org/10.1016/j.tics.2024.09.013
6. https://arxiv.org/abs/1004.2515
7. https://doi.org/10.1038/s41593-020-00719-y
8. https://doi.org/10.1073/pnas.1402773111

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P172: Scalable Computational Modeling of Neuron-Astrocyte Interactions in NEST
Monday July 7, 2025 16:20 - 18:20 CEST

Marja-Leena Linne*1, Han-Jia Jiang2,3, Jugoslava Aćimović1, Tiina Manninen1, Iiro Ahokainen1, Jonas Stapmanns2,4, Mikko Lehtimäki1, Markus Diesmann2,4,5, Sacha J. van Albada2,3, Hans Ekkehard Plesser2,6,7
1Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
2Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
3Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
4Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
5Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
6Department of Data Science, Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
7Käte Hamburger Kolleg: Cultures of Research (c:o/re), RWTH Aachen University, Aachen, Germany

*Email: marja-leena.linne@tuni.fi
Introduction

Astrocytes play a key role in modulating synaptic activity and network dynamics, yet large-scale models incorporating neuron-astrocyte interactions remain scarce [1]. This study introduces a novel NEST-based [2] simulation framework to model tripartite connectivity, where astrocytes interact with both presynaptic and postsynaptic neurons, extending traditional binary synaptic architectures. By integrating astrocytic calcium signaling and astrocyte-induced synaptic currents (SICs), the model enables dynamic modulation of neuronal activity, offering insights into the role of astrocytes in neural computation.
Methods
Our implementation integrates astrocytic calcium dynamics and SICs within a scalable, parameterized framework. The model allows controlled modulation of astrocytic influence, capturing transitions between asynchronous and synchronized neuronal states. Simulation scalability was assessed through strong and weak scaling benchmarks, leveraging parallel computing for network performance evaluation. Strong scaling benchmarks tested performance under fixed model size while increasing computing resources. Weak scaling benchmarks examined proportional upscaling of model size and computational power. These benchmarks evaluated network connection times, state propagation efficiency, and computational cost across different neuron-astrocyte configurations.
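A minimal sketch of how such a neuron-astrocyte network might be set up with NEST's astrocyte support (model and synapse names follow the reference implementation [3] as we understand it and may differ across NEST versions; all sizes, probabilities, and parameters are illustrative assumptions):

    import nest

    nest.ResetKernel()
    neurons = nest.Create("aeif_cond_alpha_astro", 100)  # neurons that can receive SICs
    astrocytes = nest.Create("astrocyte_lr_1994", 25)    # Li-Rinzel Ca2+ dynamics

    # Presynaptic spikes drive astrocytic IP3/Ca2+ dynamics ...
    nest.Connect(neurons, astrocytes,
                 conn_spec={"rule": "pairwise_bernoulli", "p": 0.2},
                 syn_spec={"synapse_model": "tsodyks_synapse"})
    # ... and astrocytes feed slow inward currents (SICs) back to neurons
    nest.Connect(astrocytes, neurons,
                 conn_spec={"rule": "pairwise_bernoulli", "p": 0.2},
                 syn_spec={"synapse_model": "sic_connection"})

    nest.Simulate(1000.0)  # ms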
Results
Benchmark results show efficient parallel execution of the reference implementation [3]. Strong scaling benchmarks show that increasing computing resources reduces network connection and state propagation times. Weak scaling benchmarks reveal a moderate increase in communication time for processes like spike delivery and SIC delivery, yet overall performance remains robust against changes in model size and connectivity scheme. In this study, we validate the framework’s scalability to at least 1 million cells through benchmarking experiments, leveraging distributed computing for efficient simulation of large-scale neuron-glia networks.
Discussion
By providing a computationally accessible and reproducible tool for studying neuron-astrocyte interactions, this framework sets the stage for investigating glial contributions to synaptic modulation, network coordination, and their roles in neurological disorders. The integration of tripartite connectivity into NEST offers a versatile platform for modeling astrocytic regulation of neural circuits, advancing both fundamental neuroscience and applied computational modeling.



Acknowledgements
EU Horizon 2020 No. 945539 (Human Brain Project SGA3) to SJvA and M-LL. SGA3 Partnering Project (AstroNeuronNets) to JA and SJvA. EU Horizon Europe No. 101147319 (EBRAINS 2.0 Project) to SJvA and M-LL. HiRSE PS to SJvA. Research Council of Finland, Nos. 326494, 326495, 345280, and 355256, to TM, and 297893 and 318879 to M-LL. BMBF No. 01UK2104 (KHK c:o/re) to HEP.
References
[1] Manninen, T., Aćimović, J., Linne, M.-L. (2023). Analysis of Network Models with Neuron-Astrocyte Interactions. Neuroinformatics, 21(2), 375-406. https://doi.org/10.1007/s12021-023-09622-w
[2] Graber, S., Mitchell, J., Kurth, A.C., Terhorst, D., Skaar, J.E.W., Schöfmann, C.M., et al. (2024). NEST 3.8. https://zenodo.org/records/12624784
[3] Jiang, H.-J., Aćimović, J., Manninen, T., Ahokainen, I., Stapmanns, J., Lehtimäki, M., et al. (2024). Modeling neuron-astrocyte interactions in neural networks using distributed simulation. bioRxiv. https://doi.org/10.1101/2024.11.11.622953


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P173: Large-scale Neural Network Model of the Human Cerebral Cortex Based on High-spatial-resolution Imaging Data
Monday July 7, 2025 16:20 - 18:20 CEST

Chang Liu1, Dahui Wang*1, Yuxiu Shao*1

1School of Systems Science, Beijing Normal University, Beijing, China
*Email: wangdh@bnu.edu.cn (DW); shaoyx@bnu.edu.cn (YS)
Introduction

Large-scale brain models have received interest for their ability to probe complex dynamical phenomena. However, large-scale models guided by high-spatial-resolution imaging data remain largely underexplored. We develop a comprehensive large-scale model of the human cerebral cortex, utilizing the recently released data on receptor density[1] and white-matter connectivity[2]. Furthermore, we refine undirected white-matter connectivity into directed connectivity using tracer data[3]. Our model replicates the characteristic spatio-temporal patterns of whole-brain activities observed experimentally during the resting state, enabling a deeper exploration of the interplay between anatomical structure, dynamics, and potential functional roles.
Methods
Our network comprises about 60k vertices, each modeled as a microcircuit of coupled excitatory and inhibitory populations connected via AMPA, NMDA, and GABA synapses, exhibiting Wilson-Cowan type dynamics[4]. Intra-vertex connection strengths are proportional to receptor density[1]. Inter-vertex connections are derived from anatomical fiber data, obtained via dMRI tractography at vertex resolution[2], and averaged across 255 unrelated healthy individuals (Fig. 1A). Since this anatomical data is undirected, we redistribute fiber bundles between vertices using directed macaque neocortex tracer data[3].
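A minimal sketch of the per-vertex dynamics (a generic Wilson-Cowan step for illustration; the gain function, time constants, and the receptor-density scaling are assumptions standing in for the model's actual equations):

    import numpy as np

    def wc_step(rE, rI, I_ext, w, dt=1e-3, tauE=0.01, tauI=0.01):
        # One vertex: excitatory/inhibitory population rates; the weights
        # in w would be scaled by local receptor densities (e.g. wEE by
        # AMPA/NMDA density, wEI and wII by GABA density)
        f = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoidal gain
        drE = (-rE + f(w["EE"] * rE - w["EI"] * rI + I_ext)) / tauE
        drI = (-rI + f(w["IE"] * rE - w["II"] * rI)) / tauI
        return rE + dt * drE, rI + dt * drI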
Results
The simulation results demonstrate that the average firing rate (FR) across all vertices is around 3 Hz [5]. Interconnected vertices show a reduced correlation between FR and the excitatory-inhibitory receptor density ratio compared to independent vertices (Fig. 1B). The beta-band peak frequency exhibits a posterior-anterior gradient, which is disrupted by shuffling the spatial distribution of the AMPA-NMDA receptor ratio (Fig. 1C). The projections of power spectral density and FR onto the first principal component positively correlate with T1w/T2w (Fig. 1D, 1E). These findings align with experimental observations [6–8]. Moreover, asymmetric connectivity induces traveling waves, with sinks exhibiting higher FR than surrounding vertices (Fig. 1F).
Discussion
Our large-scale brain model with high-spatial-resolution not only introduces a novel approach to understanding the computational mechanisms of the brain but also offers critical insights into the neural dynamic mechanisms underlying cognitive dysfunction and mental disorders. However, our model still has some limitations: we directly assume synaptic strength is proportional to receptor density; estimate the directed-weighted connections based on coarse matching of macaque-human brain areas; and omit signal propagation delays. Future work will focus on simulating the information transmission across the cortex, exploring how this model can enhance our understanding of brain function and support the development of therapeutic strategies.



Figure 1. Fig 1: (A) Schematic. (B) Relationship between mean FR and E:I receptor density ratio. Blue: independent vertices. Red: interconnected vertices. (C) Dependence of beta-band peak frequency on the vertex's location along the posterior-anterior axis. Blue: original. Pink: shuffled density ratio. (D-E) Correlation with T1w/T2w of the model PSD PC1 map (D) and the model FR PC1 map (E). (F) Sinks displaying higher FR than surrounding vertices.
Acknowledgements
This work was supported by NSFC (No.32171094 to D.W., No.32400936 to Y.S.) and National Key R&D Program of China (2019YFA0709503 to D.W.) and International Brain Research Organization Early Career Award (to Y.S.).
References
1. https://doi.org/10.1038/s41593-022-01186-3
2. https://doi.org/10.1016/j.neuroimage.2020.117695
3. https://doi.org/10.1093/cercor/bhs270
4. https://doi.org/10.1523/JNEUROSCI.3733-05.2006
5. https://doi.org/10.1023/A:1011204814320
6. https://doi.org/10.7554/eLife.53715
7. https://doi.org/10.1016/j.neuron.2019.01.017
8. https://doi.org/10.1073/pnas.1608282113
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P174: Functional brain regions analysis using single-neuron morphology-driven reservoir network
Monday July 7, 2025 16:20 - 18:20 CEST

Yuze Liu, Linus Manubens-Gil*, Hanchuan Peng*

Institute for Brain and Intelligence, Southeast University, Nanjing, China

* Email: linus.ma.gi@gmail.com

* Email: h@braintell.org
Introduction

The brain operates through network topology across brain regions and the morphological diversity of neurons. Reservoir computing (RC), with its recurrent nonlinear mapping, can accomplish temporal tasks [1], enabling analysis of network function. Previous work constructed reservoirs using diffusion magnetic resonance imaging (dMRI)-derived connectivity matrices and showed that randomness in weight signs improves a network's memory capacity (MC) [2]. However, limitations persist due to the macroscopic scale of the connectome, leaving microscale neuronal contributions underexplored. We therefore established a reservoir using single-neuron full-morphology tracings of the mouse brain and analyzed its validity for exploring differences between functional regions using the MC task.

Methods
We used structural connectivity (SC) from [3]. The connectome data comprise 1,774 fully reconstructed mouse neurons registered to the Allen Mouse Brain Common Coordinate Framework (CCFv3) [4]. We used the hyperbolic tangent as the nonlinear mapping. The input signal was sampled randomly, and the target signal was the delayed input. We fitted the output to the target signal via ridge regression and quantified performance by the squared Pearson correlation coefficient. We constructed small-world networks with connection density approximating that of the SC. We selected functional brain regions, e.g., LGd (dorsal part of the lateral geniculate complex) and visual cortex regions, as input/output nodes. We adjusted the spectral radius to optimize connectivity weights for enhanced memory retention.
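An echo-state-style sketch of the MC task on a given connectivity matrix (the washout length, ridge penalty, and input weights are illustrative assumptions; input/output could be restricted to functional-region nodes as in the study):

    import numpy as np
    from sklearn.linear_model import Ridge

    def memory_capacity(W, u, max_delay=50, washout=200):
        # W: reservoir weights (spectral radius pre-adjusted); u: random
        # input sequence; requires washout >= max_delay
        n = W.shape[0]
        w_in = np.random.default_rng(0).standard_normal(n)
        x, X = np.zeros(n), []
        for ut in u:
            x = np.tanh(W @ x + w_in * ut)   # nonlinear reservoir update
            X.append(x.copy())
        X = np.array(X)[washout:]
        mc = 0.0
        for k in range(1, max_delay + 1):
            y = u[washout - k:len(u) - k]    # delayed input as target
            pred = Ridge(alpha=1e-4).fit(X, y).predict(X)
            mc += np.corrcoef(pred, y)[0, 1] ** 2
        return mc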
Results
We found that 1) based on uniform random connectivity weights, biologically wired networks with input-output nodes defined by functional regions slightly outperformed Watts-Strogatz small-world networks with random input-output nodes in the MC task, confirming that single-neuron-derived network topology is relevant for the establishment of memories in RC; and 2) we observed statistically significant differences in MC task performance for different thalamocortical integrations of sensory modalities across diverse spectral radii ρ (76% of tested ρ values, 19/25, 0.1 ≤ ρ ≤ 5.0, Δρ = 0.2; independent t-test and Mann-Whitney U test, p < 0.05), suggesting that the morphological specificity of neuronal connections may underlie functional specialization.
Discussion
This study establishes a microscale framework linking single-neuron connectome to network functionality. Future work integrating generative models for scaling up the network, spiking neuronal dynamics, and modality-specific tasks could further dissect latent determinants of regional brain function.




Acknowledgements
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 32350410413 awarded to LMG.
References
1. https://doi.org/10.3389/fams.2024.1221051
2. https://doi.org/10.1109/IJCNN60899.2024.10650803
3. https://doi.org/10.1016/j.celrep.2024.113871
4. https://doi.org/10.1038/s41586-021-03941-1

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P175: A Neurorobotic Framework for Exploring Locomotor Control Following Recovery from Thoracic Spinal Cord Injury
Monday July 7, 2025 16:20 - 18:20 CEST
P175 A Neurorobotic Framework for Exploring Locomotor Control Following Recovery from Thoracic Spinal Cord Injury

Andrew B. Lockhart*1, Huangrui Chu1, Shravan Tata Ramalingasetty1, Natalia A. Shevtsova1, David S.K. Magnuson2, Simon M. Danner1


1Department of Neurobiology and Anatomy, College of Medicine, Drexel University, Philadelphia, PA, USA

2Department of Neurological Surgery, University of Louisville, Louisville, KY, USA

*Email: abl73@drexel.edu

Introduction


Thoracic spinal cord contusion disrupts communication between the cervical and lumbar circuitry. Despite this, rats recover locomotor function, though at a reduced speed and with altered speed-dependent gait expression. Our previous computational model of spinal locomotor circuitry [1,2] reproduced the observed gait changes by linking them to impaired long propriospinal connectivity and lumbar circuitry reorganization, likely involving enhanced reliance on afferent feedback. To investigate the role of sensory feedback in locomotion and explore post-contusion reorganization, a neurorobotic model of quadrupedal locomotion was used in which the spinal circuitry was embedded in a body that interacted with the environment (Fig. 1).

Methods
We have expanded our previous neural network model of spinal locomotor circuitry to drive a simulated Unitree Go1 quadrupedal robot. The model includes four rhythm generators, one per limb, interconnected by commissural and long propriospinal neurons. Activity from each rhythm generator controls a pattern formation network that coordinates muscle activation in each limb. Hill-type muscles convert this activation into torque to actuate the motors and allow for calculation of proprioceptive feedback, which interacts with all levels of the spinal circuitry. Connection weights of proprioceptive, vestibular, and pattern-formation neurons were optimized using the covariance matrix adaptation evolution strategy (CMA-ES) to produce adaptive locomotion.
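A hedged sketch of a CMA-ES optimization loop using the `cma` Python package; the cost function here is a placeholder for running the neurorobotic simulation and scoring locomotion, and the dimensionality and step size are invented for illustration.

```python
# Hedged sketch of the CMA-ES loop used to tune connection weights.
import cma          # pip install cma
import numpy as np

def locomotion_cost(weights):
    # Placeholder: in practice, simulate the robot with these synaptic weights
    # and return e.g. deviation from target speed plus a fall penalty.
    return float(np.sum((weights - 0.5) ** 2))

x0 = np.zeros(20)                         # initial weight vector (illustrative)
es = cma.CMAEvolutionStrategy(x0, 0.3)    # initial mean and step size
while not es.stop():
    candidates = es.ask()                 # sample a population of weight vectors
    es.tell(candidates, [locomotion_cost(w) for w in candidates])
best_weights = es.result.xbest            # best-found connection weights
```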
Results
The optimized model produces stable locomotion across a range of target speeds. Integration of muscle states and environmental information through proprioceptive, cutaneous, and vestibular neurons allows the model to traverse rough terrain consisting of variable slopes and ground friction. Preliminary simulation of thoracic contusion by reducing connection weights of inter-enlargement long propriospinal neurons results in altered gaits.

Discussion
The model provides a testbed for linking neuronal manipulations to changes in locomotion and behavior. By comparing locomotor gaits across models—those undergoing a second round of optimization post-contusion, those that have not, and experimental results from rats—we can identify and analyze critical neuronal connections involved in recovery. Using this approach, we will further investigate how circuit reorganization can contribute to locomotor recovery after thoracic spinal cord contusion.





Figure 1. Fig 1. A) The central locomotor circuit model for four limbs includes long propriospinal neurons connecting cervical and lumbar circuits adapted from Frigon 2017 [3]. B) Two-level rhythm and pattern formation circuitry for one limb. Motoneuron (MN) activity activates muscles (C) which actuate torque-controlled motors (D). Kinematics and kinetics are transformed into afferent feedback signals.
Acknowledgements
This work was supported by the National Institutes of Health (NIH) grants R01NS112304, R01NS115900, and T32NS121768.
References
[1] Danner, S. M., et al. (2017). Computational modeling of spinal circuits controlling limb coordination and gaits in quadrupeds. eLife, 6, e31050. https://doi.org/10.7554/eLife.31050
[2] Zhang, H., et al. (2022). The role of V3 neurons in speed-dependent interlimb coordination during locomotion in mice. eLife, 11, e73424. https://doi.org/10.7554/eLife.73424
[3] Frigon, A. (2017). The neural control of interlimb coordination during mammalian locomotion. Journal of Neurophysiology, 117(6), 2224–2241. https://doi.org/10.1152/jn.00978.2016
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P176: Plastic Arbor: a modern simulation framework for synaptic plasticity – from single synapses to networks of morphological neurons
Monday July 7, 2025 16:20 - 18:20 CEST
P176 Plastic Arbor: a modern simulation framework for synaptic plasticity – from single synapses to networks of morphological neurons

Jannik Luboeinski*1,2,3, Sebastian Schmitt1,2, Shirin Shafiee1,2, Thorsten Hater4, Fabian Bösch5, Christian Tetzlaff1,2,3

1III. Institute of Physics – Biophysics, University of Göttingen, Germany
2Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Germany
3Campus Institute Data Science (CIDAS), Göttingen, Germany
4Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany
5Swiss National Supercomputing Centre, ETH Zürich, Switzerland

*Email: jannik.luboeinski@med.uni-goettingen.de

Introduction
Arbor is a software library designed for the efficient simulation of large-scale networks of biological neurons with detailed morphological structures. It combines customizable neuronal and synaptic mechanisms with high-performance computing, enabling the use of diverse backend architectures such as multi-core CPU and GPU systems [1] (see also Fig. 1a).
Synaptic plasticity processes play a vital role in cognitive functions, including learning and memory [2,3]. Recent studies have shown that intracellular molecular processes in dendrites significantly influence single-neuron dynamics [4,5]. However, for understanding how the complex interplay between dendrites and synaptic processes influences network dynamics, computational modeling is required.
Methods
To enable the modeling of large-scale networks of morphologically detailed neurons with diverse plasticity processes, we have extended the Arbor library to yield the Plastic Arbor framework, supporting simulations of a large variety of spike-driven plasticity paradigms (cf. Fig. 1b). To showcase the features of the new framework, we present examples of computational models, beginning with single-synapse dynamics [6,7], progressing to multi-synapse rules [8,9], and finally scaling up to large recurrent networks [10].
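As a flavor of the spike-driven plasticity paradigms the framework targets, here is a generic pair-based STDP rule in plain Python; this is an illustrative textbook rule with invented constants, not Arbor's or Plastic Arbor's API.

```python
# Illustrative pair-based STDP rule; plain numpy, not the Arbor interface.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # exponential time constants (ms)

def stdp_weight_change(pre_spikes, post_spikes):
    """Sum pairwise STDP contributions for two spike trains (times in ms)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:     # pre before post -> potentiation
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:   # post before pre -> depression
                dw -= A_minus * np.exp(dt / tau_minus)
    return dw

print(stdp_weight_change([10.0, 50.0], [15.0, 45.0]))
```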
Results
While cross-validating our implementations by comparison with other simulators, we show that Arbor allows simulating plastic networks of multi-compartment neurons at nearly no additional cost in runtime compared to point-neuron simulations. Using the new framework, we have already been able to investigate the impact of dendritic structures on network dynamics across a timescale of several hours, showing a relation between the length of dendritic trees and the ability of the network to efficiently store information.
Discussion
Due to its modern computing architecture and inherent support of multi-compartment neurons, the Arbor simulator constitutes an important tool for the computational modeling of neuronal networks. By our extension of Arbor, we provide a valuable tool that will support future studies on the impact of synaptic plasticity, especially, in conjunction with neuronal morphology, in large networks. In our recent work, we also demonstrate new insights into the functional impact of morphological neuronal structure at the network level. In the future, the Plastic Arbor framework may power a great variety of studies considering synaptic mechanisms and their interactions with neuronal dynamics and morphologies, from single synapses to large networks.
Figure 1. Overview of the extended Arbor framework with support for synaptic plasticity simulations.
Acknowledgements
This work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) through grants SFB1286 (C01, Z01) and TE 1172/7-1, as well as by the European Commission H2020 grants no. 899265 (ADOPD) and 945539 (HBP SGA3).
References
1. https://doi.org/10.1109/EMPDP.2019.8671560
2. https://doi.org/10.1146/annurev.neuro.23.1.649
3. https://doi.org/10.1038/s41539-019-0048-y
4. https://doi.org/10.1016/j.conb.2008.08.013
5. https://doi.org/10.7554/eLife.46966
6. https://doi.org/10.1038/78829
7. https://doi.org/10.1073/pnas.1109359109
8. https://doi.org/10.1523/JNEUROSCI.0027-17.2017
9. https://doi.org/10.1371/journal.pone.0161679
10. https://doi.org/10.1038/s42003-021-01778-y
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P177: Cobrawap: from a specific use-case to a more general scientifically-technologically co-designed tool for neuroscience
Monday July 7, 2025 16:20 - 18:20 CEST
P177 Cobrawap: from a specific use-case to a more general scientifically-technologically co-designed tool for neuroscience

Cosimo Lupo1,*, Robin Gutzen2, Federico Marmoreo1, Alessandra Cardinale1,3, Michael Denker4, Pier Stanislao Paolucci1, Giulia De Bonis1


1Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
2Dept. of Psychology and Center for Data Science, New York University, New York, USA
3Università Campus Bio-Medico di Roma, Rome, Italy
4Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany


*Email: cosimo.lupo89@gmail.com
Introduction

Cobrawap (Collaborative Brain Wave Analysis Pipeline) [1-3] is an open-source, modular and customizable data analysis tool designed and implemented by INFN (Italy) and Jülich Research Centre (Germany) in the context of the Human Brain Project, further enhanced within the EBRAINS and EBRAINS-Italy initiatives. Its foundational goal was to enable standardized quantitative descriptions of cortical wave dynamics observed in heterogeneous data sources, both experimental and simulated, also allowing for validation and calibration of brain simulation models (Fig. 1). The current directions of development aim at enhancing generalizability beyond the set of originally considered use cases.

Methods
Responding to the neuroscience community's increasing demand for reusability and reproducibility, Cobrawap provides a framework for collecting generalized implementations of established methods and algorithms. Inspired by the FAIR principles and leveraging modern software solutions, Cobrawap is structured as a collection of modular Python 3 building blocks that can be flexibly arranged into sequential stages implementing data processing steps and analysis methods, directed by workflow managers (Snakemake or CWL). The collaborative approach behind the software allows users to seamlessly enrich its scope by co-designing and implementing new processing or visualization blocks with the support of the Cobrawap core team.
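A toy sketch of the "modular blocks arranged in sequential stages" idea in plain Python; the block names are invented for illustration, and in Cobrawap itself blocks are separate scripts orchestrated by Snakemake or CWL.

```python
# Toy illustration of composable processing blocks chained into a stage.
import numpy as np

def detrend(signal):
    # Block 1: remove the mean of the signal.
    return signal - signal.mean()

def band_power(signal):
    # Block 2: reduce the signal to a scalar power estimate.
    return float(np.mean(signal ** 2))

def run_stage(data, blocks):
    # A stage is just an ordered composition of swappable blocks.
    for block in blocks:
        data = block(data)
    return data

lfp = np.random.default_rng(1).normal(size=1000)   # stand-in recording
result = run_stage(lfp, [detrend, band_power])
```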

Results
Cobrawap has been successfully applied to murine data and data-driven simulations, for multi-scale quantitative comparisons of heterogeneous experimental datasets [4] and for validation and calibration of simulation models [5], in the specific use case of cortical slow-wave data analysis in low-consciousness brain states. Later applications to non-human primate experimental data, and to increasing levels of consciousness, have proven the robustness and versatility of the approach, paving the way for the crucial extension toward human data. A fundamental step is the comparison with simulations, e.g. via TheVirtualBrain [6,7], which allows both benchmarking the new algorithms and validating and calibrating such models [8,9].

Discussion
Cobrawap has proven effective in the analysis of both synthetic and experimental data of different origins, providing a FAIR-compliant collaborative framework for scientific and technological co-design. Beyond the appealing extension to experimental human data, in both physiological and pathological conditions, further lines of enhancement involve the analysis of outputs from a variety of theoretical models, including artificial neural networks. This makes Cobrawap a candidate for addressing the explainability of AI solutions in bio-inspired systems that incorporate the emulation of brain states as a key element for efficient incremental learning and cognition [10,11].



Figure 1. Cobrawap offers standardized quantitative descriptions of brain wave dynamics observed in heterogeneous data sources, both experimental and simulated (top right panel), via a set of sequential stages featuring modular and flexible sets of processing and visualization blocks (bottom panel, for two different recording techniques on anesthetized mice), each easily customizable by the user.
Acknowledgements
Research co-funded by: European Union’s Horizon Europe Programme under Specific Grant Agreement No. 101147319 (EBRAINS 2.0); European Commission NextGeneration EU through Italian Grant MUR-CUP-B51E22000150006 EBRAINS-Italy PNRR.
References
[1] github.com/NeuralEnsemble/cobrawap
[2] cobrawap.readthedocs.io
[3] doi.org/10.5281/zenodo.10198748
[4] Gutzen, et al. (2024). doi.org/10.1016/j.crmeth.2023.100681
[5] Capone, De Luca, et al. (2023). doi.org/10.1038/s42003-023-04580-0
[6] Sanz Leon, et al. (2013). doi.org/10.3389/fninf.2013.00010
[7] www.thevirtualbrain.org
[8] Gaglioti, et al. (2024). doi.org/10.3390/app14020890
[9] Cardinale, Gaglioti, et al. (2025). In preparation
[10] Capone, et al. (2019). doi.org/10.1038/s41598-019-45525-0
[11] Golosio, De Luca, et al. (2021). doi.org/10.1371/journal.pcbi.1009045
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P178: Coupling brain network simulation with pharmacokinetics for Parkinson's disease: towards patient-usable digital twins
Monday July 7, 2025 16:20 - 18:20 CEST
P178 Coupling brain network simulation with pharmacokinetics for Parkinson's disease: towards patient-usable digital twins

William Lytton*1,2,7, Donald Doherty1,7, Adam Newton1,7, June Jung1, Samuel Neymotin1,3, Salvador Dura Bernal1, Thomas Wichmann5,7, Adriana Galvan5,7, Hong-Yuan Chu4,7, Yoland Smith5,7, Husan Abdurakhimov6, Jona Ekström6, Henrik Podéus Derelöv1,6, Elin Nyman6, Gunnar Cedersund6
1 Downstate Health Science University, Brooklyn NY USA
2 Kings County Hospital, Brooklyn NY USA
3 Nathan Kline Institute, Orangeburg NY USA
4 Georgetown University, Washington DC USA
5 Emory University, Atlanta GA USA
6 Linköping University, Linköping, Sweden
7 Aligning Science Across Parkinson's (ASAP) Collaborative Research Network, Chevy Chase, United States

*Email: billl@neurosim.downstate.edu

Introduction
Parkinson’s disease (PD) is characterized by complex motor deficits arising at multiple sites. Starting with dopamine (DA) depletion in the substantia nigra, brain dysfunction subsequently appears in primary motor cortex (M1), basal ganglia (BG), and other areas. At first, dysfunction is a direct consequence of reduced DA; then, through the dynamics of compensation and decompensation, these other areas themselves become pathophysiological. We used simulation to explore the focal M1 pathophysiology seen in mouse models. We are now integrating pharmacokinetic (PK) models to consider how therapy (Rx) can normalize dynamics.
Methods
We adapted our NEURON/NetPyNE M1 neuronal network (NN) model to simulate PD by reducing pyramidal-tract layer 5 neuron (PT5B) excitability. We coupled a prior ODE PK model to evaluate the DA and NE levels produced by L-DOPA and L-DOPS Rx, respectively, modulating network parameters based on local DA and NE levels. Parameter optimizations explored how PK outputs shape network activity, examining 1. dose timing; 2. gut absorption (bioavailability, gastric delays); 3. multi-compartment distribution (blood, fat, muscle, brain); 4. blood-brain barrier (BBB) crossing; 5. precursor conversion to DA and NE; and 6. drug metabolism and clearance.
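A hedged sketch of the kind of compartmental PK model described (gut absorption, distribution and clearance, BBB crossing, precursor conversion); the three-compartment reduction and all rate constants are invented for illustration, not the study's fitted ODE model.

```python
# Hedged three-compartment PK sketch: gut -> blood -> brain -> DA/NE.
import numpy as np
from scipy.integrate import solve_ivp

k_abs, k_bbb, k_clear, k_conv = 0.03, 0.01, 0.02, 0.05  # 1/min, illustrative

def pk_rhs(t, y):
    gut, blood, brain, da = y
    return [-k_abs * gut,                               # gastric absorption
            k_abs * gut - (k_bbb + k_clear) * blood,    # distribution/clearance
            k_bbb * blood - k_conv * brain,             # BBB crossing
            k_conv * brain]                             # precursor -> DA/NE

sol = solve_ivp(pk_rhs, (0, 480), [100.0, 0.0, 0.0, 0.0], dense_output=True)
brain_da = sol.sol(np.linspace(0, 480, 100))[3]         # drives NN modulation
```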
Results
We focused on NE since locus coeruleus (LC) degeneration directly affects M1 cells, while DA loss directly affects BG neurons. Our untreated network simulations showed elevated PT5B activity despite reduced PT5B excitability. This paradoxical firing rate increase was associated with enhanced LFP beta-band oscillatory power with beta bursts. NE Rx shifted network activity to 20-35 Hz high-beta activity with reduction in excessive beta power, partly normalizing activity.
Discussion
Our hybrid PK-NN model demonstrated how potential clinical Rx for PD could correct the pathophysiological changes that produce motor dysfunction, thus beginning to link treatment with well-being. Partial normalization of beta oscillations and firing rates with L-DOPS treatment may add to treatment outcomes. We can isolate clinically modulatable effects, including dose timing, gut pretreatment, precursor transformation, and clearance, to shape target-neuron effects. We hope thereby to improve the effect/side-effect balance to reduce dyskinesias, wearing-off, and freezing. Future model iterations will extend to digital-twin applications to provide tools that assist patients in personally optimizing their own therapy.






Acknowledgements
This research was funded in part by Aligning Science Across Parkinson’s [ASAP-020572] through the Michael J. Fox Foundation for Parkinson’s Research (MJFF). For the purpose of open access, the author has applied a CC BY public copyright license to all Author Accepted Manuscripts arising from this submission.
Supported in part by STRATIF-AI funded by Horizon Europe agreement 101080875.
References
none
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P179: Introducing the Phase-Relationship Index (PRI): Transmission Delay Shapes In- and Anti-Phase Functional Connectivity in EEG Analysis and Simulation
Monday July 7, 2025 16:20 - 18:20 CEST
P179 Introducing the Phase-Relationship Index (PRI): Transmission Delay Shapes In- and Anti-Phase Functional Connectivity in EEG Analysis and Simulation

William W Lytton*1, 2,3, Andrei Dragomir4, Ahmet Omurtag5


1Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, New York
2Department of Neurology, SUNY Downstate Health Sciences University, Brooklyn, New York
3Department of Neurology, Kings County Hospital Center, Brooklyn, New York

4 Singapore Institute for Neurotechnology, National University of Singapore, Singapore

5Engineering Department, Nottingham Trent University, Nottingham, United Kingdom

*Email: billl@neurosim.downstate.edu

Introduction
Neural oscillations enable information processing via cortical network synchronization, yet EEG studies rarely examine precise phase relationships. Introducing the Phase-Relationship Index (PRI), we demonstrate that in-phase clustering dominates at cortical distances <80 mm, shifting to anti-phase beyond this. Simulations of delay-coupled excitatory leaky integrate-and-fire (LIF) neurons reveal conduction delays as the mechanism underlying this distance-dependent EEG phase-relationship pattern.


Methods
Analyzing 19-channel resting EEG from 31 healthy subjects [1], we computed inter-site phase clustering (ISPC/PLV) across 1–32 Hz for electrode pairs. Phase differences determined the ISPC, with the PRI characterizing the phase relationship (in-phase vs. anti-phase). Cortical distances derived from MNI coordinates [2] were used for distance-dependent analyses. Simulations modeled two delay-coupled excitatory LIF neuron populations (N=200 each) with recurrent (gain G) and inter-population (gain g) connections and conduction delays (d and tau). Firing rates (analogous to EEG) underwent spectral analysis (frequency-band power), synchrony assessment (order parameter), and ISPC/PRI comparisons between simulated and empirical data.
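A sketch of the ISPC computation from two signals' instantaneous phases; since the abstract does not give the PRI formula, only the mean phase difference is shown here, as the quantity from which an in-/anti-phase index could plausibly be derived.

```python
# ISPC/PLV sketch from Hilbert phases of two band-passed signals.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 1024)
x = np.sin(2 * np.pi * 16 * t) + 0.1 * rng.normal(size=t.size)
y = -x + 0.1 * rng.normal(size=t.size)        # roughly anti-phase partner

dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
z = np.exp(1j * dphi).mean()                  # mean resultant phase vector
ispc = abs(z)                                 # clustering strength (PLV)
mean_dphi = np.angle(z)                       # ~0 in-phase, ~pi anti-phase
print(f"ISPC={ispc:.2f}, mean phase diff={mean_dphi:.2f} rad")
```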


Results
Analysis revealed PRI values predominantly near 0 or 1. For 16 Hz connections, PRI increased with cortical distance (Fig. 1A), transitioning sharply from in-phase (PRI≈0) to anti-phase (PRI≈1, mainly asymmetric long-range) at 85-120 mm (Fig. 1B-F). Simulations of LIF neuronal populations identified four dynamic regimes (Fig. 1H-K). Disconnected populations (g=0) showed irregular firing (Fig. 1H), transitioning to synchronous but un-clustered activity with increased intra-population connectivity G (Fig. 1I,L). Introducing inter-population connections (g>0) induced phase clustering (rising ISPC, Fig. 1M), switching from in-phase (small tau, Fig. 1J) to anti-phase at tau≈31 ms (Fig. 1K,N), accompanied by reduced synchrony (Fig. 1N) during the transition to anti-phase.


Discussion
Our findings link the distance dependence of clustering (Fig. 1A) to delay-coupled neuronal population dynamics (Fig. 1N). Sparse inter-population connections, which were sufficient to induce clustering (Fig. 1M), mirror sparse long-distance neuroanatomical connectivity [3], and the derived conduction speeds (5.44-8 m/s) match myelinated axons [4]. We show, challenging prior assumptions, that zero-lag synchrony is genuine, not artefactual. PRI analysis also reveals anti-phase dominance (Fig. 1A; [5]), distinct topographies (Fig. 1E-F), and task-modulated dynamics, underscoring its biomarker potential.







Figure 1. Figure 1. Phase clustering in EEG (A-F) and simulated neurons (G-N). (A) ISPC, PRI, cortical distances. (B) z values on Argand plane for 3 electrode pairs. (C) angle(z) values. (D) PRI values. (E) Top 15 in-phase connections. (F) Top 15 anti-phase connections. (G) Schematic of populations. (H-K) Firing rate time series. (L) ISPC, FBP, Order Parameter vs G. (M) ISPC, FBP vs g. (N) ISPC, PRI vs tau.
Acknowledgements
None.
References
[1] https://doi.org/10.1038/s41598-020-69553-3
[2] https://doi.org/10.1002/brb3.2476
[3] https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001575
[4] https://doi.org/10.1371/journal.pcbi.1007004
[5] https://doi.org/10.1002/jnr.24748
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P180: DendroTweaks: An interactive approach for unraveling dendritic dynamics
Monday July 7, 2025 16:20 - 18:20 CEST
P180 DendroTweaks: An interactive approach for unraveling dendritic dynamics

Roman Makarov*1,2, Spyridon Chavlis1, Panayiota Poirazi1
1Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Greece
2Department of Biology, University of Crete, Heraklion, Greece
*Email: roman_makarov@imbb.forth.gr
Introduction

Neurons rely on the interplay between dendritic morphology and ion channels to transform synaptic inputs into somatic spikes. Detailed biophysical models with active dendrites have been instrumental in exploring this interaction but are challenging to understand and validate due to their numerous free parameters. We introduce DendroTweaks, a comprehensive toolbox for creating and validating single-cell neuronal models with active dendrites, bridging computational implementation with conceptual understanding.

Methods
DendroTweaks is implemented in Python and provides a high-level interface to NEURON [1] with extended functionality for single-cell modeling and data processing. The core components include: (1) algorithms for representing and refining neuronal morphologies; (2) a NMODL-to-Python converter, along with a framework for standardizing ion channel models through parameter fitting based on equations from [2]; (3) an extended implementation of the impedance-based morphology reduction approach [3] enabling continuous reduction levels; and (4) automated validation protocols for testing somatic and dendritic activity.
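A minimal plain-NEURON sketch of the kind of workflow DendroTweaks wraps: a toy two-section morphology with an active soma and recordings at several dendritic locations. This uses standard NEURON Python calls, not the DendroTweaks API.

```python
# Toy NEURON model: active soma, passive dendrite, multi-site recording.
from neuron import h
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
dend = h.Section(name="dend")
dend.connect(soma(1))
dend.L, dend.diam, dend.nseg = 200, 2, 11
soma.insert("hh")                      # active soma conductances
dend.insert("pas")                     # passive dendrite

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5, 50, 0.2   # ms, ms, nA

t = h.Vector().record(h._ref_t)
recs = {x: h.Vector().record(dend(x)._ref_v) for x in (0.1, 0.5, 0.9)}

h.finitialize(-65)
h.continuerun(100)                     # ms
print({x: v.max() for x, v in recs.items()})   # attenuation along the dendrite
```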
Results
The toolbox provides researchers with capabilities to: (1) clean and manipulate SWC morphology files; (2) convert MOD files to Python and standardize kinetics of voltage-gated ion channel models; (3) interactively distribute membrane parameters and synapses across neuronal compartments; (4) reduce detailed morphological models to simplified versions while preserving key electrophysiological properties; and (5) record activity from multiple somatic and dendritic locations to validate neuronal responses to external stimuli. The GUI provides interactive widgets and plots for parameter adjustment with real-time visual feedback (Fig. 1).
Discussion
DendroTweaks addresses critical challenges in computational neuroscience through data cleaning and model standardization. Its interactive interface enables intuitive exploration of models, illuminating how morpho-electric properties shape dendritic computations and neuronal output. Future work will focus on multi-platform integration with other simulators to further enhance the standardization and accessibility of detailed biophysical models.




Figure 1. Figure 1. A screenshot of the web-based GUI accessed through the Chrome browser. The interface consists of a main workspace and side menus with widgets. The workspace displays interactive plots showing neural morphology, ion channel distributions and kinetics, and simulated activity.
Acknowledgements
Funded by the Horizon 2020 programme of the European Union under grant agreement No 860949. The research project was co-funded by the Stavros Niarchos Foundation (SNF) and the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the 5th Call of “Science and Society” Action Always strive for excellence – Theodoros Papazoglou” (Project Number: DENDROLEAP 28056).
References
1. Hines, M., Davison, A. P., & Muller, E. (2009). NEURON and Python. Frontiers in neuroinformatics, 3, 391. https://doi.org/10.3389/neuro.11.001.2009
2. Sterratt, D., Graham, B., Gillies, A., Einevoll, G., & Willshaw, D. (2023). Principles of computational modelling in neuroscience. Cambridge university press.
3. Amsalem, O., Eyal, G., Rogozinski, N., Gevaert, M., Kumbhar, P., Schürmann, F., & Segev, I. (2020). An efficient analytical reduction of detailed nonlinear neuron models. Nature communications, 11(1), 288. https://doi.org/10.1038/s41467-019-13932-6
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P181: The processing of auditory rhythms in the thalamocortical network throughout the development
Monday July 7, 2025 16:20 - 18:20 CEST
P181 The processing of auditory rhythms in the thalamocortical network throughout the development

Sepideh Sadat Malekjafarian*1, Maryam Ghorbani1,2, Sahar Moghimi3,Fabrice Wallois3,4


1 Electrical Engineering Department, Ferdowsi University of Mashhad, Iran
2Rayan Center for Neuroscience and Behavior, Ferdowsi University of Mashhad, Iran
3Inserm UMR1105, Groupe de Recherches sur l’Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens Cedex 80036, France
4Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens Cedex 80054, France


*Email:s.malekjafarian92@gmail.com


Introduction
In early neural development, the thalamocortical network exhibits unique characteristics, especially in preterm infants, whose brains are not yet fully developed. These include specific patterns of neural oscillations, which are crucial for the development of cortical circuitry and the formation of neural networks. Evidence suggests that the ability to perceive rhythm and synchronize with periodic patterns plays a critical role in neurodevelopment, particularly in language, music, and social interaction. Here, we first developed a computational model of the thalamocortical network capable of generating the brain rhythms associated with preterm infants. Using this model, we then investigated the early development of the neural response to external rhythm.

Methods
The model consists of (i) two recurrent excitatory-inhibitory neuron groups with adaptation, representing the cortex-subplate network, and (ii) one excitatory-inhibitory group with bursting, representing the thalamus. All parameters remain constant except the I-E synaptic strength in the cortex-subplate network and the thalamocortical connections, which are varied to generate the different brain rhythms. We depict neurodevelopmental trajectories in our model using EEG recordings from 46 neonates (27-35 wGA) during rest and during stimulation with a specific auditory stimulus [1]. The same stimulus was applied to the model to validate auditory processing. A synchronization index assesses the network's alignment with stimulus oscillations. Developmental trajectories are compared between the model and premature EEG recordings.
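A hedged Wilson-Cowan-style sketch of a single excitatory-inhibitory group with slow adaptation on the excitatory population, the basic building block described above; all parameter values are illustrative, not the fitted developmental values.

```python
# Illustrative E-I rate unit with slow excitatory adaptation.
import numpy as np

def f(x):                                # sigmoidal rate nonlinearity
    return 1.0 / (1.0 + np.exp(-x))

w_ee, w_ei, w_ie, w_ii, b = 12.0, 10.0, 9.0, 3.0, 0.5   # illustrative weights
tau_e, tau_i, tau_a = 10.0, 8.0, 500.0                  # time constants (ms)

dt, T = 0.1, 20000
E = I = a = 0.0
trace = np.zeros(T)
for k in range(T):
    E += dt / tau_e * (-E + f(w_ee * E - w_ei * I - a + b))
    I += dt / tau_i * (-I + f(w_ie * E - w_ii * I))
    a += dt / tau_a * (-a + 2.0 * E)     # slow adaptation current on E
    trace[k] = E                         # inspect for burst-like alternation
```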


Results
Based on the free parameters, the model was tuned to achieve the best match with the ages of the EEG recordings [1]. Additionally, we were able to extract two key features of premature signals, slope and burst-interburst intervals, at different ages from the model, consistent with experimental results. Exploiting the developmental regime that best fitted the evolution of spontaneous neural activity, we then show how the nonlinear interaction of auditory stimuli with the model's endogenous brain rhythms can result in different responses at different ages. Our computational model can explain the mechanism underlying the processing of auditory rhythms, as neural synchronization to beat and meter frequencies strengthens with age.
Discussion
Our model, with its free parameters, can explain the age-related changes in neural response and the increasing ability of infants to process rhythms with increasing gestational age at birth, as previously observed in electrophysiological data. By varying E-I synaptic strengths and thalamocortical connections, the model can generate preterm spontaneous brain oscillations and effectively describe the neural response to auditory stimuli at different frequencies. This enables the model to explain the observation that neural synchronization to faster rhythms is present at all ages, while neural synchronization to slower, metric rhythms emerges only at higher ages.



Acknowledgements
No specific acknowledgments are applicable for this study
References
[1] Saadatmehr, B., Edalati, M., Wallois, F., Ghostine, G., Kongolo, G., Flaten, E., Tillmann, B., Trainor, L., & Moghimi, S. (2025). Auditory rhythm encoding during the last trimester of human gestation: from tracking the basic beat to tracking hierarchical nested temporal structures. The Journal of Neuroscience, 45(4), 1-10. https://doi.org/10.1523/JNEUROSCI.0398-24.2024


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P182: Computational aspects of microarousals during awakening from anesthesia
Monday July 7, 2025 16:20 - 18:20 CEST
P182 Computational aspects of microarousals during awakening from anesthesia

Arnau Manasanch*1,2, Leonardo Dalla Porta1, Melody Torao-Angosto1, Maria V. Sanchez-Vives1,3
1Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
2Facultat de Medicina I Ciències de la Salut, Universitat de Barcelona, 08036 Barcelona, Spain
3ICREA, Passeig Lluís Companys 23, 08010 Barcelona, Spain

*Email:manasanch@clinic.cat

Introduction
The study of brain states is fundamental to understanding consciousness and its neural mechanisms [1,2]. Both sleep and anesthesia provide valuable models for investigating and characterizing brain states and their transitions [3,4,5]. While extensive research has characterized microarousals (MAs), brief wake-like periods of brain activity, during sleep [6], they remain almost unexplored during anesthesia. Emerging evidence suggests that these transient events may be modulated by an infraslow rhythm [7,8,9], influencing arousal dynamics during emergence from anesthesia. Here, we investigate the dynamics of MAs during anesthetic emergence using local field potential (LFP) recordings from anesthetized rats, shedding light on the infraslow modulation of transient arousals.



Methods

To obtain long-term LFP recordings in freely moving Lister-Hooded rats (6–10 months old), electrodes were chronically implanted 600 µm deep in the cortex. EMG was recorded from the neck muscle. After post-surgical care, animals underwent five days of handling before recordings. LFPs were recorded during anesthesia induction and emergence. The protocol was the same as used in [10]. Briefly, each subject received a single intraperitoneal injection of anesthesia consisting of ketamine (20-40 mg/kg) and medetomidine (0.15-0.3 mg/kg). Cortical activity was monitored from wakefulness to full emergence from anesthesia. Experiments followed Spanish and EU regulations and were approved by the Ethics Committee of the Universitat de Barcelona (287/17 P3).
Results
After remaining in the slow oscillatory state, characterized by alternating Up (high firing) and Down (silent) periods, for 2–3 hours, the brain dynamics abruptly transitioned to a state dominated by fast oscillations (~6 Hz) and wake-like microarousals. As anesthesia wore off, MAs progressively increased in duration. This transition appeared to be modulated by an infraslow oscillation (~0.14 Hz) during a steady-state period, which gradually slowed (reaching ~0.04 Hz) in the progression toward wakefulness. Analysis of MAs across subjects revealed a consistent increase in microarousal duration over time, with power-law distributions of MA durations. These distributions show an average exponent of 2.33±0.36, suggesting that microarousals exhibit characteristic scaling behavior across different subjects.
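A sketch of the standard continuous maximum-likelihood estimator for a power-law exponent (Clauset-style), applied here to synthetic durations standing in for the recorded MAs.

```python
# MLE for a continuous power-law exponent, validated on synthetic samples.
import numpy as np

rng = np.random.default_rng(3)
x_min, alpha_true = 1.0, 2.3
u = rng.random(10000)
durations = x_min * (1 - u) ** (-1 / (alpha_true - 1))   # power-law samples

x = durations[durations >= x_min]
alpha_hat = 1 + len(x) / np.sum(np.log(x / x_min))       # Clauset et al. MLE
print(f"estimated exponent: {alpha_hat:.2f}")            # ~2.3
```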


Discussion
Our findings suggest that the increasing duration of microarousals (MAs) as anesthesia wears off reflects a gradual transition toward wakefulness, with dynamics that share properties with awakening from sleep. The power-law behavior of MA durations indicates a scale-invariant process, a hallmark of self-organized criticality. This model provides a new understanding of the microarchitecture of anesthesia, offering a window into controlled microarousals and the network dynamics from unconsciousness to consciousness.





Acknowledgements
The EU Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3); INFRASLOW PID2023-152918OB-I00 funded by MICIU / AEI / 10.13039/501100011033/FEDER. Co-funded by Departament de Recerca i Universitats de la Generalitat de Catalunya (AGAUR 2021-SGR-01165). IDIBAPS is funded by the CERCA program (Generalitat de Catalunya).
References
[1]https://doi.org/10.1038/nrn3084
[2]https://doi.org/10.1016/j.tins.2023.04.001
[3]https://doi.org/10.1126/science.8235588

[4]https://doi.org/10.1213/ANE.0000000000005361
[5]https://doi.org/10.1016/j.conb.2017.04.011
[6]https://doi.org/10.1016/s0987-7053(99)80016-1
[7]https://doi.org/10.1016/j.celrep.2021.109270
[8]https://doi.org/10.1016/j.neuron.2024.12.009
[9]https://doi.org/10.1038/s41593-024-01822-0
[10]https://doi.org/10.3389/fnsys.2021.609645
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P183: Biophysically detailed neuron models with genetically defined ion channels
Monday July 7, 2025 16:20 - 18:20 CEST
P183 Biophysically detailed neuron models with genetically defined ion channels

Darshan Mandge*1,2, Rajnish Ranjan2, Emmanuelle Logette2, Tanguy Damart2, Aurélien Tristan Jaquier2, Lida Kanari2, Daniel Keller2, Yann Roussel2, Stijn van Dorp2, Werner Van Geit1, and Henry Markram2
1 Open Brain Institute, 1005 Lausanne, Switzerland
2 Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
*Email: darshan.mandge@openbraininstitute.org
Introduction

Cortical neurons can be classified into different electrical firing types (e-types). A common approach to modelling these e-types involves the creation of detailed electrical models (e-models) using generic ion channel currents, such as transient and persistent sodium and potassium channels, and high- and low-voltage-activated calcium channels [1]. While this approach accurately captures a neuron's electrical behaviour, it does not establish a link between specific ion channels and observed electrophysiological properties.
Methods
We have now made 47 homomeric ion channel models corresponding to various potassium [2], sodium, calcium, and hyperpolarization-activated cyclic nucleotide-gated (HCN) ion channels. These genetic ion channel models were based on independent experimental data from the heterologous expression of the corresponding genes. The genetic channels, along with some generic ion channels, were used in this study to construct cortical e-type models from detailed morphological reconstructions and electrophysiological data collected in the rat somatosensory cortex. We built a Python-based pipeline called BluePyEModel [3] to build such e-models.
Results
The optimized e-models reproduce the firing properties observed in in vitro recordings. Electrical features of the optimized e-models were found to be within 3–5 standard deviations of the corresponding mean experimental recordings.
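A minimal sketch of the quoted validation criterion: z-scoring each model feature against the experimental mean and standard deviation; the feature names and values are invented for illustration.

```python
# Feature-wise z-score check against experimental statistics (illustrative).
import numpy as np

exp_mean = np.array([12.0, 85.0, -65.0])   # e.g. rate (Hz), AP amp (mV), Vrest (mV)
exp_std = np.array([2.0, 5.0, 3.0])
model_feats = np.array([13.1, 79.0, -66.5])

z = np.abs(model_feats - exp_mean) / exp_std
print(z, "accepted:", bool(np.all(z < 5)))  # 3-5 SD criterion from the abstract
```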
Discussion
These biophysically detailed models enable a better understanding of the electrical activity in normal and pathological states of neurons. In the future, we will make these e-models available on the Open Brain Institute (OBI) platform, https://openbraininstitute.org/. The OBI platform provides a comprehensive repository of digital brain models and standardised computational modelling services to enable users to conduct realistic brain simulations, test hypotheses, and explore the complexities at various modelling levels: subcellular, cellular, circuit, and systems.



Acknowledgements
This study was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology.


References
1. https://doi.org/10.1016/j.patter.2023.100855
2. https://doi.org/10/ghqvg8
3. https://doi.org/10.5281/zenodo.8283490


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P184: Spontaneous oscillations and neural avalanches are linked to whisker stimulation response in the rat-barrel and thalamus circuit
Monday July 7, 2025 16:20 - 18:20 CEST
P184 Spontaneous oscillations and neural avalanches are linked to whisker stimulation response in the rat-barrel and thalamus circuit

Benedetta Mariani*1, Ramon Guevara Erra1,2, Mattia Tambaro3,4, Marta Maschietto3, Alessandro Leparulo3,5, Stefano Vassanelli3, Samir Suweis1,2
1Padova Neuroscience Center, University of Padova, Padova, Italy
2Department of Physics and Astronomy, University of Padova, Padova, Italy
3Department of Biomedical Sciences, University of Padova, Padova, Italy
4Department of Physics, University of Milano Bicocca, Milan, Italy
5Department of Neuroscience, University of Padova, Padova, Italy

*Email: benedetta.mariani@unipd.it

Introduction
The cerebral cortex operates in a state of restless activity, even in the absence of external stimuli [1,2]. Collective neuronal activities, such as neural avalanches [3] and collective oscillations [4], are also found under resting conditions, and these features have been suggested to support sensory processing and brain readiness for rapid responses [2]. However, most of these results are supported by theoretical models rather than experimental observations. The rat barrel cortex and thalamus circuit, with its somatotopic organization for processing whisker movements, provides a powerful system to explore the interplay between spontaneous and evoked activities.
Methods
To characterize the resting-state circuits, we performed multi-electrode recordings with a neural probe in the rat barrel cortex and thalamus, both during spontaneous activity and after controlled whisker stimulation. We decomposed the LFP signals into their frequency contents through empirical mode decomposition, a tool suited to analyzing nonlinear and non-stationary oscillations. We also analyzed avalanche distributions by detecting events in the MUA activity and grouping them by temporal proximity. We then employed a mesoscopic firing-rate model, fitted to real data [5] and receiving the experimental thalamic firing rate as input, to understand the observed phenomenology.
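A sketch of avalanche extraction by temporal proximity: threshold-crossing events in binned MUA are grouped into one avalanche whenever consecutive events are separated by less than a maximum gap; the threshold and gap values are illustrative choices.

```python
# Avalanche detection by temporal proximity on binned MUA (illustrative).
import numpy as np

rng = np.random.default_rng(4)
mua = rng.poisson(0.5, 20000).astype(float)       # stand-in for binned MUA
events = np.flatnonzero(mua > 2.0)                # suprathreshold time bins
max_gap = 4                                       # bins; proximity criterion

avalanches, current = [], [events[0]]
for e in events[1:]:
    if e - current[-1] <= max_gap:
        current.append(e)                         # same avalanche
    else:
        avalanches.append(current)                # close avalanche, start new
        current = [e]
avalanches.append(current)
sizes = [int(mua[a].sum()) for a in avalanches]   # size distribution to fit
```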

Results
During spontaneous activity, we find 10-15 Hz oscillations in the barrel cortex concomitantly with slow 1-4 Hz oscillations, as well as power-law distributed avalanches. The slow oscillations are also present in the thalamus, while the 10-15 Hz oscillation is lacking. We find that the phase of the slow oscillation modulates the higher-frequency amplitude, as well as avalanche occurrences. We then record neural activity during controlled whisker movements to confirm that the 10-15 Hz barrel circuit is amplified after whisker stimulation. We finally show how the thalamic-driven firing-rate model can describe the entire observed phenomenology and predict the response to whisker stimulation.
Discussion

Our results show that even during spontaneous activity the rat barrel cortex displays a rich dynamical state that includes avalanches and oscillations, which are coupled through the slow oscillation. The 10-15 Hz oscillation is amplified after whisker stimulation, suggesting that spontaneous neural activity primes the rat cortex for the whisker response. These findings are confirmed by our model, which reproduces the resting-state phenomenology and the amplification of oscillations after stimulation thanks to the thalamic input to the cortex. Moreover, the oscillatory behavior of the barrel cortex may provide a flexible synchronization mechanism for the perception of stimuli.





Acknowledgements
Work by B.M and S.S. is supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) –(DN. 1553 11.10.2022). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References
[1] Raichle, M. E. (2011). https://doi.org/10.1089/brain.2011.0019
[2] Smith, S. M., et al. (2009). https://doi.org/10.1073/pnas.0905267106
[3] Beggs, J. M., & Plenz, D. (2003). https://doi.org/10.1523/JNEUROSCI.23-35-11167.2003
[4] Singer, W. (2018). https://doi.org/10.1111/ejn.13796
[5] Pinto, D., et al. (1996). https://doi.org/10.1007/BF00161134




Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P185: Fitting the data and describing neural computation with interaural time differences in the human medial superior olive
Monday July 7, 2025 16:20 - 18:20 CEST
P185 Fitting the data and describing neural computation with interaural time differences in the human medial superior olive

Petr Marsalek*1, Pavel Sanda2, Zbynek Bures3

1 Institute of Pathological Physiology, First Medical Faculty, Charles University in Prague, Czech Republic
2 Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
3 College of Polytechnics, Tolsteho 16/1556, 586 01, Jihlava, Czech Republic

*Email: petr.marsalek@lf1.cuni.cz


Introduction

In the auditory nerve and the following auditory pathway, incoming sound is encoded into spike trains - series of neural action potentials. At the third neuron of the auditory pathway, spike trains of the left and right sides converge and are processed to yield sound localization information. Two different localization encoding mechanisms are employed for low and high sound frequencies in two dedicated nuclei in the brainstem: the medial and lateral superior olivary nuclei.


Methods
The model neural circuit is based on connected phenomenological neurons. Spikes in these neurons are point events; only spike times matter. The model employs the concepts of the just noticeable difference read out by the neural circuit and of an ideal observer with access to all the information.


Results
Building upon our previous computational model of the medial superior olive (MSO), we provide analytical estimates of the parameters needed to describe auditory coding in the MSO circuit. We arrive at best estimates for neuronal signaling using the just-noticeable-difference and ideal-observer concepts. We describe spike timing jitter and its role in spike train processing. We study the dependence of sound localization precision on sound frequency. All parameters are accompanied by detailed estimates of their values and variability.
Discussion
Intervals bounding all the parameters from below and above are discussed. Most of the results are obtained by Monte Carlo simulation of the noisy and random inputs to the model neurons. Where possible, analytical calculations of probabilities and curve fitting are used.
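A toy Monte Carlo illustration of the ideal-observer logic: an interaural time difference (ITD) is read out from jittered left/right spike times, and discriminability from zero ITD approximates a just noticeable difference; the jitter and spike counts below are invented, not the paper's estimates.

```python
# Monte Carlo sketch: ITD read-out from jittered spike times (illustrative).
import numpy as np

rng = np.random.default_rng(5)
jitter, n_spikes, n_trials = 0.1, 20, 2000      # ms, spikes/trial, trials

def itd_estimate(itd):
    left = rng.normal(0.0, jitter, n_spikes)    # jittered left-ear spike times
    right = rng.normal(itd, jitter, n_spikes)   # right ear shifted by the ITD
    return right.mean() - left.mean()           # ideal-observer ITD read-out

for itd in (0.0, 0.01, 0.02, 0.05):             # ms
    est = np.array([itd_estimate(itd) for _ in range(n_trials)])
    dprime = est.mean() / est.std()             # discriminability from zero ITD
    print(f"ITD {itd * 1000:.0f} us: d' = {dprime:.2f}")   # JND near d'=1
```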



Acknowledgements
This project was in part funded by Charles University graduate students research program, acronym SVV, No. 260 519/ 2022-2024, to Petr Marsalek

References
Bures, Z. (2012). Biol. Cybern., 106(2), 111-122.
Bures, Z., & Marsalek, P. (2013). Brain Res., 1536, 16-26.
Sanda, P., & Marsalek, P. (2012). Brain Res., 1434, 257-265.
Marsalek, P., Sanda, P., & Bures, Z. (2020). arXiv preprint. https://arxiv.org/abs/2007.00524


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P186: Spatiotemporal dynamics of FitzHugh-Nagumo based reservoir computing networks for classification tasks
Monday July 7, 2025 16:20 - 18:20 CEST
P186 Spatiotemporal dynamics of FitzHugh-Nagumo based reservoir computing networks for classification tasks

Oleg V. Maslennikov*1, Dmitry S. Shchapin1, Vladimir I. Nekorkin1

1Department of Nonlinear Dynamics, Gaponov-Grekhov Institute of Applied Physics of the RAS, Nizhny Novgorod, Russia


*Email: olegmaov@gmail.com

Introduction

The paradigm of computation through dynamics is highly influential within the computational neuroscience community, as it elucidates how interacting neural elements give rise to specific sensory, motor, and cognitive functions [1-3]. This framework's findings are also pivotal for advancements in artificial intelligence and are of particular interest from a nonlinear dynamics perspective [4]. This paradigm is primarily based on recurrent neural networks (RNNs), which, unlike feed-forward networks, do not simply map inputs to outputs but instead rely on their intrinsic dynamic state.

Methods
One influential approach for designing and training RNNs is reservoir computing (RC), proposed over two decades ago [5]. RC modifies only the output weights while keeping the recurrent weights fixed. RNNs are not only models for engineering applications but also fundamental tools for understanding basic cognitive functions that emerge from brain dynamics. From a dynamical systems perspective, their performance is closely related to the underlying dynamic regime. An interesting approach relies on models traditional to the computational neuroscience community, such as spiking dynamical neurons.
Results
In this study, we investigate networks composed of coupled FitzHugh-Nagumo (FHN) neurons and examine their capabilities for classification tasks. The neurons within these networks are interconnected via fixed electrical synapses, and the output weights are trained within the reservoir computing framework. We use two-feature synthetic datasets for binary classification as inputs to our RNNs, where output units read out neural activity to indicate the class. We employ several encoding schemes, including time-to-first-spike and rate-based coding, to generate spiking patterns from static two-dimensional inputs, and analyze how neural dynamics influence classification performance. We show that the nonlinear processing capabilities of FHN neurons enable effective handling of complex signals, such as the discrimination of linearly inseparable classes.
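A hedged sketch of an FHN reservoir with diffusive (electrical) coupling, a rate-coded static input, and a ridge-regression readout on the final states, tested on an XOR-like linearly inseparable problem; all sizes, gains, and the readout choice are illustrative, not the study's settings.

```python
# FHN reservoir sketch: electrically coupled units + linear readout.
import numpy as np

rng = np.random.default_rng(6)
N, dt, steps = 100, 0.05, 400
eps, a, g = 0.08, 0.7, 0.1
W = (rng.random((N, N)) < 0.1).astype(float)       # electrical coupling graph
w_in = rng.uniform(-1, 1, (N, 2))

def reservoir_state(x2d):
    v, w = -1.2 * np.ones(N), -0.6 * np.ones(N)    # FHN resting-like start
    for _ in range(steps):
        I = w_in @ x2d                             # rate-coded static input
        coup = g * (W @ v - W.sum(1) * v)          # diffusive coupling term
        v += dt * (v - v**3 / 3 - w + I + coup)    # FHN fast variable
        w += dt * eps * (v + a - 0.8 * w)          # FHN recovery variable
    return v                                       # final membrane state

X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # linearly inseparable labels
S = np.array([reservoir_state(x) for x in X])
w_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(N), S.T @ (2 * y - 1))
acc = np.mean((S @ w_out > 0) == y.astype(bool))
print(f"training accuracy: {acc:.2f}")
```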
Discussion
The integration of FHN neurons into reservoir computing frameworks offers a powerful approach for tackling complex computational tasks. The model's inherent nonlinear dynamics, coupled with its ability to operate near criticality, enhance the performance and robustness of RC systems. Our results highlight the efficiency of FHN-based reservoirs in achieving high classification accuracy while maintaining a manageable computational load. As research progresses, the application of these biologically inspired models is expected to expand across various fields, including robotics, neurophysiology, and artificial intelligence.





Acknowledgements
This work was supported by the Russian Science Foundation, grant No 23-72-10088.
References
1. Vyas, S., Golub, M. D., Sussillo, D., & Shenoy, K. V. (2020). Computation through neural population dynamics. Annual Review of Neuroscience, 43(1), 249-275.
2. Barak, O. (2017). Current Opinion in Neurobiology, 46, 1-6.
3. Sussillo, D. (2014). Current Opinion in Neurobiology, 25, 156-163.
4. Ramezanian-Panahi, M., Abrevaya, G., Gagnon-Audet, J. C., Voleti, V., Rish, I., & Dumas, G. (2022). Frontiers in Artificial Intelligence, 5, 807406.
5. Lukoševičius, M., & Jaeger, H. (2009). Computer Science Review, 3(3), 127-149.
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P187: Climbing fiber impact on human and mice Purkinje cell spines
Monday July 7, 2025 16:20 - 18:20 CEST
P187 Climbing fiber impact on human and mice Purkinje cell spines

Stefano Masoli*1, Egidio D'Angelo1,2

1 Department of Brain and Behavioral Science, University of Pavia, Pavia, Italy
2Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: stefano.masoli@unipv.it

Introduction

Purkinje cells (PCs) are among the most complex neurons of the nervous system and can integrate multiple inputs through their dendritic tree, dotted with tens of thousands of dendritic spines. Two excitatory pathways make synapses onto PC spines: one transmitted by granule cells (GrCs) through their ascending axons (aa) and parallel fibers (pf), and a second by climbing fibers (cf) originating from the inferior olive nucleus. The impact of pf activity on PCs was studied with a multi-compartmental model [1], later improved with human and mouse morphologies and dendritic spines [2]. The impact of cfs on PCs is still highly debated, which prompted this study using the latest PC models with the most up-to-date experimental information.
Methods


Mouse and human PC models with active spines [2] were expanded with five ionic channel types, with locations based on immunohistochemical papers. Dendritic spines were also improved based on the latest experimental data [3]. AMPA and NMDA receptors were tuned to generate fast paired-pulse depression. The cf synapses were distributed on spines in the territory between the pfs and the aspiny trunks. The synaptic impact was tested with cfs alone at various frequencies, and together with pfs. Because of the massive number of sections involved, the simulations were performed with 48 cores on an AMD Threadripper 7980X. The simulation environment was NEURON 8.2.4 [4] and Python 3.10.16.
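A hedged NEURON sketch of the burst protocol: three stimuli at 6 ms intervals delivered through a generic two-exponential synapse onto a toy compartment; the study itself uses tuned AMPA/NMDA mechanisms on full reconstructed morphologies, so all values here are illustrative.

```python
# Toy NEURON burst-stimulation protocol with a generic Exp2Syn synapse.
from neuron import h
h.load_file("stdrun.hoc")

sec = h.Section(name="dend")
sec.insert("pas")

syn = h.Exp2Syn(sec(0.5))
syn.tau1, syn.tau2, syn.e = 0.5, 3.0, 0.0        # ms, ms, mV (illustrative)

stim = h.NetStim()
stim.number, stim.interval, stim.start = 3, 6.0, 10.0   # 3-spike burst, 6 ms ISI
nc = h.NetCon(stim, syn)
nc.weight[0] = 0.001                             # uS, illustrative

v = h.Vector().record(sec(0.5)._ref_v)
h.finitialize(-65)
h.continuerun(60)
print(v.max())                                   # peak depolarization (mV)
```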
Results
The models reproduced the intrinsic and synaptic properties shown in previous PC models [1,2,5]. The stimulations were performed with bursts composed of 3 spikes every 6 ms (180 Hz). The number of spines required to generate a complex spike was estimated at 600 in mice and 1500 in humans. With these numbers, the mouse model showed the typical complex spike shape, whereas the human model could not generate such a response because of its three distinct trees. A distributed approach, with a single cf for each tree [6], yielded results similar to the mouse. The synchronous activation of pfs and cfs produced localized calcium increases in the spines near the stimulation sites.
Discussion
Validated multi-compartmental models built in Python/NEURON allow the exploration of behaviours not yet accessible to experimental techniques. The complex spike recorded in the mouse model matched multiple published papers, whereas human recordings of this response are not yet viable. The model showed that the calcium signal in each separate trunk required a single cf to generate correct responses, as proposed by a recent paper [6]; with a single cf terminal for each main trunk, the simulations gave results in line with the mouse model. Activation of multiple cfs at the same time, on the same human morphology, in connection with the burst-pause behavior, can generate an extensive parameter space.



Acknowledgements
This project/research received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Framework Partnership Agreement No. 650003 (HBP FPA).


References

1. https://www.doi.org/10.3389/fncel.2017.00278
2. https://www.doi.org/10.1038/s42003-023-05689-y
3. https://www.doi.org/10.1101/2024.09.09.612113
4. https://www.doi.org/10.3389/neuro.11.001.2009
5. https://www.doi.org/10.3389/fncel.2015.00047
6. https://www.doi.org/10.1126/science.adi1024


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P188: Neural coding of subthreshold sinusoidal inputs into symbolic temporal spike patterns
Monday July 7, 2025 16:20 - 18:20 CEST
P188 Neural coding of subthreshold sinusoidal inputs into symbolic temporal spike patterns


Maria Masoliver1, Cristina Masoller*1
1Departamento de Física, Universitat Politecnica de Catalunya, Terrassa, Spain
*Email: cristina.masoller@upc.edu


Introduction

Neuromorphic photonics is a new paradigm for optical computing that can revolutionize the fields of signal processing and artificial intelligence. To develop photonic neurons able to process information as sensory neurons do, we need to identify excitable lasers that emit pulses of light (optical spikes) similar to neuronal spikes, and to implement in these lasers the neural coding mechanisms used by neural systems, in particular those used to process weak external inputs in noisy environments.
Methods
We use the stochastic FitzHugh-Nagumo model to simulate spike sequences fired in response to weak (subthreshold) sinusoidal signals. We also use this model to simulate the activity of a population of neurons that all perceive the same subthreshold sinusoidal input. We use a symbolic time-series analysis method known as ordinal analysis [1] to analyze the sequences of inter-spike intervals.
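A minimal sketch of ordinal (Bandt-Pompe) analysis of inter-spike intervals: overlapping windows of D intervals are mapped to their rank-order patterns, and the pattern probabilities form the symbolic code analyzed in the study; the window length and stand-in ISI data are illustrative.

```python
# Ordinal (Bandt-Pompe) pattern probabilities for an ISI sequence.
import numpy as np
from collections import Counter
from itertools import permutations

def ordinal_probabilities(isi, D=3):
    # Map each overlapping window of D intervals to its rank-order pattern.
    patterns = [tuple(np.argsort(isi[i:i + D])) for i in range(len(isi) - D + 1)]
    counts = Counter(patterns)
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in permutations(range(D))}

rng = np.random.default_rng(7)
isi = rng.exponential(10.0, 500)      # stand-in for measured inter-spike intervals
print(ordinal_probabilities(isi))
```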

Results

In the analysis of the spikes of single neurons, we found that the probabilities of the symbols (ordinal patterns) encode information about the signal, because they depend on both the amplitude and the frequency of the signal.
In the analysis of the spikes generated by a population of neurons, we also found that the ordinal probabilities encode information about the amplitude and period of the signal perceived by the neurons. We found that neuronal coupling benefits signal encoding, because groups of neurons are able to encode a small-amplitude signal that cannot be encoded when it is perceived by just one or two neurons. Interestingly, for a population of neurons, just a few random links between them can significantly improve signal encoding.
Discussion

We have found that the probabilities of spike patterns in spike sequences can encode information about a weak (subthreshold) input perceived by the neurons.
An open question is whether this coding mechanism can be implemented in excitable lasers that emit pulses of light (optical spikes) whose statistical properties are similar to those of neuronal spikes. Using ordinal analysis and machine learning, we have found that the sequences of optical spikes emitted by a laser diode in response to low- or high-frequency signals are located in different regions of a 3D feature space, suggesting that information about the frequency of the input signal can be recovered from the analysis of the emitted optical spikes [3].





Figure 1. Left: Optical spikes emitted by an excitable laser (nanosecond time scale); right: neuronal spikes simulated with the FitzHugh Nagumo model (millisecond time scale).
Acknowledgements
Ministerio de Ciencia, Innovación y Universidades (No. PID2021-123994NB-C21), Institució Catalana de Recerca i Estudis Avançats (ICREA Academia), Agencia de Gestió d’Ajuts Universitaris i de Recerca (AGAUR, No. 2021 SGR 00606).
References
[1] Bandt, C., & Pompe, B. (2002). Permutation entropy: a natural complexity measure for time series. Phys. Rev. Lett., 88, 174102. https://doi.org/10.1103/PhysRevLett.88.174102
[2] Masoliver, M., & Masoller, C. (2020). Neuronal coupling benefits the encoding of weak periodic signals in symbolic spike patterns. Commun. Nonlinear Sci. Numer. Simulat., 88, 105023. https://doi.org/10.1016/j.cnsns.2019.105023
[3] Boaretto, B. R. R., Macau, E. E. N., & Masoller, C. (2024). Characterizing the spike timing of a chaotic laser by using ordinal analysis and machine learning. Chaos, 34, 043108. https://doi.org/10.1063/5.0193967


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P189: Elementary Dynamics of Neural Microcircuits
Monday July 7, 2025 16:20 - 18:20 CEST
P189 Elementary Dynamics of Neural Microcircuits

Stefano Masserini*1,2,3, Richard Kempter1,2,3

1 Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
2Bernstein Center for Computational Neuroscience, Berlin, Germany
3Charité-Universitätsmedizin Berlin, Einstein Center for Neurosciences, Berlin, Germany

*Email: stefanomasse@gmail.com

Introduction

Cell-type diversity is a major direction in which systems neuroscience has expanded in the last decade, as networks of excitatory (E) and inhibitory (I) neurons have been enriched with specific neuronal populations, each with its own distinct role in the network dynamics. These advances have mostly been driven by new experimental techniques, often inspiring circuit-specific modeling, even when stark similarities across cortical areas would have allowed describing the dynamics of these microcircuits in a more general mathematical language. Steps toward a general description have been taken by using linear approximations to understand how connectivity shapes responses to perturbations from within or outside the network [1,2].

Methods
In this work, we expand on these findings by studying microcircuit dynamics in the simplest nonlinear model, the threshold-linear network (TLN), and generalize insights originally obtained for all-inhibitory TLNs [3]. This model greatly extends the dynamical repertoire of purely linear networks, by allowing for oscillations and multistability. On the other hand, it retains the simplicity of linear models, since the conditions for each nonlinear regime can be computed in closed form and intuitively interpreted in terms of input and connectivity requirements. With this tool, we not only map previously unrelated systems neuroscience hypotheses to a common reference space, but also gain new insights into specific circuits across the brain.
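For illustration, a minimal sketch of a threshold-linear network integrated with the Euler method (the two-unit connectivity and inputs are toy values, not the circuits analyzed here):

import numpy as np

def simulate_tln(W, b, x0, T=200.0, dt=0.01):
    # Threshold-linear network: dx/dt = -x + [W x + b]_+
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

# toy E-I pair: E excites I, I inhibits E (illustrative weights)
W = np.array([[0.0, -1.5],   # E receives inhibition from I
              [1.2,  0.0]])  # I receives excitation from E
b = np.array([1.0, 0.2])     # external drive
print("fixed point estimate:", simulate_tln(W, b, x0=[0.1, 0.1]))

For these values the active fixed point can also be read off in closed form from x = Wx + b, illustrating how the conditions for each regime remain analytically tractable.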
Results
Namely, we compare balancing strategies in inhibition-stabilized E-I networks and discuss different types of bistability in hippocampal E-I-I networks. We then examine the conditions for gamma oscillations in the canonical circuit (Fig. 1A), providing a mechanistic explanation for the opposing effects of PV and SOM interneurons [4]. In E-E-I circuits, we show that connectivity determines three fundamentally different types of assembly interactions, while in E-E-I-I circuits we find that balanced clustering prevents coordinated inputs to one E-I unit from exerting lateral inhibition (Fig. 1B), while opponent clustering can induce competition even between strongly coupled E assemblies, resulting in different bistable configurations (Fig. 1C).
Discussion
While TLNs have so far not been regarded as a standard rate model for neural populations, these applications show that they can provide interpretable conditions even for the emergence of complex dynamical landscapes. These conditions should be taken into account by future modeling work on neural microcircuits, at least as a benchmark to determine whether additional complexity is necessary to explain the dynamics of interest. The simple structure of this model is also amenable to the addition of variables representing synaptic plasticity or slow adaptive currents. TLNs can also be directly compared to spiking networks, for example because they are the first-order mean-field limit for networks of Poisson neurons [5].




Figure 1. (A) Canonical circuit. (Aii-iii) Oscillation coherence. (Aiv) Effects of impairing SOM or PV (matching shading). (B-C) EEII network. (Bii-iii) Firing modulation relative to the bottom-left point. (Biv) Modulation example (shaded area). (Cii) Dynamical landscape; smaller regions are EII or EEI bistability. (Ciii) Lateral inhibition by either inputs to E1 or I1. (Civ) EI bistability. Input to I1 induces switch.
Acknowledgements
The authors thank Gaspar Cano, Carina Curto, Atilla Kelemen, John Rinzel, Archili Sakevarashvili, and Tilo Schwalger for insightful discussions about this study. Funding source: German Research Foundation, project 327654276 - SFB 1315.
References
[1] https://doi.org/10.1101/2020.10.13.336727
[2] https://doi.org/10.1073/pnas.231104012
[3] https://doi.org/10.48550/arXiv.1804.00794
[4] https://doi.org/10.1038/nn.4562
[5] https://doi.org/10.48550/arXiv.2412.16111
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P190: Simulating healthy and diseased behaviour using spiking neurons
Monday July 7, 2025 16:20 - 18:20 CEST
P190 Simulating healthy and diseased behaviour using spiking neurons

Mavritsaki, E.*1,3, Klein, J.1, Porwal, N.1, Allen, H.A.2, Bowman, H.3, Amanatidou, V.4, Cook, A.1, Clibbens, J.1 and Lintern, M.1

1College of Psychology, Birmingham City University, Birmingham, UK
2School of Psychology, University of Nottingham, Nottingham, UK
3School of Psychology, University of Birmingham, Birmingham, UK
4Worcestershire Health and Care Trust, UK

*Email: eirini.mavritsaki@bcu.ac.uk

Introduction

Spiking neural networks have proven highly effective in simulating both healthy and diseased neural behaviour. They offer researchers the opportunity to study behaviour and, at the same time, understand its relationship with the underlying biological properties of the system. The approach is particularly valuable because these networks closely mimic real neuronal communication, providing a more biologically accurate model than traditional methods, allowing researchers to analyse time-dependent patterns, and offering deeper insights into neural dynamics and cognitive processes. Consequently, spiking neural networks have become an invaluable tool for advancing brain studies and neurological research. In this work, we present two studies utilizing spiking neural networks that extend our previous work with the spiking Search over Time and Space (sSoTS) model.
Methods
The sSoTS model is a spiking neural model incorporating a fast excitatory AMPA recurrent current, a slow excitatory NMDA current, an inhibitory GABA current, and a slow [Ca2+]-activated K+ current (I_AHP). We built upon our previous research in visual search (Mavritsaki et al., 2011; Mavritsaki & Humphreys, 2016) to simulate behavioural findings from our lab on attention in adults, children, and children who score high on the Conners 3AI index for ADHD. We also built upon our previous Alzheimer's work (Mavritsaki et al., 2019) to simulate the N400 and P600 components in the semantic category judgment task (Olichney et al., 2000), which has been used to track ERP changes in patients progressing through MCI to mild AD. See Figure 1.
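For illustration, a minimal sketch of how a slow Ca2+-dependent adaptation current of the I_AHP type shapes firing in models of this family (a leaky integrate-and-fire stand-in with illustrative constants, not the sSoTS implementation itself):

import numpy as np

dt = 0.1                                       # ms
tau_m, tau_ca = 20.0, 300.0                    # membrane and [Ca2+] time constants (ms)
v_rest, v_th, v_reset = -70.0, -50.0, -60.0    # mV
g_ahp, ca_inc = 0.8, 0.5                       # AHP strength, Ca2+ influx per spike
i_ext = 25.0                                   # constant drive (mV equivalent)

v, ca, spikes = v_rest, 0.0, []
for step in range(int(1000 / dt)):
    v += dt * (-(v - v_rest) + i_ext - g_ahp * ca) / tau_m
    ca -= dt * ca / tau_ca                     # slow decay of the Ca2+ trace
    if v >= v_th:                              # spike: reset and Ca2+ influx
        v = v_reset
        ca += ca_inc
        spikes.append(step * dt)

isis = np.diff(spikes)
print(f"{len(spikes)} spikes; first ISI {isis[0]:.1f} ms, last ISI {isis[-1]:.1f} ms")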

Results
Results from our visual search paradigm demonstrate that reducing coupling between neurons in the model successfully simulates the differences between adults and children. Furthermore, our findings suggest that temporal binding between feature items may be a key mechanism underlying differences observed between healthy children and those scoring high on the Conners 3AI test, as reducing this parameter in the model reproduced the observed differences. In our Alzheimer's work, we simulated the biomarkers found with the N400 and P600 ERP components by modelling the semantic category judgment task and modifying parameters related to pathological ionic, neurotransmitter, and atrophy modulations.

Discussion
Results from both studies demonstrate the importance of using spiking neural networks in computational modelling, as they provide valuable insights into brain functions, link different methodologies, and help understand changes that occur in diseased brains. Our Alzheimer's work shows that the disease's pathology can be measured through N400 and P600 congruency effects, thus validating ERPs as biomarkers for AD. Our visual search and ADHD work identifies the crucial role of binding in visual search and provides valuable insights into the ADHD condition that can support updates to the diagnostic criteria for ADHD.





Figure 1. The top part of the figure illustrates the key neuronal properties of the spiking neural network model. The bottom left panel shows the network connectivity implemented to simulate the semantic category judgment task in our Alzheimer's disease study, while the bottom right panel depicts the neural network configuration used to simulate visual search task performance in our ADHD behavioural study.
Acknowledgements
The computations described in this paper were performed using the University of Birmingham's BEAR Cloud service, which provides flexible resources for intensive computational work to the University's research community. See http://www.birmingham.ac.uk/bear for more details.
References
Mavritsaki, E., Bowman, H., & Su, L. (2019). Springer International Publishing. https://doi.org/10.1007/978-3-030-18830-6_11
Mavritsaki, E., Heinke, D., Allen, H., Deco, G., & Humphreys, G. W. (2011). Bridging the gap between physiology and behavior. Psychological Review, 118(1), 3–41. https://doi.org/10.1037/A0021868
Mavritsaki, E., & Humphreys, G. (2016). Journal of Cognitive Neuroscience, 28(10). https://doi.org/10.1162/jocn_a_00984
Olichney, J. M., Van Petten, C., Paller, K. A., Salmon, D. P., Iragui, V. J., & Kutas, M. (2000). Brain, 123(9), 1948–1963. https://doi.org/10.1093/brain/123.9.1948
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P191: Brain Fluidity as a Biomarker for Alzheimer's Disease: Linking Network Dynamics to Clinical Disability Prediction
Monday July 7, 2025 16:20 - 18:20 CEST
P191 Brain Fluidity as a Biomarker for Alzheimer's Disease: Linking Network Dynamics to Clinical Disability Prediction

Camille Mazzara*1,2,3, Gian Marco Duma4, Giuditta Gambino5, Giuseppe Giglia5, Michele Migliore2, Pierpaolo Sorrentino3,6,7
1. Department of Promoting Health, Maternal-Infant Excellence and Internal and Specialized Medicine (PROMISE) G. D'Alessandro, University of Palermo, Palermo, Italy.
2. Institute of Biophysics, National Research Council, Palermo, Italy.
3. Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France.
4. IRCCS E. Medea Scientific Institute, Conegliano, Treviso, Italy.
5. Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy.
6. Institute of Applied Sciences and Intelligent Systems, National Research Council, Pozzuoli, Italy.
7. University of Sassari, Department of Biomedical Sciences, Viale San Pietro, 07100, Sassari, Italy.

*Email: camille.mazzara@ibf.cnr.it

Introduction

Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline and large-scale network dysfunction [1]. While amyloid-beta (Aβ) and tau pathology are well documented [2,3], the network-level mechanisms linking neuronal degeneration to cognitive impairment remain poorly understood. Traditional functional connectivity (FC) analyses provide static representations of brain networks, failing to capture their intrinsic dynamics [4]. We propose brain fluidity, a metric that quantifies network flexibility, as a potential biomarker reflecting AD-related disruptions in brain dynamics.


Methods
After preprocessing and source reconstruction, we analyzed resting-state EEG data from 28 AD patients and 29 healthy controls. Brain fluidity was quantified by measuring the variability of functional connectivity over time, reflecting how interregional synchronization evolves. We assessed its relationship with established AD biomarkers, including cerebrospinal fluid (CSF) levels of Aβ42, phosphorylated tau (p-tau), and total tau (t-tau). Additionally, we examined associations between brain fluidity and cognitive performance (Mini-Mental State Examination, MMSE). Statistical analyses included between-group comparisons and regression models to determine the predictive value of fluidity in tracking disease severity.
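A minimal sketch of one common way to compute such a fluidity measure, namely as the variability of sliding-window dynamic FC (window length, step and the synthetic data are placeholders, not the exact pipeline of this study):

import numpy as np

def fluidity(data, win=100, step=25):
    # data: (n_regions, n_samples) source-reconstructed time series.
    # Compute FC in sliding windows, correlate FC patterns across
    # windows (dFC), and take the variance of the off-diagonal entries.
    n = data.shape[0]
    iu = np.triu_indices(n, k=1)
    fcs = []
    for s in range(0, data.shape[1] - win + 1, step):
        fcs.append(np.corrcoef(data[:, s:s + win])[iu])
    fcs = np.array(fcs)
    dfc = np.corrcoef(fcs)
    return np.var(dfc[np.triu_indices(len(fcs), k=1)])

rng = np.random.default_rng(1)
toy_eeg = rng.standard_normal((20, 2000))      # 20 regions, 2000 samples
print("fluidity:", fluidity(toy_eeg))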
Results
Fluidity analysis across frequency bands (theta, alpha, beta, gamma) revealed significant differences in AD patients (Fig. 1a). In the theta band (4–8 Hz), fluidity was higher in AD than in controls, while in the beta band (14–30 Hz), fluidity was lower. Correlation analyses showed no significant associations between theta fluidity and clinical measures. However, beta fluidity negatively correlated with t-tau and p-tau (Fig. 1c), suggesting a link to neurodegeneration. Notably, no significant associations were found between fluidity and Aβ levels. Using a multilinear regression model, we also found that adding fluidity calculated in the beta band significantly improved the predictive power for clinical disability.
Discussion
These results could imply that changes in the ability of the brain to flexibly switch between different dynamic states are associated with neurodegenerative processes, specifically tau-related damage. Reduced brain fluidity in the beta band may reflect underlying neurodegenerative processes, providing insights into the functional consequences of neuronal loss. Given its sensitivity to AD-related changes, brain fluidity may serve as a promising biomarker for tracking disease progression and evaluating treatment efficacy in clinical settings.





Figure 1. a) Fluidity for each frequency band in AD and control groups. b) dFC matrices averaged across AD (left) and control (right) groups, computed in theta (top) and beta (bottom). c) Correlation between beta-band fluidity and t-tau (left), p-tau (center), and Aβ (right), with significant links to t-tau (p = 0.03) and p-tau (p = 0.01), but not Aβ42.
Acknowledgements

References
1. https://doi.org/10.1016/j.lfs.2020.117996
2. https://doi.org/10.1590/S1980-57642009DN30300003
3. https://doi.org/10.7554/eLife.98920.1
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P192: Identifying Cell-Type-Specific Alterations Underlying Schizophrenia-Related EEG Deficits Using a Multiscale Model of Auditory Thalamocortical Circuits
Monday July 7, 2025 16:20 - 18:20 CEST
P192 Identifying Cell-Type-Specific Alterations Underlying Schizophrenia-Related EEG Deficits Using a Multiscale Model of Auditory Thalamocortical Circuits

Scott McElroy*1,2, James Chen1,2, Nikita Novikov1,2,3, Pablo Fernández-López4, Carmen Paz Suárez-Araújo4, Christoph Metzner5, Daniel Javitt3, Sam Neymotin3, Salvador Dura-Bernal1,2,3
1Global Center for AI, Society and Mental Health, SUNY Downstate Health Sciences University, Brooklyn, United States of America
2Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, United States of America
3Center for Biomedical Imaging & Neuromodulation, Nathan Kline Institute, Orangeburg, United States of America
4Instituto Universitario de Cibernética, Empresa y Sociedad, Universidad de Las Palmas de Gran Canaria, Gran Canaria, Spain
5Technische Universität Berlin, Berlin, Germany


*Email: scott.mcelroy@downstate.edu
Introduction
Schizophrenia is associated with cognitive deficits, including disruptions in sensory processing. Electroencephalography (EEG) studies have identified abnormalities in event-related potentials and cortical oscillations, particularly within the auditory system. Among the most well-established EEG biomarkers are the reduced 40 Hz Auditory Steady-State Response (ASSR) and impaired mismatch negativity (MMN). Understanding the neural mechanisms underlying these EEG deficits is critical for linking molecular and circuit-level alterations to cognitive dysfunctions in schizophrenia.
Methods
We extended our computational model of auditory thalamocortical circuits to investigate the circuit-level mechanisms underlying schizophrenia-related EEG abnormalities [1]. The model simulates a cortical column with over 12,000 neurons and 30 million synapses, incorporating experimentally derived neuron densities, laminar organization, morphology, biophysics, and connectivity across multiple scales. Auditory inputs to the thalamus were modeled using a phenomenological cochlear representation, allowing for the reproduction of realistic physiological responses. Additionally, a more systematic approach to providing background network activity was implemented using Ornstein-Uhlenbeck (OU) processes to model time-varying, statistically independent somatic conductance injections.
Results & Discussion
Our refinements enhance the physiological fidelity of EEG simulations, enabling improved replication of schizophrenia-related biomarkers. The integration of OU-modeled background activity ensures smoother, correlated variations in network input, leading to more biologically realistic fluctuations in neuronal dynamics. The OU process's mean and standard deviation are expressed as input conductance percentages for each cell type, linking them to intrinsic cellular properties. Additionally, we are developing an adaptive algorithm to dynamically calibrate population-specific OU parameters, ensuring model flexibility as it evolves. By incorporating experimentally observed molecular and genetic alterations, our model provides deeper insights into the neural basis of auditory processing deficits in schizophrenia and strengthens the link between cellular dysfunctions and EEG biomarkers.
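A minimal sketch of the kind of OU conductance injection described in the Methods (exact one-step update; the mean and SD would in practice be set as percentages of each cell type's input conductance, here they are just illustrative numbers):

import numpy as np

def ou_conductance(g_mean, g_std, tau=5.0, dt=0.025, T=1000.0, seed=0):
    # Ornstein-Uhlenbeck conductance g(t): relaxes to g_mean with time
    # constant tau (ms) and has stationary standard deviation g_std.
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    g = np.empty(n)
    g[0] = g_mean
    a = np.exp(-dt / tau)                      # exact discretization
    noise_sd = g_std * np.sqrt(1.0 - a**2)
    for i in range(1, n):
        g[i] = g_mean + a * (g[i - 1] - g_mean) + noise_sd * rng.standard_normal()
    return np.clip(g, 0.0, None)               # conductances cannot be negative

g = ou_conductance(g_mean=10.0, g_std=3.0)
print(f"mean {g.mean():.2f}, sd {g.std():.2f}")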






Acknowledgements
This work is supported by NIBIB U24EB028998
References
1. https://doi.org/10.1016/j.celrep.2023.113378
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P193: Brief Neurofeedback Training Increases Midline Alpha Activity in Default Mode Network
Monday July 7, 2025 16:20 - 18:20 CEST
P193 Brief Neurofeedback Training Increases Midline Alpha Activity in Default Mode Network

Matthew McGowan1, Alison Crilly1, Rongxiang Tang2, Yi-Yuan Tang*1

1College of Health Solutions, Arizona State University, Tempe, United States
2Department of Psychological & Brain Sciences, Texas A&M University, College Station, United States


*Email: yiyuan@asu.edu

Introduction

EEG Neurofeedback trains individuals to voluntarily modulate brainwave activity, promoting cognitive, emotional, and behavioral improvements by modulating large-scale brain networks and inducing neural plasticity [1]. While traditional neurofeedback protocols often require 20–40 sessions over several weeks or months, this study investigated whether a brief neurofeedback intervention—10 sessions over 2 weeks—could achieve similar neural regulation, particularly within the Default Mode Network (DMN).
Methods
To maximize the effects of neurofeedback, we selected a protocol designed to reward frontal midline Theta (4–8 Hz) to enhance executive function and emotional balance, and central sensorimotor rhythm (SMR, 12–15 Hz) to promote focus and calmness, while inhibiting posterior midline Beta (16–35 Hz) to reduce stress and improve sensory clarity. This protocol aims to enhance self-regulation, resilience, and overall brain efficiency, thereby facilitating neurofeedback learning and benefits.
Twenty participants with mild alcohol, tobacco, and/or cannabis use were recruited, and 19 provided usable data. Participants were instructed to complete each neurofeedback session with minimal effort to achieve the training goals. The NASA Task Load Index (NASA-TLX), a subjective workload assessment tool, was administered to 12 participants (11 with usable data) before and after 10 neurofeedback sessions. EEG recordings were taken before (T1) and after (T2) the training. The data were analyzed using Quantitative Electroencephalographic (QEEG) analysis, and paired t-tests were conducted to evaluate changes in brainwave patterns and neurofeedback workload (effort and mental demand).
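For illustration, a minimal sketch of the kind of relative band-power computation and paired comparison used in QEEG analyses (sampling rate, band edges and the synthetic single-channel data are placeholders):

import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

def relative_power(x, fs=256.0, band=(8.0, 12.0), total=(1.0, 45.0)):
    # Relative band power of one EEG channel via Welch's PSD.
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    in_band = (f >= band[0]) & (f < band[1])
    in_total = (f >= total[0]) & (f < total[1])
    return pxx[in_band].sum() / pxx[in_total].sum()

# hypothetical pre/post recordings for 19 participants (noise stand-ins)
rng = np.random.default_rng(2)
pre = [relative_power(rng.standard_normal(256 * 60)) for _ in range(19)]
post = [relative_power(rng.standard_normal(256 * 60)) for _ in range(19)]
print(ttest_rel(pre, post))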
Results
Quantitative EEG analysis revealed significant increases in frontal and posterior midline Alpha relative power (p = 0.011 and p = 0.013, respectively), alongside a significant decrease in the Theta/Alpha ratio (p = 0.047) and a significant increase in the Alpha/Beta ratio (p = 0.035). However, no significant changes in Theta or SMR power were detected after neurofeedback, although a marginally significant reduction in Beta absolute power was found (p = 0.074). Subjective workload assessments (NASA-TLX) indicated significant reductions in effort (p = 0.001) and mental demand (p = 0.0008).
Discussion
These findings suggest that brief neurofeedback training can enhance midline Alpha activity and modulate key neural frequency ratios, potentially improving DMN functional connectivity and promoting relaxation, self-reflection, and emotional regulation [2,3]. While preliminary, these results highlight the neuroplastic potential of short-term neurofeedback training, with implications for addressing DMN dysregulation in conditions such as substance use disorders, anxiety, and depression. Further research with larger samples is needed to understand the mechanisms and broader implications of these findings.




Acknowledgements
This work is supported by the ONR N000142412270 and NIH R33 AT010138.
References
1. Bowman, A. D., et al. (2017). Relationship between alpha rhythm and the default mode network: An EEG-fMRI study. J Clin Neurophysiol, 34(6), 527-533. https://doi.org/10.1097/WNP.0000000000000411
2. Tang, Y. Y., & Posner, M. I. (2009). Attention training and attention state training. Trends Cogn Sci, 13(5), 222–227. https://doi.org/10.1016/j.tics.2009.01.009
3. Tang, Y. Y., Tang, R., Posner, M. I., & Gross, J. J. (2022). Effortless training of attention and self-control: mechanisms and applications. Trends Cogn Sci, 26(7), 567-577. https://doi.org/10.1016/j.tics.2022.04.006
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P194: Strategies for Neurofeedback Success: Exploring the Relationship Between Alpha Power and Mental Effort
Monday July 7, 2025 16:20 - 18:20 CEST
P194 Strategies for Neurofeedback Success: Exploring the Relationship Between Alpha Power and Mental Effort
Matthew McGowan1, Alison Crilly1, Rongxiang Tang2, Yi-Yuan Tang*1
1College of Health Solutions, Arizona State University, Tempe, United States
2Department of Psychological & Brain Sciences, Texas A&M University, College Station, United States
*Email: yiyuan@asu.edu
Introduction

EEG neurofeedback is a non-invasive neuromodulation technique that enables individuals to regulate brain activity through real-time feedback, promoting cognitive enhancement, emotional regulation, and adaptive brain plasticity. However, it remains unknown which regulation strategies lead to successful neurofeedback. Based on previous research, we hypothesize that effortless strategies (less mental demand and effort) produce neurofeedback success indexed by increased alpha activity in the default mode network [1,2].


Methods
To maximize the effects of neurofeedback, we selected a protocol designed to reward frontal midline Theta (4–8 Hz) to enhance executive function and emotional balance, and central sensorimotor rhythm (SMR, 12–15 Hz) to promote focus and calmness, while inhibiting posterior midline Beta (16–35 Hz) to reduce stress and improve sensory clarity. The protocol was implemented in three variants: two with eyes closed (soft music) and one with eyes open (nature scene). It aims to enhance self-regulation, resilience, and overall brain efficiency, facilitating neurofeedback learning and benefits. This study examined the effects of 10 consecutive neurofeedback sessions reinforcing midline Theta and SMR while inhibiting high Beta in 12 participants (11 with usable data). Behavioral assessments included the NASA Task Load Index (NASA-TLX) and the Rating Scale for Mental Effort (RSME) to evaluate perceived mental workload, alongside post-session interviews documenting self-regulation strategies.
Results
RSME results showed significant decreases in mental effort for all three protocol variants (p = 0.051, p = 0.015, and p = 0.011, respectively; 10 usable datasets). We also detected significant reductions in mental demand and effort on the NASA-TLX (p = 0.0008 and p = 0.001, respectively). A negative correlation between posterior parietal alpha power and effort (r = -0.643, p = 0.0327) was found, suggesting that higher alpha activity was associated with reduced cognitive workload. Correlation analysis indicated that participants with greater increases in posterior alpha power exhibited smaller reductions in perceived external demand (r = 0.650, p = 0.030), suggesting that neurofeedback training altered brain activity and reduced effort despite the persistence of task-related demand. Additionally, significant increases in frontal and posterior midline alpha power (p = 0.011 and p = 0.013) suggested enhanced default mode network activity.
Discussion
These findings suggest that neurofeedback training promotes neural efficiency and cognitive ease, reinforcing the effectiveness of an effortless strategy for learning self-regulation of brain activity. By facilitating effortless engagement, neurofeedback may optimize neural adaptation, enhancing brain plasticity, cognitive efficiency, and self-regulation.



Acknowledgements
This work is supported by the ONR N000142412270 and NIH R33 AT010138.
References
1. Tang, Y. Y., & Posner, M. I. (2009). Attention training and attention state training. Trends Cogn Sci, 13(5), 222–227. https://doi.org/10.1016/j.tics.2009.01.009

2. Tang, Y. Y., Tang, R., Posner, M. I., & Gross, J. J. (2022). Effortless training of attention and self-control: mechanisms and applications. Trends Cogn Sci, 26(7), 567-577. https://doi.org/10.1016/j.tics.2022.04.006
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P195: Electrical coupling in thalamocortical networks cumulatively reduces cortical correlation to sensory inputs
Monday July 7, 2025 16:20 - 18:20 CEST
P195 Electrical coupling in thalamocortical networks cumulatively reduces cortical correlation to sensory inputs

Austin J. Mendoza1, Julie S. Haas*1

1Department of Biological Sciences, Lehigh University, Bethlehem PA


*Email: julie.haas@lehigh.edu

Introduction

Thalamocortical (TC) cells relay sensory information to the cortex and also drive their own feedback inhibition through their excitation of the thalamic reticular nucleus (TRN). The inhibitory cells of the TRN are extensively coupled through electrical synapses. Although electrical synapses are most often noted for their roles in synchronizing rhythmic forms of neuronal activity, they are also positioned to modulate responses to transient information flow across and throughout the brain, though this effect is seldom explored. Here we sought to understand how electrical synapses embedded within a network of TRN neurons regulate the processing of ongoing sensory inputs during relay from thalamus to cortex.
Methods
We utilized Hodgkin-Huxley point models to construct a network of 9 TC and 9 TRN cells, with one cortical output neuron summing the TC activity. Pairs of TC and TRN cells were reciprocally coupled by chemical synapses. The TRN cells were each electrically coupled to two neighboring cells, forming a ring topology. Each TC cell received an exponential current input in sequence, with intervals between inputs varying from 10 to 50 ms across simulations. This architecture and sequence of inputs allowed us to assess the functional radius of an electrical synapse. We compared the cumulative effects of each additional TRN electrical synapse on modulating the responses of the TRN and TC cells and the cortical output.
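A much-simplified sketch of the ring coupling scheme: each cell is electrically coupled to its two neighbours via a gap-junction current g_gap·(V_neighbour - V_self). Passive leaky units replace the Hodgkin-Huxley point models used in the study, and all values are illustrative:

import numpy as np

N, dt, tau = 9, 0.1, 10.0        # cells, time step (ms), membrane tau (ms)
v_rest, g_gap = -65.0, 1.0       # resting potential (mV), coupling strength
v = np.full(N, v_rest)

for step in range(int(120 / dt)):
    # gap-junction current from the two ring neighbours
    i_gap = g_gap * ((np.roll(v, 1) - v) + (np.roll(v, -1) - v))
    i_ext = np.zeros(N)
    if step * dt > 100.0:        # transient input to the middle cell
        i_ext[4] = 20.0
    v += dt * (-(v - v_rest) + i_ext + i_gap) / tau

print("depolarization (mV), spreading from cell 4:", np.round(v - v_rest, 2))

The decay of the depolarization with distance along the ring gives a direct handle on the functional radius of an electrical synapse.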
Results
Increasing coupling strength between TRN cells modulated TRN responses by decreasing spike latency and increasing duration of TRN spike trains. Effects were strongest for smaller intervals between inputs, and cumulative with additional synapses. In TC cells, we also observed changes in latency and duration of responses and decorrelation of the responses from the inputs. These effects were strongest for larger intervals between inputs and also increased with coupling strength. Coupling within TRN modulated cortical integration of TC inputs by increasing spike rate but reducing spike correlation to the input sequence that was presented to the TC layer. These effects were robust to additive noise.
Discussion
Here we show that TRN electrical synapses exert a powerful influence on thalamocortical relay, unexpectedly reducing the correlation of cortical output to inputs presented to the thalamus. We noted that the effects of electrical synapses were cumulative: coupling between pairs alone did not predict the effects seen in a network context, as the coupling coefficient measured across multiple neurons drops to unmeasurable levels. These results show that the multi-synaptic influences of electrically coupled cells should be included in more complex and realistic network topologies.




Acknowledgements

References

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P196: Kernel-based LFP estimation in detailed large-scale spiking network model of visual cortex
Monday July 7, 2025 16:20 - 18:20 CEST
P196 Kernel-based LFP estimation in detailed large-scale spiking network model of visual cortex



Nicolò Meneghetti1,2,*, Atle E. Rimehaug3, Gaute T. Einevoll4,5, Alberto Mazzoni1,2, Torbjørn V. Ness4


1The Biorobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
2Department of Excellence for Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
3Department of Informatics, University of Oslo, Oslo, Norway
4Department of Physics, Norwegian University of Life Sciences, Ås, Norway
5Department of Physics, University of Oslo, Oslo, Norway


*Email: nicolo.meneghetti@santannapisa.it


Introduction

Large-scale neuronal networks are fundamental tools in computational neuroscience. A key challenge in this domain is simulating measurable signals like local field potentials (LFPs), which bridge the gap between in silico model predictions and experimental data. Simulating LFPs in large-scale models, however, requires biologically detailed multicompartmental (MC) neuron models, which impose significant computational demands. To address this, multiple simplified approaches have been developed. In our work [1] we extended a kernel-based method to enable accurate LFP estimation in a state-of-the-art MC model of the mouse primary visual cortex (V1) from the Allen Institute [2], [3] while significantly reducing computational costs.

Methods
This V1 model features extensive biological detail, with over 50,000 MC neurons across six cortical layers [2], as well as experimentally recorded afferent inputs from both thalamic and lateromedial visual areas [3].
Instead of direct MC simulations, our method estimates the LFP by convolving population firing rates with precomputed spatiotemporal kernels (Fig. 1A), which represent the average postsynaptic LFP response to a presynaptic spike (see, e.g., Fig. 1B). This drastically reduced the computational cost while maintaining estimation accuracy.
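In spirit, the estimation step reduces to a sum of convolutions; a minimal single-channel sketch (the kernel shapes, signs and rates below are invented placeholders, not the precomputed kernels of the V1 model):

import numpy as np

def lfp_from_rates(rates, kernels):
    # Kernel method: LFP = sum over presynaptic populations of the
    # population firing rate convolved with its LFP kernel.
    lfp = 0.0
    for name in rates:
        full = np.convolve(rates[name], kernels[name], mode="full")
        lfp = lfp + full[:len(rates[name])]    # keep causal part
    return lfp

dt = 0.1                                       # ms
t_k = np.arange(0, 20, dt)                     # kernel support (ms)
kernels = {"exc": -0.02 * np.exp(-t_k / 2.0),
           "inh": 0.01 * np.exp(-t_k / 5.0)}
t = np.arange(0, 500, dt)
rates = {"exc": 5 + 2 * np.sin(2 * np.pi * 0.004 * t),
         "inh": 3 + np.sin(2 * np.pi * 0.004 * t + 0.5)}
print("LFP samples:", lfp_from_rates(rates, kernels).shape[0])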
Results
The kernel method accurately estimated LFPs in both superficial (Fig. 1C) and deep layers (Fig. 1D). By treating LFPs as the sum of convolutions of neuronal firing rates and LFP kernels, the method also enabled disentangling the contributions of different neuronal populations to the overall LFP. We found that V1 LFPs are primarily driven by external inputs, with thalamic afferents dominating in layer 4 (Fig. 1F) and lateromedial feedback influencing L2/3 (Fig. 1E). In contrast, local synaptic activity contributed minimally, challenging the conventional view that PV neurons are primary LFP drivers [4]. In fact, we showed that the apparent influence of PV neurons on the LFP reflects their correlation with external inputs rather than a direct contribution.
Discussion
Our findings establish the kernel-based method as a robust and efficient tool for LFP estimation in large-scale network models. By significantly reducing computational costs, this approach makes detailed LFP simulations more practical while also providing insights into cortical LFP generation. Our results highlight the predominant role of external synaptic inputs, while challenging the conventional view that local network activity, including inhibitory interneurons, is a primary LFP driver. This methodology provides a useful framework for studying sensory processing and network dynamics in large-scale models, helping to clarify the contributions of different neuronal populations to cortical LFPs.




Figure 1. (A) Schematic of the kernel-based LFP estimation. (B) Set of kernels for computing L2/3 LFPs for different presynaptic families. (C) L2/3 LFPs computed with both MC simulations (red) and kernel convolution (black). (D) Same as C, for layer 4. (E) Cross-R² matrix between the total L2/3 LFP and the LFP generated by the synaptic activity of each population in the model. (F) Same as E, for layer 4.
Acknowledgements
This work was supported by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MAD-2022-12376927 (“The etiopathological basis of gait derangement in Parkinson’s disease: decoding locomotor network dynamics”).
References
[1]https://doi.org/10.1101/2024.11.29.626029
[2]https://doi.org/10.1016/j.neuron.2020.01.040
[3]https://doi.org/10.7554/eLife.87169
[4]https://doi.org/10.1038/srep40211
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P197: Resilience of local microcircuitry firing dynamics to selective connectivity degeneration
Monday July 7, 2025 16:20 - 18:20 CEST
P197 Resilience of local microcircuitry firing dynamics to selective connectivity degeneration

Simachew Mengiste*1, Ad Aertsen2, Demian Battaglia1, Arvind Kumar3

1Functional System Dynamics / LNCA UMR 7364, University of Strasbourg, France
2BCCN / University of Freiburg, Germany
3KTH Royal Institute of Technology, Stockholm, Sweden

*Email: mengiste@unistra.fr


Introduction
Connectivity within local cortical microcircuits shapes spiking dynamics, influencing firing rate, synchrony, and regularity (and thus information bandwidth). Often modeled as random and sparse (Erdös-Rényi, ER) or with small-world or scale-free properties, connectivity derived from detailed connectomic reconstructions (Egger et al., 2014) displays dense cell clusters, diverging from mere randomness.
Neurodegenerative diseases (e.g., Alzheimer’s) induce neuronal and synaptic loss, disrupting dynamics. We systematically examine how pruning affects microcircuits with different topologies, revealing that resilience strongly depends on connectivity, with the "real connectome" being particularly robust.


Methods
We studied three random network topologies—Erdös-Rényi (ER), small-world (SW), and scale-free (SF)—plus a fourth based on real connectome (RC) reconstructions. Neurons were modeled as leaky integrate-and-fire units, with excitatory and inhibitory inputs shaping membrane potential dynamics.

Network degeneration was simulated via progressive pruning of synapses and neurons, using random or targeted sequences based on node degree or centrality. We analyzed firing rate, correlations, and spiking variability (coefficient of variation), alongside net synaptic currents received on average. Structural changes were assessed via graph metrics. We then systematically probed how firing dynamics evolved in the four ensembles along neurodegeneration.
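A minimal sketch of the pruning protocol on a random graph (networkx-based, with toy sizes; the targeted scheme here uses node degree, one of the criteria mentioned above):

import networkx as nx
import numpy as np

def prune(g, fraction, by="random", seed=0):
    # Remove a fraction of nodes, at random or highest-degree first.
    rng = np.random.default_rng(seed)
    n_remove = int(fraction * g.number_of_nodes())
    if by == "degree":
        victims = sorted(g.nodes, key=lambda u: g.degree(u), reverse=True)[:n_remove]
    else:
        victims = rng.choice(list(g.nodes), size=n_remove, replace=False)
    h = g.copy()
    h.remove_nodes_from(victims)
    return h

g = nx.erdos_renyi_graph(1000, 0.1, seed=1)
for frac in (0.1, 0.3, 0.5):
    degs = [d for _, d in prune(g, frac, by="degree").degree()]
    print(f"pruned {frac:.0%}: mean degree {np.mean(degs):.1f}")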

Results
Using different network topologies and neurodegenerative strategies, we found that activity states were largely independent of topology across the different ensembles. Degeneration induced similar firing rate and synchrony variations across neurodegenerative schemes. We hypothesized that E-I balance changes, rather than topology, drove these dynamics. The effective synaptic weight (ESW) best predicted network activity, explaining firing rate, variability, and synchrony—except pairwise correlation, which depended on shared presynaptic neighbors and connection density. The real connectome (RC) followed similar ESW dependencies but exhibited broader stability ranges for all different firing parameters.

Discussion
While most neurodegeneration models focus on long-range connectivity changes, local microcircuits are also affected, altering synchrony and information processing. We find that local circuit dynamics are indeed disrupted, but less dependent on precise connectivity than expected. Instead, the effective synaptic weight (ESW) emerges as a stronger predictor of network behavior, making it a key measure for assessing function in both healthy and diseased states. The anomalous stability of firing parameters in networks with realistic connectivity suggests that microcircuit properties may have evolved to enhance functional resilience.




Acknowledgements
ANR PEPR Santé Numérique "BHT - Brain Health Trajectories"
References
Egger, R., Dercksen, V. J., Udvary, D., Hege, H.-C., & Oberlaender, M. (2014). Generation of dense statistical connectomes from sparse morphological data. Frontiers in Neuroanatomy, 8, 129. https://doi.org/10.3389/fnana.2014.00129


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P198: Binary Brains: Excitable Dynamics Simplify Neural Connectomes
Monday July 7, 2025 16:20 - 18:20 CEST
P198 Binary Brains: Excitable Dynamics Simplify Neural Connectomes

Arnaud Messé¹, Marc-Thorsten Hütt², Claus C. Hilgetag¹,³*
¹ Institute of Computational Neuroscience, Hamburg Center of Neuroscience, University Medical Center Eppendorf, Hamburg, Germany
² Computational Systems Biology, School of Science, Constructor University, Bremen, Germany
³ Department of Health Sciences, Boston University, Boston, MA, USA
*Email: c.hilgetag@uke.de


Introduction
Neural connectomes, representing the structural backbone of brain dynamics, are traditionally analyzed as weighted networks. However, the computational cost and methodological challenges of weighted representations hinder their widespread use. Here, we demonstrate that excitable dynamics—a common mechanism in neural and artificial networks—enable a thresholding approach that renders weighted and binary networks functionally equivalent. This finding simplifies network analyses and supports efficient artificial neural network (ANN) design by drastically reducing memory and computational demands.
Methods
We examined excitable network dynamics using a cellular automaton-based excitable model (SER model) and the FitzHugh-Nagumo model. By mapping the local excitation threshold onto global network weights, we identified a threshold at which binarized networks produce activity patterns statistically indistinguishable from those in weighted networks. Simulations were performed on synthetic networks, empirical structural brain connectivity data (MRI-derived), and artificial neural networks trained on the MNIST dataset. Computational efficiency was assessed in terms of memory usage and execution time.
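A minimal sketch of an SER-type update on a thresholded (binarized) network; the log-normal weights, binarization threshold and rate constants below are illustrative stand-ins:

import numpy as np

S, E, R = 0, 1, 2  # susceptible, excited, refractory

def ser_step(state, A, theta, p_rec=0.1, p_spont=0.001, rng=None):
    # S -> E if summed input from excited neighbours exceeds theta
    # (or spontaneously); E -> R; R -> S with probability p_rec.
    if rng is None:
        rng = np.random.default_rng()
    n = len(state)
    drive = A @ (state == E)
    new = state.copy()
    new[state == E] = R
    new[(state == R) & (rng.random(n) < p_rec)] = S
    new[(state == S) & ((drive > theta) | (rng.random(n) < p_spont))] = E
    return new

rng = np.random.default_rng(4)
n = 200
W = rng.lognormal(0.0, 1.0, (n, n)) * (rng.random((n, n)) < 0.1)
A_bin = (W > np.median(W[W > 0])).astype(float)   # thresholded binary network
state = rng.integers(0, 3, n)
for _ in range(100):
    state = ser_step(state, A_bin, theta=1.0, rng=rng)
print("fraction excited:", np.mean(state == E))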
Results & Discussion
Our findings [1] show that, under appropriate thresholding, binarized networks accurately reproduce coactivation patterns and functional connectivity observed in weighted brain networks. This effect holds across diverse network topologies and weight distributions, particularly for log-normal weight distributions found in empirical data. Computationally, binarized networks require significantly less memory and reduce processing times by orders of magnitude. These findings not only simplify empirical network analyses in neuroscience but also suggest a general principle for optimizing computational models in various domains, including machine learning, complex systems, and bio-inspired AI. Particularly in ANNs, thresholding maintains classification accuracy while drastically lowering the number of parameters, making binary networks a promising approach for efficient AI design.



Acknowledgements
The research was supported by the Deutsche Forschungsgemeinschaft (DFG) - SFB 936 - 178316478 - A1 & Z3, SPP2041 - 313856816 - HI1286/7-1, TRR 169 - 261402652 - A2, and the EU Horizon 2020 Framework Programme (HBP SGA2 & SGA3).
References
[1] https://doi.org/10.1101/2024.06.23.600265

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P199: Efficient slope detection with regular spiking and bursting point neuron models
Monday July 7, 2025 16:20 - 18:20 CEST
P199 Efficient slope detection with regular spiking and bursting point neuron models

Rebecca Miko*1, Marcus M. Scheunemann2,3, Volker Steuber1, Michael Schmuker1

1Biocomputation Research group, University of Hertfordshire, Hatfield, UK
2Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK
3Autonomy Department, Dexory, London, United Kingdom

*Email: rebeccamiko@outlook.com

Introduction

In real-world environments, odour stimuli exhibit a complex temporal structure due to turbulent gas dispersion, resulting in intermittent and sparse signals. These turbulence-induced fluctuations can be rapid yet contain valuable information crucial for locating odour sources. This ability is essential for both biological agents in foraging and mate-seeking behaviours, as well as robotic gas sensing in environmental and industrial monitoring. However, omnipresent turbulence destroys concentration gradients. Research suggests that the temporal dynamics of odour signals encode key information about the olfactory scene [1, 2].

Methods
Using the Izhikevich model [3], we develop neurons that spike at the rising edges (Fig. 1a) of naturalistic input signals across varying frequencies. We then compare these neurons to a two-compartmental model [4], which predominantly fires bursts at positive slopes in naturalistic inputs. By analysing the spiking behaviour of both models, we assessed whether bursting mechanisms are necessary for detecting odour signal dynamics.
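For illustration, a minimal sketch of the Izhikevich model [3] driven by a slowly fluctuating input, using the regular-spiking parameters of Fig. 1 (bottom panel); the input is a low-pass-filtered noise stand-in for the naturalistic signals used in the study:

import numpy as np

def izhikevich(i_inp, a=0.01, b=0.2, c=-50.0, d=8.0, dt=0.1):
    # Izhikevich point model; returns spike times in ms.
    v, u = -70.0, -14.0
    spikes = []
    for k, i in enumerate(i_inp):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + i)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike and reset
            v, u = c, u + d
            spikes.append(k * dt)
    return spikes

rng = np.random.default_rng(5)
noise = rng.standard_normal(20000)
sig = np.convolve(noise, np.ones(200) / 200, mode="same") * 30 + 5
print(len(izhikevich(sig)), "spikes over 2 s of input")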
Results
Our findings indicate that a regular spiking neuron can effectively encode the slopes of input signals through discrete spike events, and that these detectors do not need to have the bursting mechanism (Fig. 1b). In contrast, the two-compartmental model [4] predominantly fires bursts in response to rising signal slopes, while the Izhikevich model generates single spikes at these transitions while maintaining computational efficiency. This demonstrates that a simple spiking neuron can capture key temporal features of odour signals without complex bursting dynamics.
Discussion
These results suggest that detecting odour signal slopes does not require burst firing. Instead, regular spiking neurons can efficiently encode temporal features of turbulent odour signals. Given the computational efficiency of the Izhikevich point neuron model, our findings offer potential applications in robotic gas navigation, where rapid and accurate data processing is crucial. By leveraging simple neural mechanisms, future research can explore bio-inspired gas-sensing systems for environmental and industrial monitoring.




Figure 1. Top trace: Gaussian white noise input in nA (5 Hz; µ = 0.006; σ = 0.015). Bottom trace: membrane potential response in mV. Top panel: neuron with parameters {a: 0.01, b: 0.2, c: -35, d: 5.0}. Asterisks mark burst onsets (grey dotted lines added for clarity). Bursts are defined by ISI ≤ 10 ms. No single spikes were produced. Bottom panel: neuron with parameters {a: 0.01, b: 0.2, c: -50, d: 8.0}. Asterisks mark spikes.
Acknowledgements
Funding received from the NSF/MRC NeuroNex Odor2Action programme 274 (NSF #2014217, MRC #MR/T046759/1).
References
[1] Schmuker, M., Bahr, V., & Huerta, R. (2016). Exploiting plume structure to decode gas source distance using metal-oxide gas sensors. Sensors and Actuators B: Chemical, 235, 636–646
[2] Ackels, T., Erskine, A., Dasgupta, D., et al. (2021). Fast odour dynamics are encoded in the olfactory system and guide behaviour. Nature, 593(7859), 558–563
[3] Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569–1572

[4] Kepecs, A., Wang, X. J., & Lisman, J. (2002). Bursting neurons signal input slope. Journal of Neuroscience, 22(20), 9053–9062
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P200: Modeling the response of cortical cell populations to transcranial magnetic stimulation
Monday July 7, 2025 16:20 - 18:20 CEST
P200 Modeling the response of cortical cell populations to transcranial magnetic stimulation

Aaron Miller1, Konstantin Weise1,2,Thomas R. Knösche*1,3


1Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany



2Leipzig University of Applied Sciences, Germany

3Technical University Ilmenau, Germany


*email: knoesche@cbs.mpg.de
Introduction

The response of cortical neurons to TMS depends on the locally induced electric field magnitude and direction as well as on the physiological, biophysical, and microstructural properties of the involved cells. Here, we provide a modeling framework to integrate standard neural population modeling with numerically estimated TMS induced electric fields, mediated by detailed information on cell morphology and physiology. We exemplify this framework for the stimulation of primary motor cortex (M1), giving rise to observable electromyographic recordings in muscles (motor evoked potentials – MEP) as well as fast activity volleys in the EEG (DI-waves).


Methods
The model comprises pairs of pre-/postsynaptic neural populations and their generation of short-latency (<10 ms) responses upon TMS. We focus on the generation of I-waves by activation of neurons that project to layer 5 (L5) corticospinal neurons. We use realistic compartment models to simulate spatiotemporal spiking dynamics on the axonal arbors of presynaptic neurons. This output is coupled into L5 cells according to the morphologies of the presynaptic axonal and postsynaptic dendritic trees. The resulting current entering L5 somata defines an average current input to a neural mass model. We explore the sensitivity towards model parameters using a generalized polynomial chaos (gPC) approach.

Results
Fig. 1A-C show the resulting modeling pipeline. The output activity of L5 neurons due to stimulation of upstream L2/3 neurons is presented in Fig. 1D. We observe a strong directional dependency at low and medium intensity, decreasing at higher intensities, which agrees with experimental and modeling results (Souza et al., 2022; Weise et al., 2023). A gPC surrogate of the activity function using 4000 model evaluations with random parameter distributions resulted in a normalized root mean square deviation of 1.9% tested against 1000 independent verification runs.
The average Sobol indices revealed the most influential parameters and combinations thereof, i.e., E/I balance (42%), stimulation intensity (13%), and a combination of both (14%).




Discussion
The model provides the basis for simulating TMS-evoked activity using parsimonious neural mass models (NMMs) with high biological detail. Previous coupling models were based on coarse approximations and ignored the complex mechanisms of how TMS activates neuronal populations. The model pipeline can also be adapted to other brain stimulation methods such as tDCS. The calculated surrogate models will be provided for download in order to allow efficient calculation of the input currents to L5 PCs.






Figure 1. A: Parameters of the TMS-induced electric field; B and C: Illustration of the model pipeline - the induced e-field acts on terminals of presynaptic axons. Spreading of activity in axonal arbors is captured by the axonal delay kernel. The postsynaptic synapto-dendritic delay kernel accounts for extra position-dependent delay and yields the current entering the soma; D: Resulting input current to L5 PC over time.
Acknowledgements
The publication was supported by BMBF grant 01GQ2201 (KW, TRK).
References
K. Weise, T. Worbs, B. Kalloch, V.H. Souza, A.T. Jaquier, W. Van Geit, A. Thielscher, T.R. Knösche: Directional Sensitivity of Cortical Neurons Towards TMS Induced Electric Fields. Imaging Neuroscience 1: 1–22 (2023)


V.H. Souza, J.O. Nieminen, S. Tugin, L.M. Koponen, O. Baffa, R.J. Ilmoniemi: TMS with fast and accurate electronic control: Measuring the orientation sensitivity of corticomotor pathways. Brain Stimulation 15(2), 306–315 (2022)
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P201: HoloNeV: Holographic visualization tool for neural network activity
Monday July 7, 2025 16:20 - 18:20 CEST
P201 HoloNeV: Holographic visualization tool for neural network activity

Safeer A. Mirani1, Pirah Menon1, Rosanna Migliore2, Michele Migliore2, Beniamina Mercante2, Paolo Enrico1, Sergio MG Solinas1*


1Department of Biomedical Sciences, University of Sassari, Sassari, Italy
2Institute of Biophysics, National Research Council, Palermo, Italy



*Email: smgsolinas@uniss.it

Introduction
Recent neuroscience initiatives have generated extensive data on neuroanatomy and brain function, enabling the development of detailed models of brain areas. While tools like NetPyne [1] and PyNN [2] facilitate network design on NEST [3] or NEURON [4], visualization tools are lagging behind. Emerging 3D holographic devices are bridging the digital and physical worlds by instancing interactive virtual objects in the real world, i.e., Mixed Reality. Here, we introduce HoloNeV, a high-performance tool for visualizing and interacting with 3D neural network models. We validate HoloNeV on a neuronal dataset derived from the CA1 region of the mouse hippocampus [5], encompassing the somata of pyramidal cells and interneurons divided into four neuronal layers.


Methods
We developed HoloNeV in Unity 3D (version 2022), a game development platform designed for dynamic, high-resolution rendering of complex animations, to run on the Microsoft HoloLens 2 headset using the Mixed Reality Toolkit for MR integration. We leverage GPU instancing to enhance visualization performance, enabling the rapid rendering of numerous neuron representations simultaneously, up to 300,000 somata. We designed custom low-level code to control data management on the GPU (a shader), which supports GPU-accelerated rendering with stereo capabilities, transforming neuron positions and activations into an immersive visual experience. Tests were run on a workstation with an Intel Xeon w9-3495X CPU, 128 GB RAM, and an Nvidia RTX A5500 GPU.


Results
Following successful hardware integration and software development, the system was tested using the hippocampus dataset visualized through the Microsoft HoloLens 2. The immersive 3D model allows exploration of neuronal organization in the CA1 region. The visualization includes the 3D distribution of neurons, density patterns, and layer-wise organization, and is able to replay neuronal activity from stored spike trains. Whereas standard state-of-the-art Unity tools achieved 10 FPS, HoloNeV performance testing showed a mean frame rate of 97.8 FPS, ensuring a comfortable user experience in mixed reality.


Discussion

We introduce HoloNeV, a mixed-reality tool for visualizing neural network activity using a holographic headset. This system allows researchers to interact with neural networks in 3D while staying aware of their physical surroundings. Key innovations include stereo rendering optimization and fine hand-tracking for direct manipulation. Researchers can customize visualization parameters such as neuron size and density in real time. Although currently limited to representing the soma without axons and dendrites, future developments will address full neuronal morphology, the configuration of neuronal parameters, and real-time data streaming from HPC facilities, paving the way for new insights in neuroscience research.




Acknowledgements
Project IR00011 EBRAINS-Italy - Mission 4, “Istruzione e Ricerca” - Component 2, “Dalla ricerca all impresa” - Line of investment 3.1 of PNRR, Action 3.1.1 NextGeneration EU (CUP B51E22000150006) awarded to P. E. and S. S., the “FeNeL” project, PNRR M4.C2.1.1 – PRIN 2022 – No. 2022JE5SK2 – CUP G53D23000380006 awarded to S. S., Project “Numeracy in Aging (NiA) CUP J53D23017580001 awarded to P. E.
References
1. Dura-Bernal, S., et al. (2018). https://doi.org/10.1101/461137
2. Davison, A. P. (2008). https://doi.org/10.3389/neuro.11.011.2008
3. Gewaltig, M.-O., & Diesmann, M. (2007). https://doi.org/10.4249/scholarpedia.1430
4. Hines, M. (2009). https://doi.org/10.3389/neuro.11.001.2009
5. Gandolfi, D., et al. (2022). https://doi.org/10.1038/s41598-022-18024-y
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P202: Implementation of an SNN-based LLM
Monday July 7, 2025 16:20 - 18:20 CEST
P202 Implementation of an SNN-based LLM

Tomohiro Mitsuhashi*1, Rin Kuriyama1, Tadashi Yamazaki1

1Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan

*Email: m2431154@gl.cc.uec.ac.jp

Introduction

Large language models (LLMs) are indispensable in everyday life and business, yet their training and inference demand an enormous amount of electricity. A major contributor to this consumption is the extensive memory access in artificial neural network (ANN) models. A potential solution is to use neuromorphic hardware, which emulates the dynamics of spiking neural networks (SNNs) [1]. SpikeGPT has been proposed as an SNN-based LLM [2]. However, not all components in SpikeGPT are implemented with spiking neurons. In this study, we aimed to implement a fully spike-based LLM based on the present SpikeGPT.

Methods
SpikeGPT consists of two blocks: Spiking RWKV and Spiking RFFN (Fig. 1A). These blocks consist of a component that performs analog computation and another that converts the results into spike sequences by an SNN. We replaced the former component with an SNN using the method proposed by Stanojevic et al. [3], which uses spike timing information (time-to-first-spike, TFS) (Fig. 1B), where an analog value is represented by the time at which a neuron emits its first spike. Eventually, we developed an SNN-based RWKV and an SNN-based RFFN (Fig. 1A). Moreover, nonlinear processes, including the calculation of an exponential function, were approximated using multi-layer SNNs, enabling the entire processing to be implemented solely with SNNs.
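The TFS code itself is easy to state; a minimal sketch of encoding and decoding an analog value as a first-spike time (the window length is illustrative; the exact mapping of [3] additionally propagates these times through ReLU-equivalent spiking layers):

import numpy as np

T_WIN = 10.0  # coding window (ms), illustrative

def encode_tfs(x):
    # Analog values in [0, 1] -> first-spike times: larger value, earlier spike.
    return T_WIN * (1.0 - np.asarray(x, dtype=float))

def decode_tfs(t):
    # Invert the mapping: spike time -> analog value.
    return 1.0 - np.asarray(t, dtype=float) / T_WIN

x = np.array([0.0, 0.25, 0.9])
t = encode_tfs(x)
print("spike times (ms):", t)
print("recovered values:", decode_tfs(t))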
Results
We completed the implementation of an SNN-based LLM, ensuring that both the RWKV and RFFN blocks are SNNs. Our SNN-based LLM should have generated the same sentences as the original SpikeGPT, but it generated completely broken sentences. We performed a quantitative comparison between analog computation values and approximated ones represented by spike timing, and found discrepancies between them; namely, the nonlinear processes implemented by SNNs did not work well. We then reverted the SNN-based nonlinear processes to the original analog versions and were able to obtain readable sentences, although the sentences were still different (Fig. 1C). Notably, we confirmed that each neuron emitted at most one spike during text generation (Fig. 1D).


Discussion
We implemented an SNN-based LLM that generates sentences. Nonetheless, our SNN-based nonlinear processes need to be improved for better approximation. One possible way is to set the temporal resolution of the SNNs much smaller for finer precision of the analog values represented by TFS. Meanwhile, since each neuron emits at most one spike per propagation, combining our model with neuromorphic hardware could lead to significant energy savings. These advances are expected to address the challenges associated with energy-efficient LLMs.



Figure 1. Overview of our SNN-based LLM and sample results. (A) The architecture of SpikeGPT (left) and our model (right). (B) Schematic of the TFS approach, where the temporal difference between the time parameter and the spike time encodes an analog value. (C) A sample sentence generated by our model. (D) Raster plots for the SNN-based RWKV and SNN-based RFFN during token generation.
Acknowledgements
This study was supported by MEXT/JSPS KAKENHI Grant Numbers JP22H05161, JP22H00460.
References
1. Davies, M., et al. (2021). Advancing neuromorphic computing with Loihi: A survey of results and outlook. Proceedings of the IEEE, 109(5), 911–934. https://doi.org/10.1109/JPROC.2021.3067593

2. Zhu, R.-J., et al. (2024). SpikeGPT: Generative pre-trained language model with spiking neural networks. arXiv preprint. https://arxiv.org/abs/2302.13939

3. Stanojevic, A., et al. (2023). An exact mapping from ReLU networks to spiking neural networks. Neural Networks, 168, 74–88. https://doi.org/10.1016/j.neunet.2023.09.011
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P203: Reciprocity and Hierarchical Organization in the Resting-State Brain: Implications for Efficient Connectivity
Monday July 7, 2025 16:20 - 18:20 CEST
P203 Reciprocity and Hierarchical Organization in the Resting-State Brain: Implications for Efficient Connectivity

Guillermo Montaña-Valverde*1,2, Paula García-Royo2, Wolfram Hinzen1,3, Gustavo Deco2,3

¹ Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, 08018, Spain
² Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, 08018, Spain
³ Institució Catalana de Recerca i Estudis Avançats, ICREA, Barcelona, 08010, Spain

*Email: guillermo.montana@upf.edu


Introduction

While the brain is traditionally considered to have a strong hierarchical organization, our findings demonstrate that its structure is more flattened, facilitating more efficient information flow across the network [1]. Using a large resting-state and task-based fMRI dataset, we show that reciprocity – the tendency of a network to have bidirectional connections – strongly correlates with hierarchical organization. This suggests that the brain’s densely interconnected architecture flattens the hierarchy, facilitating efficient information flow through shorter average path lengths and enhanced small-worldness. These results align with the idea that reciprocity enhances information flow via feedback connectivity [2]. This novel framework lays a foundation for our understanding of whole-brain functional dynamics [3].

Methods
We analyzed open-source fMRI data from the Human Connectome Project (HCP), comprising both resting-state and 7 task-based datasets from 1000 subjects. Generative effective connectivity (GEC) – an extension of classic effective connectivity [4] – was estimated from whole-brain modeling for each subject in the DK80 parcellation [5], providing a directed weighted network [6,7]. Hierarchy was then determined by computing measures of coherence and trophic levels on the GEC (Fig. 1a). Reciprocity, defined as the fraction of total connection strength that is bidirectionally shared between regions, captures the balance between feedforward and feedback interactions [8] (Fig. 1b).
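A minimal sketch of the reciprocity measure as defined here, applied to a random directed weighted matrix standing in for a subject's GEC:

import numpy as np

def weighted_reciprocity(W):
    # Fraction of total connection strength that is bidirectionally
    # shared: overlap min(W_ij, W_ji) summed over pairs / total weight.
    W = np.array(W, dtype=float)
    np.fill_diagonal(W, 0.0)
    return np.minimum(W, W.T).sum() / W.sum()

rng = np.random.default_rng(6)
n = 80                                   # e.g., the DK80 parcellation
W = rng.random((n, n)) * (rng.random((n, n)) < 0.3)
print("reciprocity:", round(weighted_reciprocity(W), 3))

A fully symmetric matrix yields a reciprocity of 1, while a network with no reciprocal connections yields 0.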
Results
We found that the resting-state brain exhibits a high degree of reciprocity (0.93 ± 0.02, Fig. 1c), which shows a strong negative correlation with hierarchical coherence (corr=-0.97, p<0.001, Fig. 1d). Conversely, when interactions are artificially made more asymmetric, the hierarchy becomes more rigid (Fig. 1e). In addition, a more flattened hierarchy was associated with a shorter average path length (corr=0.97, p<0.001, Fig. 1f), a higher average clustering coefficient (corr=-0.95, p<0.001, Fig. 1g), and increased small-worldness (corr=-0.99, p<0.001, Fig. 1h). Furthermore, decreased hierarchical coherence was observed during task performance (Fig. 1i).
Discussion
Overall, our results demonstrate that reciprocity plays a crucial role in shaping the brain's hierarchical organization. The brain's highly reciprocal connectivity facilitates information flow and integration, potentially optimizing cognitive processing both at rest and during task performance. By contrast, a stronger hierarchy reduces flexibility and adaptability, worsening brain connectivity. Future research with this methodology should therefore explore neuropsychiatric disorders, where changes in the hierarchical organization of the brain may underlie altered brain processing, and ultimately test whether targeted interventions that modulate reciprocity can restore optimal hierarchical organization and improve cognitive function.



Figure 1. A. The hierarchy was quantified by measuring directedness based on trophic levels. B. Simplified representation of reciprocity. C. Reciprocity in the HCP resting-state dataset. D. Relation between coherence and reciprocity in HCP resting-state data. E. Hierarchical representations for different reciprocities. F, G and H. Correlations of graph measures with coherence. I. Coherence in 7 tasks compared to resting-state.
Acknowledgements
This study is part of the project I+D+i Generación de Conocimiento PRE2020-095700, funded by MCIN/AEI/10.13039/501100011033.
References

1. DOI: 10.1038/s41583-023-00756-z
2. DOI: 10.1016/j.neuroimage.2020.117479
3. DOI: 10.1038/s44220-024-00298-y
4. DOI: 10.1016/s1053-8119(03)00202-7
5. DOI: 10.1038/s41562-020-01003-6
6. DOI: 10.1016/j.neuron.2014.08.034
7. DOI: 10.1016/j.celrep.2020.108128
8. DOI: 10.1038/srep02729




Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P204: Brain-like networks emerge from distance dependence and preferential attachment
Monday July 7, 2025 16:20 - 18:20 CEST
P204 Brain-like networks emerge from distance dependence and preferential attachment
Aitor Morales-Gregorio*1, Karolína Korvasová1

1Faculty of Mathematics and Physics, Charles University, Prague, Czechia

*Email: aitor.morales-gregorio@matfyz.cuni.cz
Introduction
Neurons in the brain are not randomly connected to each other. Neuronal networks have low density, high local clustering, short path lengths, heavy-tailed weight and degree distributions, and distance-dependent connection probability. These properties enable efficient information processing. However, standard network-generating algorithms cannot produce networks with all of these brain-like properties. Here, we show that distance-dependent connection probability in combination with preferential attachment can generate brain-like networks that match the properties of neuronal networks from six animal species: C. elegans [1], Platynereis [2], Drosophila [3,4], mouse [5,6], marmoset [7], and macaque [8].

Methods
Networks are created by iterative growth, mimicking how neurons would naturally grow. Neurons are randomly positioned inside a sphere; the pairwise distances are computed and passed through an exponential kernel [9], creating the distance-dependent connection probability. An empty network is initialized, and in each iteration a new connection drawn from the distance-dependent probability is added. The iteration stops when the target density is reached.
To achieve heavy-tailed distributions we study preferential attachment, i.e. a higher probability of connection for edges with high weight (weight-preferential) or nodes with high degree (degree-preferential).
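A minimal sketch of this growth procedure is given below, assuming an exponential distance kernel and a degree-preferential bias on the target node; the kernel decay, preferential offset, and other parameter values are illustrative, not the fitted ones.

    import numpy as np

    def grow_network(n=200, target_density=0.05, decay=2.0, pref_eps=1.0, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-1.0, 1.0, (n, 3))
        pos = pos[np.linalg.norm(pos, axis=1) <= 1.0]   # keep neurons inside the sphere
        n = len(pos)
        dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        p_dist = np.exp(-decay * dist)                  # exponential distance kernel [9]
        np.fill_diagonal(p_dist, 0.0)                   # no self-connections here
        A = np.zeros((n, n))
        while A.sum() < target_density * n * (n - 1):
            # degree-preferential attachment: high in-degree targets are favoured
            pref = pref_eps + A.sum(axis=0)
            p = p_dist * pref[None, :]
            p[A > 0] = 0.0                              # skip existing edges
            p = p.ravel() / p.sum()
            A.flat[rng.choice(n * n, p=p)] = 1.0        # add one edge per iteration
        return A, pos

    A, pos = grow_network()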

Results
The neuronal networks of six animals have low density, high local clustering, short global path lengths, and heavy-tailed weight and degree distributions.
We show that distance dependence alone can create small-world networks with high clustering and short path lengths, but fails to produce heavy-tailed weight or degree distributions. Including weight-preferential attachment enables the creation of networks that also have heavy-tailed weight distributions, but not heavy-tailed degree distributions. Finally, we show that degree-preferential attachment together with distance dependence produces brain-like networks that simultaneously have all the mentioned properties and can match the experimentally measured networks of six different animal species.

Discussion
Our algorithm can match the properties of the neuronal networks of six different animals, suggesting these could be general principles of neural network development. It is well-known that neurons at large distances are less likely to be connected, in part because these connections are metabolically more expensive to establish and maintain than short-range ones. The large neuropil branching of some neurons increases the probability of connections with them, which we capture via the degree-preferential mechanism.
In conclusion, distance dependence and preferential attachment are biologically realistic mechanisms that can produce networks closely matching both invertebrate and vertebrate brains.




Acknowledgements
This work received funding from the Programme Johannes Amos Comenius (OP JAK) under the project 'MSCA Fellowships CZ - UK3' (reg. n. CZ.02.01.01/00/22_010/0008220), and from Charles University grant PRIMUS/24/MED/007.
References
[1] Varshney et al (2011) PLoS CB 7:e1001066
[2] Randel et al (2014) eLife 3:e02730
[3] Takemura et al (2013) Nature 500:175-181
[4] Scheffer et al (2020) eLife 9:e57443
[5] MICrONs Consortium et al (2021) bioRxiv 2021.07.28.454025
[6] Gămănuţ et al (2018) Neuron 97(3):698-715
[7] Majka et al (2020) Nature Communications 11:1133
[8] Markov et al (2014) Cerebral Cortex 24(1):17-36
[9] Ercsey-Ravasz et al (2013) Neuron 80(1):184-197
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P205: High-frequency oscillations in primate visual areas: Critical insights into neural population dynamics or mere spike artifacts?
Monday July 7, 2025 16:20 - 18:20 CEST
P205 High-frequency oscillations in primate visual areas: Critical insights into neural population dynamics or mere spike artifacts?
Katarína Studeničová*1, Aitor Morales-Gregorio1, Karolína Korvasová1

1Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic

*Email: katarina.studenicova@matfyz.cuni.cz


Introduction
Short bursts of high-gamma oscillations (80-150 Hz) resembling hippocampal ripples have to date been observed in several cortical areas [1-4]. However, their function and link to memory processes are unclear. The goal of our work is to describe the relationship between these high-gamma bursts, referred to as cortical ripples, and neuronal spiking activity in the same cortical location under different levels of drowsiness.
Methods
We analyze a resting-state dataset from 4 macaque monkeys ([5,6], plus additional data provided by the authors), each implanted with 16 Utah arrays in visual areas V1, V2, V4, and IT. Monkeys sat in a dark room, and the vigilance of the recorded animals changed, ranging from fully alert to drowsy and light sleep. Raw traces were downsampled and filtered to the ripple band (80-150 Hz), and spikes were sorted. Short high-amplitude oscillatory bursts in the ripple band, further referred to as cortical ripples, were detected by standard double-thresholding methods and additionally confirmed by spectral analysis and surrogate methods.
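A compact sketch of such a double-thresholding detector follows, assuming a Hilbert envelope of the band-passed signal; the threshold values are illustrative, and the spectral confirmation and surrogate tests are not reproduced here.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def detect_ripples(lfp, fs, low=80., high=150., thr_detect=5., thr_extend=2.5):
        # Double thresholding: an event must exceed thr_detect SDs of the envelope,
        # and is extended to where the envelope falls back below thr_extend SDs.
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype='band')
        ripple_band = filtfilt(b, a, lfp)
        env = np.abs(hilbert(ripple_band))
        z = (env - env.mean()) / env.std()
        above = z > thr_extend
        events, i = [], 0
        while i < len(z):
            if above[i]:
                j = i
                while j < len(z) and above[j]:
                    j += 1
                if z[i:j].max() > thr_detect:
                    events.append((i, j))          # sample indices of the burst
                i = j
            else:
                i += 1
        return events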

Results
During alert, eyes-open states without strong visual input, the network dynamics are unorganized in both space and time. However, with increasing drowsiness, the network falls into a global upstate-downstate regime. Upstates are strongly visible mainly in V1 and V2, and less organized in V4 and IT, possibly reflecting a different organizational structure of higher cortical areas. In all brain states, cortical ripples are accompanied by spiking activity. In general, spikes are locked to the phase of the ripple band. Most are locked to the trough; however, we also found cells preferring the peaks of the oscillatory signal. We detail these findings further by describing a variety of spiking preferences with respect to the ripple band.

Discussion
To the best of our knowledge, we are the first to uncover the global organization of high-frequency oscillatory activity in the macaque visual areas during resting state, spanning large horizontal distances with intracortical recording precision. We demonstrate the existence of cortical ripples in all the areas covered (previous literature addressed only V1 and V4) and describe the relationship between spikes and cortical ripples with respect to various brain states. We detail our findings with an area-wise description, highlighting crucial differences. This work aims to bridge gaps between various recording techniques by providing a detailed view of the network states underlying high-frequency oscillatory bursts.





Acknowledgements
This work received funding from the Charles University grant PRIMUS/24/MED/007, and from the Programme Johannes Amos Comenius (OP JAK) under the project 'MSCA Fellowships CZ - UK3' (reg. n. CZ.02.01.01/00/22_010/0008220).
References
[1] https://doi.org/10.1523/JNEUROSCI.0742-22.2022
[2] https://doi.org/10.7554/eLife.68401
[3] https://doi.org/10.1093/brain/awae159
[4] https://doi.org/10.1073/pnas.2210698120
[5] https://doi.org/10.1038/s41597-022-01180-1
[6] https://doi.org/10.1016/j.neuron.2024.12.003
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P206: Synaptic Topography Influences Dendritic Integration in Drosophila Looming Responsive Descending Neurons
Monday July 7, 2025 16:20 - 18:20 CEST
P206 Synaptic Topography Influences Dendritic Integration in Drosophila Looming Responsive Descending Neurons

Anthony Moreno-Sanchez*1, Alexander N. Vasserman1, HyoJong Jang2, Bryce W. Hina2, Catherine R. von Reyn1,2, Jessica Ausborn1

1Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States.
2School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States.

*Email: am4946@drexel.edu
Introduction

Synapse organization plays a crucial role in neural computation, affecting dendritic integration and neuronal output [1, 2]. In Drosophila melanogaster, visual projection neurons (VPNs) encode distinct visual features and relay retinotopic information from the lobula and lobula plate to descending neurons (DNs) in the central brain [3]. DNs integrate spatially organized visual information from VPNs to elicit appropriate motor responses [4,5,6]. However, the retinotopic organization of VPN-DN connections and its impact on dendritic integration remain unclear. Using electron microscopy (EM) data, computational modeling, and electrophysiology, we investigated how synaptic topography affects dendritic processing in looming-sensitive DNs.

Methods
We analyzed EM reconstructions of Drosophila VPN-DN circuits from the Full Adult Fly Brain (FAFB) dataset [7], using flywire.ai [8]. We developed multicompartment models of 5 DNs with precise VPN synaptic locations using the FAFB dataset. Using EM VPN morphologies, we estimated the receptive fields of 6 VPN populations [4,9] and analyzed synapse organization on DN dendrites. We experimentally determined the spike initiation zone (SIZ) in our DNs of interest by tagging the endogenous voltage-gated sodium channel para. Passive properties of DNs were determined using whole-cell patch clamp electrophysiology data, by fitting hyperpolarizing experimental current injections. Simulations were performed in the NEURON simulation environment.
Results
VPN synapses formed spatially constrained clusters on DN dendrites but lacked retinotopic organization within the clusters. We found that DN morphology and passive properties filter excitatory postsynaptic potentials (EPSPs) to achieve synaptic democracy, normalizing each EPSP's impact at the SIZ. Simulations suggest that VPN synapses follow a near-random distribution, avoiding tight clusters of synapses from individual neurons and thereby avoiding shunting. This synaptic topography, together with synaptic democracy, maintains a linear relationship between synapse number and depolarization at the SIZ, both when activating individual VPNs and small groups of VPNs.
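The synaptic-democracy result can be illustrated in a few lines: if conductances scale inversely with the passive attenuation from each synapse to the SIZ, every synapse contributes equally and summed depolarization grows linearly with synapse count. The attenuation values below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    attenuation = rng.uniform(0.2, 0.9, size=50)   # synapse-to-SIZ transfer (assumed)
    g = 1.0 / attenuation                          # "democratic" conductance scaling
    epsp_at_siz = g * attenuation                  # each synapse now contributes 1.0
    assert np.allclose(epsp_at_siz, 1.0)
    # depolarization at the SIZ is then linear in the number of active synapses
    print(epsp_at_siz[:10].sum(), epsp_at_siz[:20].sum())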
Discussion
DNs integrate retinotopic feature information from multiple VPN types, each targeting distinct dendritic regions. This organization strategy may enable DNs to selectively process visual features across the fly's visual field for behavior-relevant computations. Our results suggest that DN dendritic architecture and synaptic topography support a quasi-linear integration model, in which synaptic democracy ensures consistent encoding of stimulus location via synapse numbers. These findings offer insights into synaptic organization principles and their role in neural circuit function, highlighting the absence of retinotopic organization as a means to prevent membrane shunting.



Acknowledgements
We thank Arthur Zhao for help with the receptive field mapping, James M. Jeanne for help with the creation of dendrograms, and Thomas A. Ravenscroft for providing us with para-GFSTF tools for SIZ labeling. This study was supported in part by the National Institutes of Health (NINDS R01NS118562 to J.A. and C.R.v.R.), and the National Science Foundation (grant no. IOS-1921065 to C.R.v.R.).
References
1. doi:10.1038/s41583-020-0301-7
2. doi:10.1126/science.1189664
3. doi:10.7554/eLife.21022
4. doi:10.1038/s41586-023-05930-y
5. doi:10.1038/nn.3741
6. doi:10.1016/j.cub.2008.07.094
7. doi:10.1016/j.cell.2018.06.019
8. doi:10.1038/s41592-021-01330-0
9. doi:10.7554/eLife.57685

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P207: From evolution to creation of spiking neural networks using graph-based rules
Monday July 7, 2025 16:20 - 18:20 CEST
P207 From evolution to creation of spiking neural networks using graph-based rules

Yaqoob Muhammad1, Emil Dmitruk1, Volker Steuber1, Shabnam Kadir*1

1Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom


*Email: s.kadir2@herts.ac.uk
Introduction

Predicting the dynamics of a neural network from its synaptic weights, and vice versa, is a difficult problem. There has been some success with Hopfield networks [1] and with combinatorial threshold-linear networks (CTLNs), a rate model [2], but not, to our knowledge, with spiking neural networks (SNNs). Usually, weights are obtained by a process of training, e.g. using STDP, surrogate gradient descent, or evolutionary algorithms.

In contrast, here we look at small spiking neural networks as initiated in [3] and formulate rules for the direct selection of both network topology and synaptic weights. This can reduce or eliminate the need for training or using genetic algorithms to derive weights.
Methods
We illustrate our approach on networks of minimal size consisting of Adaptive Exponential Integrate-and-Fire (AdEx) [6] neurons, where the aim is for the network to recognise an input pattern of length k consisting of distinct letters, following on from the work in [3]. The network must accept only this single pattern out of the k^k possible patterns. The network has k interneurons and one output neuron.
In our initial experiments [4], a genetic algorithm was used to evolve both the topology and connection weights of SNNs encoded as linear genomes [5]. In [4], for k = 3, 33 out of 100 independent evolutionary runs of 1000 generations each yielded perfect recognizers for a pattern of three signals.
Results
For k > 6 we used patterned matrices of the form seen in Figure 1. We have a tree with components consisting of leaves comprised of several nodes (matrix entries) that are all positive or all negative, and a few key components that must take either maximal or minimal weights.
Using our new method we obtained networks that performed perfectly for up to k = 10 (at the time of submission of this abstract). There appears to be no obstruction to the approach working for arbitrarily large k.
With randomly chosen weights and k = 6, evolution using a genetic algorithm took 500 generations before a perfect recogniser was found. In contrast, our approach, using both handcrafted topologies and weights, required no or far fewer generations.
Discussion
Our results are still very much conjectures based on observation, but they indicate that for SNNs there may be graph-based rules relating synaptic weights to function. The weights exhibit a relationship that is highly deterministic. Unlike in previous approaches, we do not require any restrictions on the form of the connectivity matrix, e.g. we do not need it to be symmetric as is required for stable fixed points of Hopfield networks, and we allow both excitatory and inhibitory connections, as well as autapses.
This is a first step towards developing a theory for modularity for SNNs, i.e. enabling the glueing of such networks whilst preserving properties, analogous to what was achieved for a variety of attractor types for CTLNs [2].



Figure 1. A) Sample connectivity matrix pattern for k = 10. The weights in the connectivity matrix have been ordered, with negative and positive weights being given by negative and positive integers respectively. B) Weights distribution (indexed by ordering from -25 to 30) in 10 matrices recognising the same pattern of length 10. C) Network activity for a sequence ABCDEFGHIJ - 10 interneurons and 1 output n
Acknowledgements
This research has received no external funding.
References
[1] https://doi.org/10.1073/pnas.79.8.2554
[2] https://doi.org/10.1137/22M1541666
[3] https://doi.org/10.1101/2023.11.16.567361
[4] https://doi.org/10.1162/isal_a_00121
[5] https://doi.org/10.1007/978-3-319-06944-9_10
[6] https://doi.org/10.1152/jn.00686.2005
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P208: Integration of a Purkinje cell model including morphological details with a bidirectional synaptic plasticity model
Monday July 7, 2025 16:20 - 18:20 CEST
P208 Integration of a Purkinje cell model including morphological details with a bidirectional synaptic plasticity model

Takeki Mukaida*1, Kaaya Akira1, Tadashi Yamazaki1

1Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan


*Email: takeki.mukaida@gmail.com
Introduction

Most neurons have large, spatially extended dendrites on which various active ion channels are expressed. On the dendrites, synapses undergo plasticity depending on the postsynaptic membrane potential and calcium ion concentration. However, how these quantities, which can diffuse across the dendrites, affect plasticity remains unresolved. To address this question, we integrated a multi-compartment Purkinje cell model that includes morphological details [2] with a biologically accurate plasticity model [1]. Then, we performed a numerical simulation to examine the relationship between the spatial location of synapses and the direction of their plastic change.

Methods
We used a multi-compartment Purkinje cell (PC) model, which comprises 1600 compartments classified into four types (soma, main dendrite, smooth dendrite, and spiny dendrite) [2]. To this model, we added compartments that represent spines. On each spine, we implemented a bidirectional synaptic plasticity model, composed of 18 differential equations and based on calcium ion concentration [1]. Sole activation of parallel fibers (PFs) increases the concentration slightly, resulting in long-term potentiation (LTP), whereas paired activation with a climbing fiber (CF) increases it greatly, resulting in long-term depression (LTD).
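The direction of plasticity in the full model emerges from the 18 coupled differential equations; the toy rule below captures only the calcium-threshold logic just described, with thresholds and learning rate invented for illustration.

    def weight_update(ca, w, theta_ltp=0.2, theta_ltd=0.6, lr=0.01):
        # Cerebellar sign convention: modest Ca2+ (PF alone) -> LTP,
        # large Ca2+ (paired PF + CF) -> LTD. Thresholds are illustrative.
        if ca > theta_ltd:
            return w - lr * w            # LTD
        if ca > theta_ltp:
            return w + lr * (1.0 - w)    # LTP
        return w                         # no change

    w = 0.5
    print(weight_update(0.3, w), weight_update(0.8, w))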
Results
The maximum amplitude of excitatory postsynaptic currents (EPSCs) in each spine was investigated when PF and CF stimulation were applied. Spine compartments were attached to all 9 compartments of the main dendrites and to 165 randomly selected compartments of the smooth and spiny dendrites. PF stimulation was applied to all spines as 8 pulses at 150 Hz every second, whereas CF stimulation was applied only to the main dendrites at one pulse per second. After 300 seconds of stimulation, the maximum amplitude of EPSCs in each spine was measured. We observed that the maximum amplitude was lower than the initial value in spines close to the main dendrite but exceeded the initial value in spines far from the main dendrite (Fig. 1).
Discussion
The present result suggests that the direction of plasticity depends on the spatial location of the dendrites. Thus, the spatial location of spines that underwent either LTD or LTP implies the formation of clusters of spines that have the same direction of the plastic change. This may contribute to enhance the learning capability of a single neuron by harnessing the spatial distinction of the spines distributed across dendrites. Therefore, we will investigate whether neurons can use spatial shapes to realize complex learning such as pattern recognition and separation, while we will also incorporate experimental results to further enhance the learning capability.




Figure 1. The maximum amplitude of EPSCs in each spine after 300 seconds of stimulation.
Acknowledgements
This study was supported by MEXT KAKENHI Grant Number JP22H05161.
References
1. Pinto, T. M., Schilstra, M. J., Roque, A. C., & Steuber, V. (2020). Binding of filamentous actin to CaMKII as potential regulation mechanism of bidirectional synaptic plasticity by β CaMKII in cerebellar Purkinje cells. Scientific Reports, 10(1), 9019. https://doi.org/10.1038/s41598-020-65870-9
2. De Schutter, E., & Bower, J. M. (1994). An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. Journal of Neurophysiology, 71(1), 375–400. https://doi.org/10.1152/jn.1994.71.1.375


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P209: Action Potential Propagation in Branching Axons
Monday July 7, 2025 16:20 - 18:20 CEST
P209 Action Potential Propagation in Branching Axons

Erin Munro Krull*1, Lucas Swanson1, Laura Zitlow1



1Ripon College, Ripon, WI, US
*Email:munrokrulle@ripon.edu



Introduction. Action potentials (APs) typically start near the soma and travel down the axon. Axons are not simple cables, and their morphology depends on the type of cell and on whether there is axonal sprouting, generally due to trauma [2]. Moreover, AP propagation down the entire axon is not always guaranteed and is known to depend on morphology [1, 3]. Predicting AP propagation in axons is a long-standing problem, where current theory can only predict propagation if axons are symmetric [4, 5].



Methods. We use NEURON simulations to model AP propagation from an axon collateral to the end of the main axon. We vary the distance of the stimulated collateral to the initial segment (IS), and the lengths and distances of possible extra collaterals off the main axon. For each simulation, we find the threshold sodium conductance for AP propagation (gNaT).


Results. We show that the gNaT for axons with complex morphologies may be estimated linearly from the gNaT for simpler axons. For example, if we add an extra collateral, then the gNaT from the stimulated collateral goes up by a fixed amount, dgNaT. If we then estimate the effect of two extra collaterals by adding the dgNaT for each collateral individually, the relative error is less than 0.7% (Fig. 1).
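The additive estimate can be written out directly. The numbers below are hypothetical stand-ins for values that, in the study, come from NEURON threshold searches.

    # Linear estimate of threshold sodium conductance for a complex morphology
    # from simpler ones; all values are made up for illustration.
    g_base = 0.12           # gNaT with no extra collaterals (hypothetical, S/cm2)
    dg_branch1 = 0.015      # increase caused by extra collateral 1 alone
    dg_branch2 = 0.011      # increase caused by extra collateral 2 alone

    g_est = g_base + dg_branch1 + dg_branch2      # additive estimate
    g_sim = 0.1452                                # hypothetical full-simulation value
    rel_error = abs(g_est - g_sim) / g_sim
    print(f"estimated gNaT = {g_est:.4f}, relative error = {rel_error:.3%}")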


Discussion. This implies that we may predict whether an AP will propagate through a branching axon by simply adding the gNaT needed to propagate through a given path. Predictions for AP propagation using gNaT may give insight into the sodium conductance of an experimental cell, as well as into which cells may more easily propagate APs based on morphology alone. This work also gives insight into linearly decomposing results for a nonlinear PDE via a parameter.



Figure 1. Left) Calculated gNaT where the model has 2 extra branches with varying distance around the stimulated collateral’s location at 2 lambda from the IS. Right) Difference between calculated gNaT and estimated gNaT using data with no branches and 1 extra branch.
Acknowledgements
This research was supported by the NSF MSPRF, Beloit College Sanger Scholars, Beloit College Summer Scholars, and the Ripon College SOAR program.

References


https://doi.org/10.1523/JNEUROSCI.0891-17.2017
https://doi.org/10.1113/jphysiol.2002.037812
https://doi.org/10.1038/s41598-017-09184-3
https://doi.org/10.1017/CBO9780511623271


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P210: Biochemically detailed modelling of cortical synaptic plasticity: the effects of timing of neuromodulatory inputs on LTP/LTD
Monday July 7, 2025 16:20 - 18:20 CEST
P210 Biochemically detailed modelling of cortical synaptic plasticity: the effects of timing of neuromodulatory inputs on LTP/LTD

Tuomo Mäki-Marttunen*1, Verónica Mäki-Marttunen2

1Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
2NORMENT, Division of Mental Health and Addiction, Oslo University Hospital, Institute of Clinical Medicine, University of Oslo, Norway

*Email: tuomo.maki-marttunen@tuni.fi
Introduction

Synaptic plasticity is a time-sensitive phenomenon. The timing of action potentials in pre- and postsynaptic neurons is well known to influence plasticity outcomes through Hebbian spike-timing-dependent plasticity mechanisms. It is also known that the neuromodulatory state of the neuron strongly affects plasticity [1]. However, it is not fully understood how the timing of neuromodulatory inputs to the cells affects plasticity outcomes [2]. Neuromodulatory activity is important for learning and memory consolidation [3], and thus, understanding how neuromodulatory inputs interact with neuronal activity will allow us to gain a deeper view of how brain plasticity is regulated at a higher level [4-6].
Methods
Here, we use a multi-pathway model of synaptic plasticity in the cortex [7-8] to study the interaction between the timing of neuromodulatory and Ca2+ inputs to the postsynaptic spine in shaping synaptic plasticity. We investigate how different forms of plasticity are affected by the exact timing of neuromodulatory inputs from the locus coeruleus, the main source of norepinephrine (NE) in the mammalian brain, relative to high-frequency Ca2+ inputs.
Results
We show that when Ca2+ inputs are followed by NE inputs, strong LTP is observed, whereas LTD occurs when Ca2+ inputs follow NE inputs. This effect is caused by a difference in the amount of cAMP produced and PKA activated between the two stimulation protocols: the Ca2+ -> NE protocol induces strong PKA activation and GluR1 exocytosis, while the NE -> Ca2+ protocol yields much weaker PKA activation.
Discussion
Animal studies suggest that the timing of fast activation of neuromodulatory centers is important [9] and may play a role in the oscillatory processes that underlie memory consolidation during sleep [10]. In addition, recent studies suggest that neuromodulatory activity at slower time scales during sleep presents a timed relation with oscillatory events underlying memory consolidation [11-12]. Our results suggest that a timely activation of locus coeruleus within a wave of brain activity can be crucial for the plasticity outcome, which can have important implications for our understanding of learning and memory consolidation.



Acknowledgements
Funding: Academy of Finland (330776, 358049). The authors also wish to acknowledge CSC Finland (project 2003397) for computational resources.
References

[1] https://doi.org/10.1016/j.neuron.2007.08.013
[2] https://doi.org/10.1038/s41583-020-0360-9
[3] https://doi.org/10.1016/j.neuron.2023.03.005
[4] https://doi.org/10.3389/fnsyn.2016.00038
[5] https://doi.org/10.3389/fncom.2018.00049
[6] https://doi.org/10.3389/fncir.2018.00053
[7] https://doi.org/10.7554/eLife.55714
[8] https://doi.org/10.1073/pnas.231251112
[9] https://doi.org/10.1016/j.conb.2015.07.004
[10] https://doi.org/10.1093/cercor/bhr121
[11] https://doi.org/10.1038/s41593-022-01102-9
[12] https://doi.org/10.1016/j.cub.2021.09.041
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P211: Modeling Burst Variability in Beta-Gamma Oscillations Through Layer-Specific Inhibition
Monday July 7, 2025 16:20 - 18:20 CEST
P211 Modeling Burst Variability in Beta-Gamma Oscillations Through Layer-Specific Inhibition

Manoj Kumar Nandi*1,2, Farzin Tahvili1,2, Clément Gossi-Denjean1,2, Emmanuel Procyk1,2, Charlie Wilson1,2, Matteo di Volo1,2


1Université Claude Bernard Lyon 1, Lyon, Rhône-Alpes, France
2INSERM U1208 Institut Cellule Souche et Cerveau, Bron, France

*Email: manoj.phy09@gmail.com, manoj-kumar.nandi@univ-lyon1.fr

Introduction: Cognitive functions rely on collective neuronal oscillations, captured by EEG/LFP. Beta (13-30 Hz) and gamma (30-100 Hz) oscillations are linked to cognition [1]. These oscillations occur in short bursts with variable frequencies, challenging trial averages and simplified models. In Ref. [2] we showed that spiking and neural mass models reproduce gamma bursts but not their variability. In a recent study using the adaptive exponential integrate-and-fire (AdEx) model with different proportions of somatostatin (SOM) and parvalbumin (PV) interneurons, we showed that SOM/PV density affects oscillation frequencies [3]. Experimental studies show that PV/SOM variability exists across layers [4]. Using AdEx, we model layer-specific inhibitory variability to explain burst frequency/power variability.

Methods: We analyze experimental LFP data using time-frequency spectrogram analysis to identify bursts, defined as oscillatory events lasting at least two cycles with power exceeding six times the median at that frequency. We extract burst features, including peak frequency, peak power, mean power, duration, frequency span, time of peak power, and burst size. Machine learning methods are applied to assess how these features relate to cognitive processes. We use a computational spiking network model based on the adaptive exponential integrate-and-fire (AdEx) model, incorporating layer-specific variability in inhibitory populations. This allows us to simulate burst dynamics observed in experimental data and explore how different inhibitory neuron densities influence oscillatory behavior.
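A minimal sketch of the burst criterion follows, assuming a SciPy spectrogram; window lengths and variable names are our choices, and the feature extraction and machine-learning steps are omitted.

    import numpy as np
    from scipy.signal import spectrogram

    def extract_bursts(lfp, fs, power_factor=6.0, min_cycles=2):
        # Burst criterion: time-frequency power exceeding power_factor times the
        # median at that frequency, sustained for at least min_cycles cycles.
        f, t, Sxx = spectrogram(lfp, fs=fs, nperseg=int(0.25 * fs),
                                noverlap=int(0.20 * fs))
        thresh = power_factor * np.median(Sxx, axis=1, keepdims=True)
        supra = Sxx > thresh
        dt = t[1] - t[0]
        bursts = []
        for fi, freq in enumerate(f):
            if freq == 0:
                continue
            run = 0
            for ti in range(len(t)):
                run = run + 1 if supra[fi, ti] else 0
                if run * dt >= min_cycles / freq:     # duration of min_cycles cycles
                    bursts.append((freq, t[ti]))
        return bursts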
Results: From the experimental signal, we first calculate the averaged beta and gamma power in the lateral prefrontal cortex (LPFC) across layers. As shown by Ref. [5], we also observed a crossover of powers across layers, where beta power dominates in deep layers and gamma power dominates in superficial layers. The extracted burst power follows this trend, validating the burst extraction process. Using our model, we replicate this behavior, demonstrating the role of varying inhibitory neuron densities in different cortical layers.
Discussion: Our model can exhibit burst dynamics across the beta and gamma bands as observed in experimental data. Introducing distinct inhibitory populations (SOM, PV) predicts a cortical hierarchy where increased SOM/PV densities lower oscillation frequencies. Layer-wise modeling reveals burst-like features resembling experimental data. These findings highlight the importance of inhibitory diversity in shaping oscillatory dynamics and suggest that layer-specific variability plays a key role in modulating neural activity across frequency bands.





Acknowledgements
This work is supported by the French Ministry of Higher Education (Ministére de l’Enseignement Supérieur) and the project LABEX CORTEX (ANR-11-LABX-0042) of Université Claude Bernard Lyon 1 operated by the ANR.
References
[1] https://doi.org/10.1016/j.neuron.2016.02.028
[2] https://doi.org/10.3389/fncom.2024.1422159
[3] https://doi.org/10.1101/2025.02.23.639719
[4] https://doi.org/10.1038/nn.3446
[5] https://doi.org/10.1038/s41593-023-01554-7


Speakers
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P212: Delayed feedback and the precision of a neural oscillator in weakly electric fish
Monday July 7, 2025 16:20 - 18:20 CEST
P212 Delayed feedback and the precision of a neural oscillator in weakly electric fish


Parisa Nazemi*1,2, John Lewis1,2

¹ Department of Biology, University of Ottawa, Ottawa, Canada
² Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada

*Email: pnaze017@uottawa.ca


Introduction


Precision and reliability of neural oscillations are critical for many brain functions. Among all known biological oscillators, the electric organ discharge (EOD) in wave-type electric fish is the most precise, with sub-microsecond variation in cycle periods and a coefficient of variation of CV ~ 10⁻⁴ [1]. The timing of the EOD is set by a medullary pacemaker network comprising 150 neurons with weak electrical coupling. How this pacemaker network achieves such high precision is not clear. One hypothesis is that pacemaker activity is regularized by electrical feedback from the EOD itself.
Methods
To investigate this, we use a computational model of a pacemaker neuron [2] with a delayed auto-feedback current stimulus. The stimulus waveform was chosen to mimic the electric field effects of the EOD. We also use a simple pulse stimulus for comparison.
Results
Our results show that feedback either increases or decreases the CV of the period, depending on the phase of the delay: some delays led to low CV (regular oscillations) and others resulted in high CV (variable oscillations), corresponding to distinct regions of the phase response curve (PRC), with a clear relationship between CV and PRC slope (Fig. 1). Specifically, the phases associated with the lowest CV (φ_L) and highest CV (φ_H) are near the points where the PRC crosses 1 with positive and negative slope, respectively. We also tested other neural models [3, 4] with different PRCs and found that, as long as the PRC was type II, the results were similar.
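The CV-PRC relationship can be illustrated with a toy phase map of our own construction (not the authors' conductance-based model): a noisy oscillator receives its own pulse back after a delay longer than one period, and the slope of a type-II PRC at the arrival phase determines how strongly cycle-length fluctuations are fed back.

    import numpy as np

    def period_cv(d, n=20000, eps=0.14, noise=1e-3, seed=3):
        # Toy delayed-feedback map: the pulse emitted at spike k arrives d intrinsic
        # periods later (d > 1), perturbing the next cycle via a type-II PRC.
        rng = np.random.default_rng(seed)
        prc = lambda phi: np.sin(2 * np.pi * phi)
        t = [0.0, 1.0]
        for _ in range(n):
            phi = (t[-2] + d - t[-1]) % 1.0     # arrival phase of the delayed pulse
            T = 1.0 - eps * prc(phi) + noise * rng.standard_normal()
            t.append(t[-1] + T)
        periods = np.diff(t)[100:]
        return periods.std() / periods.mean()

    # delays landing on flat vs. steep parts of the PRC give different CVs
    for d in (1.0, 1.25, 1.5):
        print(d, period_cv(d))

In this map the period fluctuations follow an AR(1) process whose gain is the PRC slope at the arrival phase, so flat regions of the PRC filter the noise while steep regions amplify it; the full model's asymmetry between positive- and negative-slope crossings is beyond this sketch.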
Discussion

These findings provide insight into how time-delayed feedback influences the regularity and sensitivity of neural oscillations. A positive PRC slope confers greater stability and promotes regularity under repeated fixed-delay stimulation. This mechanism could explain how the pacemaker network in weakly electric fish maintains exceptional regularity. More broadly, our findings suggest that feedback-driven stabilization may be a general principle for ensuring precise timing in biological oscillators.



Figure 1. Figure 1. Relationship between the coefficient of variation (CV) of periods with the phase response curve (PRC). A: normalized CV of periods. B: PRC. φ_L and φ_H mark the intersections of the PRC with the baseline at 1, where the slopes are positive and negative, respectively.
Acknowledgements
Supported by an NSERC Discovery Grant to Dr. John Lewis
References

1. https://doi.org/10.1073/pnas.95.8.4684
2. https://doi.org/10.1038/s41598-020-73566-3
3. https://doi.org/10.1523/JNEUROSCI.2715-06.2007
4. https://doi.org/10.7551/mitpress/2526.001.0001


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P213: logLIRA: a novel algorithm for intracortical microstimulation artifacts suppression
Monday July 7, 2025 16:20 - 18:20 CEST
P213 logLIRA: a novel algorithm for intracortical microstimulation artifacts suppression

Francesco Negri*1, David J. Guggenmos2, Federico Barban1,3

1Department of Informatics, Bioengineering, Robotics, System Engineering (DIBRIS), University of Genova, Genova, Italy

2Department of Rehabilitation Medicine and the Landon Center on Aging, University of Kansas Medical Center, Kansas City, KS, United States

3IRCSS Ospedale Policlinico San Martino, Genova, Italy


*Email: francesco.negri@edu.unige.it


Introduction
Intracortical microstimulation is a key tool for studying neuropathologies and, ultimately, for developing novel therapies [1, 2]. The analysis of short-latency evoked activity is essential to understand cortical reorganization driven by targeted electrical pulses [1, 3]. However, large voltage fluctuations known as stimulation artifacts hinder the recording and analysis of neural responses [4-6]. Existing rejection methods struggle with spatially and temporally highly variable stimulus artifacts or rely on restrictive assumptions (e.g., absence of signal saturation) [5-8]. We propose a novel algorithm based on piecewise linear interpolation of logarithmically distributed points, alongside a framework for generating a semisynthetic benchmarking dataset.

Methods
Our method, logLIRA, begins with a 1 ms blanking interval, dynamically extended to the end of signal saturation when present. Interpolation points are then sampled logarithmically, ensuring denser sampling where the signal changes rapidly. Piecewise linear interpolation estimates the artifact, which is then subtracted. Any remaining secondary artifacts are mitigated by clustering the first 2 ms of the recovered signals across trials, then averaging and subtracting highly time-locked components. Finally, trial discontinuities are adjusted, and the same spike detection is applied to both ground-truth and cleaned data for comparison.
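The core interpolation step can be sketched as follows; this is our reading of the abstract (the published algorithm additionally handles saturation-dependent blanking and the clustering-based secondary-artifact removal), and function and parameter names are ours.

    import numpy as np

    def loglira_core(trial, fs, stim_idx, blank_ms=1.0, n_points=20):
        # trial: 1-D float array of one stimulation trial.
        # 1 ms blanking, then logarithmically spaced interpolation points:
        # densest right after the stimulus, where the artifact decays fastest.
        blank = int(blank_ms * fs / 1000)
        start = stim_idx + blank
        span = len(trial) - start
        pts = start + np.unique(np.logspace(0, np.log10(span - 1), n_points).astype(int))
        artifact = np.interp(np.arange(start, len(trial)), pts, trial[pts])
        cleaned = trial.copy()
        cleaned[stim_idx:start] = 0.0              # blanked interval
        cleaned[start:] -= artifact                # subtract piecewise-linear estimate
        return cleaned

    # usage: cleaned = loglira_core(raw_trial, fs=30000, stim_idx=1500)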



Results
We evaluated logLIRA against three stimulus artifact rejection algorithms (dynamic averaging [5], global polynomial fitting [10], and SALPA [4]) using a semisynthetic dataset as ground truth. Root-mean-square error and cross-correlation at zero lag were calculated for varying mean firing and artifact rates. SALPA and logLIRA outperformed their competitors, excelling on both metrics (Fig. 1A). Notably, logLIRA significantly reduced the blanking interval duration (Fig. 1B), enabling better recovery of short-latency evoked responses while controlling secondary artifacts and thus false positives. Though not fully evident in the semisynthetic dataset, which lacks direct stimulus-spike correlation, this advantage is obvious in real data (Fig. 1C).



Discussion
With this work, we introduce a reliable and effective method for the rejection of stimulus artifacts, highlighting the importance of handling secondary artifacts that emerge from a reduced blanking interval or from poor suppression due to numerous factors, including signal saturation. A trustworthy recovery of short-latency evoked activity is poised to greatly benefit neuroscience research: logLIRA could improve the estimation of mesoscale effective connectivity by means of the SEEC method [10], aiding the understanding of stimulation-driven functional reorganization of the cortex [1, 3] and ultimately enhancing the effectiveness of neuroprosthetic systems aimed at treating neuropathologies, improving the quality of life of millions of patients [1-3, 11].






Figure 1. Performance comparison of stimulus artifact rejection algorithms on both semisynthetic and real data. A. Cross-correlation at zero lag for different values of mean artifact rate. B. Blanking intervals distribution for logLIRA and SALPA in the benchmark dataset. C. Example of recovered short-latency evoked activity from a real signal. The red vertical bars depict the 1 ms blanking interval.
Acknowledgements
Work supported by #NextGenerationEU (NGEU) and funded by the Italian Ministry of University and Research (MUR), National Recovery and Resilience Plan (PNRR), project MNESYS (PE0000006) - (DN. 1553 11.10.2022).
References
1. https://doi.org/10.1073/pnas.1316885110
2. https://doi.org/10.3389/fnins.2024.1363128
3. https://doi.org/10.1038/nature05226
4. https://doi.org/10.1016/S0165-0270(02)00149-8
5. https://doi.org/10.1016/j.jneumeth.2010.06.005
6. https://doi.org/10.1016/j.jneumeth.2024.110169
7. https://doi.org/10.1371/journal.pcbi.1005842
8. https://doi.org/10.1088/1741-2552/aaa365
9. https://doi.org/10.1088/1741-2552/ab7a4f
10. https://doi.org/10.1016/j.jneumeth.2022.109767
11. https://doi.org/10.3390/brainsci12111578
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P214: A Canonical Microcircuit of Predictive Coding Under Efficient Coding Principles
Monday July 7, 2025 16:20 - 18:20 CEST
P214 A Canonical Microcircuit of Predictive Coding Under Efficient Coding Principles

Elnaz Nemati*1, Catherine E. Davey1, Hamish Meffin1,2, Anthony N. Burkitt1,2

1Department of Biomedical Engineering, The University of Melbourne, Victoria, Australia.
2Graeme Clark Institute, The University of Melbourne, Victoria, Australia.

*Email: fnemati@student.unimelb.edu.au

Introduction

Predictive coding describes how the brain integrates sensory inputs with expectations by minimizing prediction errors [1]. Studies show increased neural activity in cortical L2/3 during sensory mismatches [2], offering insights into disorders like autism [3] and perceptual phenomena such as visual illusions [4]. Canonical microcircuit models [5, 6] advance understanding but often overlook spiking dynamics, detailed inhibitory mechanisms, and adherence to Dale's law. They also neglect the distinct roles of cortical layers, especially L4 and L5/6 [7]. The Deneve framework [8] provides another perspective, modeling neurons as decoders in which a spike is triggered when the membrane potential, representing a reconstruction error, exceeds a threshold.

Methods
This study extends Deneve's predictive coding framework [8] by assigning Gabor receptive fields to layer 4 neurons, creating a V1-like, biologically inspired feature extractor. It introduces two-compartment neurons in layer 2/3 for prediction-error signaling within a balanced E/I network. Our hierarchical model mirrors canonical circuits, using spiking neurons and simplified inhibitory populations, parvalbumin (PV) and somatostatin (SOM), inspired by Hertäg and Clopath [7]. Layer 5/6 contains similar neurons, generating predictions balanced by these populations (Fig. 1a,b). Employing spiking neurons with leaky integrate-and-fire dynamics, the model processes whitened images in ON/OFF channels, as in experimentally observed LGN responses.
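The layer-4 front end amounts to a Gabor filter bank; a standard construction is sketched below, with illustrative size, spatial frequency, and bandwidth values (not the authors' parameters).

    import numpy as np

    def gabor(size=16, theta=0.0, phase=0.0, sf=0.15, sigma=3.0):
        # Standard Gabor receptive field: Gaussian envelope times a sinusoidal
        # carrier at orientation theta; parameter values are illustrative.
        y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * sf * xr + phase)
        return envelope * carrier

    # a small bank spanning orientations and phases, as a V1-like feature extractor
    bank = [gabor(theta=th, phase=ph)
            for th in np.linspace(0, np.pi, 8, endpoint=False)
            for ph in (0, np.pi / 2)]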
Results
The model yields L4 neurons displaying E/I balance (Fig. 1c) and orientation and phase selectivity (Fig. 1d,e), thereby demonstrating biologically realistic V1 feature extraction. Layer 2/3 neurons robustly signal prediction errors across matched (FF=FB), mismatched (FF≠FB), feedforward-only (FF>FB), and feedback-only (FB>FF) conditions. Neuronal responses matched experimental evidence: matched inputs minimized activity, while mismatched inputs elicited strong prediction-error signaling (Fig. 1f). Critically, layer 5/6 neurons effectively integrated prediction errors from layer 2/3, significantly reducing sensory reconstruction errors and validating their predictive coding function.
Discussion
The model proposes that predictive coding effectively describes cortical function through specific feedback interactions within canonical cortical circuits. It highlights the essential roles played by distinct neuronal compartments and inhibitory interneuron populations, specifically PV and SOM neurons, in modulating the E/I balance. The close alignment of theoretical predictions with experimental observations supports the model's validity and enhances our understanding of cortical dynamics. Additionally, the model provides a robust foundation for future research in perceptual neuroscience, the development of neuromorphic systems, and the exploration of clinical interventions for disorders involving disrupted predictive coding mechanisms.




Figure 1. Fig 1: (a) Predictive coding microcircuit representation. (b) Detailed circuitry within each layer and connectivity. (c) Display of excitatory, inhibitory, and net currents showing balanced currents (d) Orientation Bias Index and (e) Phase Bias Index of Layer 4 excitatory populations. (f) Spike responses in Layer 2/3 to various feedforward and feedback inputs.
Acknowledgements
-
References
1. https://doi.org/10.1038/4580
2. https://doi.org/10.1016/j.neuron.2020.09.024
3. https://doi.org/10.1152/jn.00543.2015
4. https://doi.org/10.1016/j.neunet.2021.08.024
5. https://doi.org/10.1016/j.neuron.2012.10.038
6. https://doi.org/10.1016/j.neuron.2018.10.003
7. https://doi.org/10.1073/pnas.2115699119
8. https://doi.org/10.1038/nn.4243
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P215: Superior Temporal Interference Stimulation Targeting using Surrogate-based Multi-Goal Optimization
Monday July 7, 2025 16:20 - 18:20 CEST
P215 Superior Temporal Interference Stimulation Targeting using Surrogate-based Multi-Goal Optimization

Esra Neufeld*1, Cedric Bujard1, Melanie Steiner1, Fariba Karimi1,2, Niels Kuster1,2
1IT’IS Foundation, Zürich, Switzerland

2Swiss Federal Institute of Technology (ETH Zurich), Zürich, Switzerland

*Email: neufeld@itis.swiss

Introduction

Temporal Interference (TI) stimulation, an innovative form of transcranial electrical stimulation [1], uses multiple kHz currents with frequency offsets in the brain's physiological range to steerably and selectively stimulate deep targets. However, the complex and heterogeneous head environment, along with important inter-subject variability, makes it challenging to identify suitable stimulation parameters. An easy-to-use TI Planning (TIP) application was published [2] to facilitate study design and stimulation personalization. However, due to computational limitations, brute-force exploration of the full parameter space was not feasible, requiring users to impose pre-constraints. This often leads to suboptimal settings and makes the tool less accessible for beginners.

Methods
TIP generates detailed head models from T1-weighted MRI data [3], co-registers the ICBM152 atlas [4] for target region identification, assigns DTI-based anisotropic conductivity maps [5], places electrodes according to the 10-10 system, and performs EM simulations to establish a full E-field basis. Surrogate-based optimization (SBO) [6] combines an iteratively refined Gaussian-process (GP) surrogate with a multi-objective genetic algorithm (MOGA) [7] to identify the front of Pareto-optimal conditions (electrode locations and currents) with regard to the goals of 1) maximizing stimulation strength and 2) selectivity, while 3) avoiding collateral stimulation.
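The Pareto-optimality criterion at the heart of the MOGA is simple to state in code. The snippet below extracts non-dominated samples for the three goals named above, with random stand-in objective values; the GP surrogate and genetic search are omitted.

    import numpy as np

    def pareto_mask(F):
        # F: (n_samples, n_objectives), all objectives oriented so larger is better.
        # Returns True for non-dominated samples (the Pareto front).
        n = len(F)
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            dominates_i = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
            if dominates_i.any():
                mask[i] = False
        return mask

    # toy example: stimulation strength, selectivity, negated collateral stimulation
    rng = np.random.default_rng(0)
    F = rng.random((500, 3))
    front = F[pareto_mask(F)]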


Results
Based on the identified Pareto front, users can interactively weight the three conflicting goals and compare configurations of comparable performance based on quantified quality metrics and visualized distributions. The iterative SBO approach dramatically reduces the number of full evaluations required to predict the performance metrics (<100 instead of millions), enabling comprehensive exploration of high-dimensional parameter spaces (5n − 1 dimensions for n ≥ 2 channels).
Discussion
A fully automatic, online accessible tool for personalized TI stimulation planning has been established that leverages AI and image-based simulations. By introducing hybridized, iterative surrogate modeling and MOGA, systematic, comprehensive, and computationally tractable optimization in high-dimensional parameter spaces is achieved and interactive weighting of conflicting objectives becomes possible. The comprehensive search reduces the level of required user expertise, removes arbitrariness, and ensures identification of optimal conditions. The method readily generalizes to non-classic forms of multi-channel TI.





Acknowledgements
---
References
[1] https://doi.org/10.1016/j.cell.2017.05.024
[2] https://tip.itis.swiss
[3] https://doi.org/10.1088/1741-2552/adb88f
[4] https://doi.org/10.1098/rstb.2001.0915
[5] https://doi.org/10.1073/pnas.171473898
[6] https://doi.org/10.1007/978-3-642-20859-1_3
[7] https://doi.org/10.1007/978-981-19-8851-6_31-1
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P216: Modeling spreading depolarization in control and ischemic neocortical microcircuits using immunostaining-identified capillaries
Monday July 7, 2025 16:20 - 18:20 CEST
P216 Modeling spreading depolarization in control and ischemic neocortical microcircuits using immunostaining-identified capillaries

Adam JH Newton*1, Craig Kelley2, Siyan Guo3, Joy Wang3, Sydney Zink4, Marcello DiStasio4,5, Robert A McDougal3,5,6,7, William W Lytton1,8,9,10

1 Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, New York
2 Department of Biomedical Engineering, Columbia University, New York, NY
3 Department of Biostatistics, Yale University, New Haven, CT, United States
4 Department of Pathology, Yale School of Medicine, New Haven, CT, United States
5 Wu Tsai Institute, Yale University, New Haven, CT, United States
6 Department of Biomedical Informatics and Data Science, Yale University, New Haven, CT, United States
7 Program in Computational Biology and Biomedical Informatics, Yale University, New Haven, CT, United States
8 Department of Neurology, SUNY Downstate Health Sciences University, Brooklyn, New York
9 Department of Neurology, Kings County Hospital Center, Brooklyn, New York
10 The Robert F. Furchgott Center for Neural and Behavioral Science, Brooklyn, New York
*Email: adam.newton@neurosim.downstate.edu








Introduction
Brain tissue requires substantial energy to support neural information processing, particularly for restoring ion homeostasis following action potentials. This high energy demand leaves the system vulnerable to failures of homeostasis, such as spreading depolarization (SD). SDs are waves of prolonged depolarization, preceded by a brief period of hyperexcitability, that propagate through grey matter at 1-9 mm/min [1]. Multiple neurological disorders can lead to SD, including migraine aura, epilepsy, traumatic brain injury, and ischemic stroke.



Methods
We modeled point neurons with Hodgkin-Huxley-style channels augmented with homeostatic mechanisms, including the Na+/K+-ATPase, NKCC1, KCC2, and dynamic volume changes. Astrocytic buffering was modeled as a field of oxygen-dependent and oxygen-independent clearance of extracellular K+. Connectivity was based on prior models, and the weights and the distribution of external drive were scaled to account for differences from conductance-based integrate-and-fire models [2,3]. A 2.0 x 2.3 cm cross-section of the human cortical plate in V1, immunostained for CD34, was used to determine the locations of 918 capillaries (mean capillary density: 199.6/cm2; mean±SD capillary cross-sectional area: 16.7±11.9 μm2).


Results
We used NEURON/RxD/NetPyNE to simulate 13,000 neurons representing ~1 mm3 of mouse cortex (layers 2-6), monitoring the concentrations of Na+, K+, Cl-, and oxygen, both intra- and extracellularly [4-7]. Spreading depolarization could be reliably triggered in each layer by elevating extracellular K+, with differences in propagation speed between layers.


Discussion
We use this model to explore the hypothesis that vascular heterogeneity leads to areas where neurons and astrocytes are well supplied with oxygen and can better maintain normal activity following insult (increased extracellular K+ or reduced perfusion). We also examined the mechanisms that could give rise to the greater susceptibility and propagation speeds in the superficial layers compared with the deep cortical layers [8].




Acknowledgements
This research was funded by the National Institute of Mental Health, National Institutes of Health, grant number R01 MH086638, with HPC time from NIH S10 award 1S10OD032417-01 and the Yale Center for Research Computing McCleary cluster.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
1. https://doi.org/10.1007/s12028-021-01429-4
2. https://doi.org/10.1093/cercor/bhs3586
3. https://doi.org/10.1162/neco_a_01400
4. https://doi.org/10.3389/fninf.2022.884046
5. https://doi.org/10.3389/fninf.2018.00041
6. https://doi.org/10.7554/eLife.44494
7. https://doi.org/10.1523/ENEURO.0082-22.2022
8. https://doi.org/10.1177/0271678X16659496
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P217: The Generalized Activating Function: Accelerating Axonal Dynamics Modeling for Spinal Cord Stimulation Optimization
Monday July 7, 2025 16:20 - 18:20 CEST
P217 The Generalized Activating Function: Accelerating Axonal Dynamics Modeling for Spinal Cord Stimulation Optimization

Javier García Ordóñez*1,2, Taylor Newton1, Abdallah Alashqar3,4, Andreas Rowald3,4, Esra Neufeld1, & Niels Kuster1,5

1 IT’IS Foundation, Zürich, Switzerland
2 Zürich MedTech AG, Zürich, Switzerland
3 Department of Medical Informatics, Biometry and Epidemiology, Friedrich-Alexander-Universität Erlangen-Nürenberg, Erlangen, Germany
4 Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürenberg, Erlangen, Germany
5 Swiss Federal Institute of Technology (ETH Zurich), Zürich, Switzerland

*Email: ordonez@itis.swiss

Introduction

The classical Activating Function (AF) provides a fast, linear estimator of membrane polarization as a predictor of stimulation by extracellular electric potential exposure [1]. While computationally efficient, the classical AF fails to account for membrane leakage currents, diffusive interactions between adjacent axonal segments, complex fiber models (multi-cable, with periaxonal and paranodal compartments), and the influence of the stimulation waveform, limiting its accuracy and usefulness in complex neurostimulation scenarios.

Methods
The Generalized Activating Function (GAF) is a biophysics-based predictor that overcomes these limitations while preserving computational efficiency. The GAF extends the classical framework by convolving the extracellular potential with a Green's function kernel to account for the dynamics of membrane polarization, including axial currents and membrane leakage. A fast Fourier transform is used for the convolutions, producing spike predictions more than 1000× faster than conventional compartmental modeling. The GAF's formulation accurately predicts dynamic responses in complex fiber models, such as the McIntyre-Richardson-Grill myelinated fiber model [2].
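The central operation, an FFT-based convolution of the extracellular potential with a Green's-function kernel, can be sketched as follows; the exponential kernel stands in for the actual cable-derived kernel, and all parameter values are illustrative.

    import numpy as np

    def gaf_response(v_ext, kernel, dt):
        # Linear estimate of membrane polarization: convolve the extracellular
        # potential time course with the impulse-response kernel via FFT.
        n = len(v_ext) + len(kernel) - 1
        out = np.fft.irfft(np.fft.rfft(v_ext, n) * np.fft.rfft(kernel, n), n)
        return out[:len(v_ext)] * dt

    dt, tau = 1e-5, 1e-3                        # 10 us step, 1 ms time constant (assumed)
    t = np.arange(0.0, 5e-3, dt)
    kernel = np.exp(-t / tau) / tau             # toy first-order membrane kernel
    v_ext = np.where((t > 1e-3) & (t < 1.3e-3), 1.0, 0.0)   # 0.3 ms square pulse
    polarization = gaf_response(v_ext, kernel, dt)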
Results
We first verified the GAF by reproducing benchmark experimental and computational data (e.g., strength-duration curves and diameter-dependent rheobase values for different fiber types). Next, we applied the GAF to a clinically validated, realistic model of spinal cord stimulation (SCS) [3]. The GAF's spike predictions matched those of full electrophysiological simulations, with compute times reduced from hours to seconds. Finally, we leveraged the GAF's speed and efficiency to explore the design of superior stimulation waveforms and electrode configurations that enhance the selectivity and energy efficiency of SCS. GAF-guided pulse shape optimization discovered charge-balanced waveforms that double recruitment efficacy or reduce power consumption five-fold relative to commonly applied stimulation waveforms.
Discussion
These results demonstrate that the GAF dramatically accelerates neurostimulation modeling without significant loss of accuracy, thereby facilitating large-scale explorations of stimulation parameters and the identification of personalized neuromodulation strategies. By bridging the gap between computational modeling and clinical practice, the GAF paves the way for optimized, patient-specific neurostimulation therapies.





Acknowledgements
No acknowledgements.
References
1. https://doi.org/10.1109/TBME.1986.325670
2. https://doi.org/10.1152/jn.00353.2001
3. https://doi.org/10.1038/s41591-021-01663-5




Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P218: Using AI Technologies To Boost the Simulation and Personalization of Brain Network Models
Monday July 7, 2025 16:20 - 18:20 CEST
P218 Using AI Technologies To Boost the Simulation and Personalization of Brain Network Models

Alessandro Fasse1, Chiara Billi*1, Taylor Newton1, Esra Neufeld1

1 Foundation for Research on Information Technologies in Society (IT’IS), Zurich, Switzerland

*Email:billi@zmt.swiss

Introduction

Neural mass models (NMMs) approximate the collective dynamics of neuronal populations through aggregated state variables, facilitating large-scale brain network simulations and the calculation of in silico electrophysiological signals. Despite their utility in exploring emergent network phenomena across brain states, scaling these models to whole-brain simulations imposes steep computational costs. To address this, we leveraged the inherent parallelism of NMM computations and their AI-like computational structure to develop a GPU-accelerated framework based on computation graphs.

Methods
Our framework simulates networks of Jansen-Rit (JR)-type NMMs [1] in both region- and surface-based configurations, while supporting weight matrix-based inter-region connectivity, local coupling definition, and stochastic integration. All JR model state variables are consolidated into a single PyTorch [2] tensor, enabling efficient batch processing across multiple GPUs with mixed-precision (16-, 32-, and 64-bit) support. Structuring the entire computation as a graph enables access to automatic differentiation, allowing gradient tracking throughout the simulation and gradient-based optimization of model parameters.
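
A minimal sketch of this design, assuming a stochastic Euler scheme (parameter values follow Jansen and Rit; the network size, connectivity, noise level, and loss are illustrative placeholders, not the framework's API):

import torch

N = 1000                                   # number of nodes (illustrative)
A, B, a, b = 3.25, 22.0, 100.0, 50.0       # PSP amplitudes (mV) and rate constants (1/s)
e0, v0, r = 2.5, 6.0, 0.56                 # sigmoid parameters
C = 135.0
c1, c2, c3, c4 = C, 0.8 * C, 0.25 * C, 0.25 * C

def S(v):                                  # population sigmoid
    return 2 * e0 / (1 + torch.exp(r * (v0 - v)))

y = torch.zeros(6, N)                      # all JR state variables in one tensor
W = torch.rand(N, N) / N                   # inter-region connectivity (illustrative)
G = torch.tensor(1.0, requires_grad=True)  # global coupling, to be optimized

dt = 1e-4
for _ in range(1000):                      # Euler-Maruyama integration
    y0, y1, y2, y3, y4, y5 = y
    inp = G * (W @ S(y1 - y2))             # network input to each node
    dy = torch.stack([
        y3, y4, y5,
        A * a * S(y1 - y2) - 2 * a * y3 - a**2 * y0,
        A * a * (c2 * S(c1 * y0) + inp) - 2 * a * y4 - a**2 * y1,
        B * b * c4 * S(c3 * y0) - 2 * b * y5 - b**2 * y2,
    ])
    y = y + dt * dy + dt**0.5 * 0.01 * torch.randn_like(y)

loss = ((y[1] - y[2]).mean())**2           # toy loss on the mean PSP output
loss.backward()                            # gradients flow back through the simulation

Because every operation is a node in the computation graph, the same simulation code yields gradients of any loss with respect to parameters such as the global coupling G.
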
Results
We compared our framework’s performance to The Virtual Brain (TVB) [3], a widely used library for whole-brain NMM simulations. For modeling 10 seconds of stochastic activity in a 20k-node network, our method completed the task in 20 seconds, compared to 18 minutes with TVB (a ~55-fold speedup). The implementation was verified, e.g., by reproducing results from [4] on sharp transitions in network synchronization (measured by the Kuramoto parameter) as a function of the global coupling coefficient.
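
For reference, the Kuramoto order parameter used here as the synchronization measure is commonly defined as

R(t) = \left| \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j(t)} \right|,

where \theta_j(t) is the oscillation phase of node j; R close to 0 indicates incoherent activity and R close to 1 indicates full synchronization.
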
Discussion
Our findings highlight the feasibility of large-scale, high-fidelity neural mass simulations with runtimes suitable for online or iterative workflows. By leveraging computation graphs for parallel processing and automatic differentiation, the framework opens avenues for gradient-based parameter fitting and real-time state estimation. The replication of established emergent phenomena supports the model’s validity and suggests broader applications, including pathological and adaptive networks. Future work will extend these capabilities to other NMM types and explore integration with multi-scale brain models.



Acknowledgements
No acknowledgements.
References
● https://doi.org/10.1016/j.neuroimage.2023.119938
● "Automatic differentiation in PyTorch", A. Paska, S. Gross, S. Chintala, G. Chanan, Y. Edward, Z. DeVito, Z. Lin, A. Desmaison, L. Antigua and A. Lerer, 2017
● https://doi.org/10.3389/fninf.2013.00010
● https://doi.org/10.1371/journal.pone.0292910


Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P219: Training the Drosophila Connectome as an Autoencoder - Reproduction of Direction Selectivity in T4/T5 Cells -
Monday July 7, 2025 16:20 - 18:20 CEST
P219 Training the Drosophila Connectome as an Autoencoder - Reproduction of Direction Selectivity in T4/T5 Cells -

Naoya Nishiura*1, Keisuke Toyoda1, Masataka Watanabe1

1The University of Tokyo, Tokyo, Japan

*Email: naoya-nishiura@g.ecc.u-tokyo.ac.jp

Introduction

Connectome-based modeling provides a powerful framework to investigate the inner workings of the biological brain, but its supervised training often relies on external labels unavailable in real environments [1]. In the Drosophila visual system, prior work employed a connectome-constrained network with optical flow as the teaching signal [2], whereas neural circuits in Drosophila do not have access to such labels. To address this limitation, we adopted an autoencoder-like strategy: a network reconstructing R1–8 neural responses. We confirmed the development of direction selectivity in T4 and T5 cells, leveraging only the brain’s innate connectivity [3].

Methods
We adopted the non-trained circuitry of the deep mechanistic network (DMN) of the Drosophila optic lobe [2], while removing the artificial network receiving optical flow as the teaching signal. Instead, we introduced a set of “phantom” R1–8 neurons that only receive feedback from the optic lobe. During training, the model’s sensory input was compared to the outputs of these phantom neurons via an L2 reconstruction loss. We preserved the native connectome structure, including the hexagonal columnar organization. Standard gradient-based optimization was used to update neuronal and synaptic parameters.
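
Conceptually, the objective reduces to an L2 reconstruction loss between the sensory input and the feedback-driven phantom units. A hedged PyTorch sketch (function names and tensor shapes are our assumptions, not the authors' code):

import torch

def reconstruction_loss(model, x_r18):
    """x_r18: (batch, time, n_photoreceptors) sensory input to R1-8.
    model returns the activity of the phantom R1-8 units, which are
    driven only by feedback from the optic-lobe circuitry."""
    phantom_r18 = model(x_r18)
    return ((phantom_r18 - x_r18) ** 2).mean()   # L2 reconstruction loss

# Standard gradient-based training step, as described above:
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = reconstruction_loss(model, batch)
# opt.zero_grad(); loss.backward(); opt.step()
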
Results
After training, the DMN produced retina-like activity patterns in its intermediate layers, effectively mapping spatial shadows across the hexagonal retinotopic array [4]. Notably, T4 neurons acquired direction-selective responses comparable to those observed in supervised settings, though the preferred directions were not identical to biological measurements [2]. These results demonstrate that training a connectome-based autoencoder architecture yields motion-selective T4 and T5 neurons, reproducing the function of the Drosophila optic lobe.
Discussion
Our findings show that biologically plausible, connectome-constrained networks can self-organize fundamental visual computations through an autoencoder framework rather than through explicit teaching signals [2]. By exploiting Drosophila’s neural connectivity and reconstructing phantom R1–8 neurons, the model reveals how intrinsic circuit architecture may lead to the acquisition of direction-selective cells. Our results illustrate the potential of training connectome networks under a biologically plausible architecture, namely autoencoders, which may lead to near-ground-truth neural dynamics.




Acknowledgements
This work has been supported by the Mohammed bin Salman Center for Future Science and Technology for Saudi-Japan Vision 2030 at The University of Tokyo (MbSC2030) and JSPS KAKENHI Grant Number 23K25257.
References
[1]https://doi.org/10.3389/fncom.2016.00094
[2]https://doi.org/10.1038/s41586-024-07939-3
[3]https://doi.org/10.7554/eLife.40025
[4]https://doi.org/10.1073/pnas.1509820112

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti


16:20 CEST

P221: Psilocybin Accelerates EEG Microstate Transitions and Elevates Approximate Entropy
Monday July 7, 2025 16:20 - 18:20 CEST
P221 Psilocybin Accelerates EEG Microstate Transitions and Elevates Approximate Entropy

Filip Novický1*, Adeel Razi2,3,4, Fleur Zeldenrust1


1Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, Nijmegen, 6525 AJ, Netherlands
2Turner Institute for Brain and Mental Health, Monash University, Clayton, 3168, Victoria, Australia
3Wellcome Centre for Human Neuroimaging, University College London, London, WC1N 3AR, UK
4CIFAR Azrieli Global Scholars Program, CIFAR, Toronto, Canada


*Email: filip.novicky@donders.ru.nl
Introduction

While psilocybin’s therapeutic potential is well documented, its effects on brain function remain incompletely understood. The relaxed beliefs under psychedelics (REBUS) theory proposes that psychedelics weaken the brain’s rigid thought patterns by reducing the influence of existing mental frameworks, hence increasing neural entropy [1]. This study investigated how psilocybin affects the brain’s spatiotemporal dynamics at the millisecond scale using EEG, and examined whether its effects are modulated by mindfulness training and different cognitive states.


Methods
We analyzed EEG data from 63 participants (33 with mindfulness training, 30 without) during four conditions: video watching, resting state, meditation, and music listening, both before and after the consumption of psilocybin (19 mg). Using EEG microstate analysis, we examined the temporal characteristics of four canonical brain states [2]. We complemented this with approximate entropy analysis to quantify signal complexity [3]. Statistical comparisons were performed across conditions, groups, and drug states with FDR correction.
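
For concreteness, approximate entropy [3] can be computed as in the standard formulation below; the template length m and tolerance r are set to typical values, not necessarily the study's exact settings.

import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """ApEn (Pincus 1991): regularity of a 1D signal; higher = less regular."""
    x = np.asarray(x, float)
    r = r_factor * x.std()                     # tolerance scaled to signal SD
    def phi(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between all pairs of m-length templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)              # fraction of templates within r
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

# e.g., apen = approximate_entropy(eeg_channel)
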


Results
Psilocybin significantly altered brain dynamics during the eyes-closed conditions, increasing microstates’ occurrence rates while decreasing their durations. Mindfulness training showed no significant effect on these changes. Approximate entropy analysis revealed increased signal complexity, particularly during the eyes-closed states. While brain activity patterns primarily differed between eyes-open and eyes-closed states, psilocybin notably diminished the typical neural activity differences between passive rest and attentional states (meditation and music).


Discussion
Our findings support the REBUS theory’s prediction of increased entropy of neuronal activity under psychedelics, particularly during the eyes-closed states. The combination of increased microstate transition rates and elevated signal complexity suggests psilocybin creates a more dynamic and less constrained brain state, in agreement with previous studies [4]. Thus, the results of this study suggest that psychedelics can temporarily alter the brain’s typical processing patterns.





Acknowledgements
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 953327. This work benefited from the guidance of Adeel Razi's lab regarding the PsiConnect dataset.
References
1. https://doi.org/10.1124/pr.118.017160
2. https://doi.org/10.1016/j.neuroimage.2017.11.062
3. https://doi.org/10.1073/pnas.88.6.2297
4. https://doi.org/10.1038/srep46421
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P222: Computational framework for analyzing oscillatory patterns in neural circuit models
Monday July 7, 2025 16:20 - 18:20 CEST
P222 Computational framework for analyzing oscillatory patterns in neural circuit models

Nikita Novikov1*, Chelsea Ekwughalu1,2, Samuel Neymotin1,3, Salvador Dura-Bernal1,4

1Center for Biomedical Imaging & Neuromodulation, The Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA
2Department of Physics, Barnard College, New York, NY, USA
3Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA
4Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA

*Email: nikknovikov@gmail.com
Introduction

Neural oscillations coordinate brain activity, with abnormal patterns linked to neurological disorders. Understanding their emergence from biological parameters is crucial for effective intervention. While biophysically detailed models provide mechanistic insight, their complexity makes direct analysis computationally expensive and mathematically intractable. To address this, we present a computational framework for the systematic exploration of key parameters governing oscillatory dynamics in large-scale neural networks.
Methods
Our approach relies on the eigenmode decomposition of frequency-dependent transfer matrices, originally proposed in [1] for LIF neurons. In contrast to [1], we do not derive the transfer matrices analytically; instead, we developed a toolbox for their numerical estimation, extending the method to arbitrary models. The estimation is done by automatic construction and simulation of surrogate models, in which a single population remains intact, the others are replaced by equivalent spike generators, and a sinusoidal signal is added to the probed input. The toolbox is built on top of the NetPyNE framework [2] and supports high-performance parallel simulations.
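
The core estimation step can be sketched as follows (our illustration of the idea, not the toolbox API): the complex transfer coefficient from a probed input to a population's firing rate is the ratio of their Fourier components at the probe frequency.

import numpy as np

def transfer_coefficient(rate_out, probe_in, dt, f_probe):
    """Complex gain from the probed input to the population rate at f_probe.
    rate_out should be the deviation of the firing rate from baseline, and
    the window is assumed to contain an integer number of probe cycles."""
    t = np.arange(len(probe_in)) * dt
    basis = np.exp(-2j * np.pi * f_probe * t)
    num = np.sum(rate_out * basis)    # Fourier component of the response
    den = np.sum(probe_in * basis)    # Fourier component of the probe
    return num / den                  # gain (magnitude) and phase shift (angle)

# Repeating this over population pairs and frequencies fills the
# frequency-dependent transfer matrix whose eigenmodes are analyzed below.
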
Results
We validated our approach on a simplified model of cortical layers 2/3 and 4, demonstrating that it accurately decomposes network activity into oscillatory modes and predicts the amplitudes and phases of the oscillations (Fig. 1A-C). Using the computed transfer matrices, we estimated the effects of synaptic weight perturbations by modifying the relevant transfer coefficients and analyzing the resulting eigenmodes, without needing full-model simulations. These predictions closely matched direct simulations of the perturbed model (Fig. 1D, E), confirming that our method reliably identifies key connections that shape oscillatory activity.
Discussion
We propose a framework for systematically exploring the relationship between biological parameters and emergent oscillations. Our tool estimates inter-population transfer coefficients through multiple independent simulations of simple surrogate models, a process well-suited for efficient parallelization. Once computed, these coefficients provide insight into the full model’s oscillatory modes and their sensitivity to parameter perturbations. Our results validate the approach, demonstrating its potential for analyzing neural circuits and informing future neurostimulation and pharmacological interventions.




Figure 1. A – power spectral densities (PSDs). B – eigenmode amplitudes. C – complex relations between L2e and other populations at 60 Hz; arrows – projections of the 1st mode onto populations; black – distribution of simulated instantaneous relations. D, E – effects of L2e->L2i weight perturbation on the 1st mode amplitude (D) and L2i PSD (E).
Acknowledgements
The work is supported by the grants: R01 MH134118-01, RF1NS133972-01, R01DC012947-06A1, R01DC019979, ARL Cooperative Agreement W911NF-22-2-0139, P50 MH109429
References
1. Bos, H., Diesmann, M., & Helias, M. (2016). Identifying Anatomical Origins of Coexisting Oscillations in the Cortical Microcircuit. PLOS Computational Biology, 12(10), e1005132. https://doi.org/10.1371/journal.pcbi.1005132
2. Dura-Bernal, S., Suter, B. A., Gleeson, P., Cantarelli, M., Quintana, A., Rodriguez, F., Kedziora, D. J., Chadderdon, G. L., Kerr, C. C., Neymotin, S. A., McDougal, R. A., Hines, M., Shepherd, G. M., & Lytton, W. W. (2019). NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife, 8, e44494. https://doi.org/10.7554/eLife.44494
Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

16:20 CEST

P223: Auto-adjoint method for Spiking Neural Networks
Monday July 7, 2025 16:20 - 18:20 CEST
P223 Auto-adjoint method for Spiking Neural Networks

Thomas Nowotny*1, James C. Knight1

1School of Engineering and Informatics, University of Sussex, Brighton, UK

*Email: t.nowotny@sussex.ac.uk


Introduction
It is important for the success of neuromorphic computing and computational neuroscience to be able to efficiently train spiking neural networks (SNNs). In 2021, Wunderlich and Pehle published the Eventprop algorithm [1], which is based on the adjoint method for hybrid continuous-discrete systems [2]. Eventprop casts the backward pass, which calculates the gradient of a loss function over an SNN, into a hybrid continuous-discrete system of the same nature as the forward dynamics of the SNN. Therefore, Eventprop can be implemented efficiently on both existing SNN software simulators [3] and digital neuromorphic hardware [4].


Methods

Here, we present new work that takes Eventprop to the next level. The original Eventprop algorithm [1] was derived explicitly for leaky integrate-and-fire (LIF) neurons and “exponential” synapses. The adjoint method for hybrid systems is much more general [2], and [5] already presents a more general set of equations. Here, we choose a level of generality that allows us to derive the general adjoint equations in a form explicit enough that the sympy symbolic math Python package can automatically generate code to simulate them.
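
As a much-simplified illustration of this idea (ours, not the mlGeNN implementation), sympy can derive the continuous part of the adjoint dynamics, lambda' = -(df/dx)^T lambda, directly from symbolic neuron ODEs; here for a LIF neuron with an exponential synapse:

import sympy as sp

V, I, lamV, lamI = sp.symbols("V I lambda_V lambda_I")
tau_m, tau_s = sp.symbols("tau_m tau_s", positive=True)

f = sp.Matrix([(-V + I) / tau_m,   # dV/dt: LIF membrane
               -I / tau_s])        # dI/dt: exponential synapse
state = sp.Matrix([V, I])
lam = sp.Matrix([lamV, lamI])

adjoint_rhs = -f.jacobian(state).T * lam   # backward-time adjoint ODEs
sp.pprint(sp.simplify(adjoint_rhs))
# -> [lambda_V/tau_m, -lambda_V/tau_m + lambda_I/tau_s]
# The jump conditions at spike events, which complete the Eventprop
# backward pass, are derived analogously from the spike and reset functions.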

Results
We assume that the neurons of the SNN being trained have internal dynamics described by ordinary differential equations and that their spiking condition and reset behaviour are described by functions of the neurons’ variables. Finally, we assume that the action caused by an incoming spike is an instantaneous addition to a neuron variable. Under these general assumptions, we derived a backward pass for adjoint variables as in the original Eventprop and implemented it in our mlGeNN spike-based machine learning framework [6] using sympy. For leaky integrate-and-fire neurons and exponential synapses, the new framework matches the performance of the previous, standard Eventprop implementation on popular benchmarks.


Discussion

We have created a new version of mlGeNN that, based on the generalised Eventprop method presented here, allows researchers to rapidly train SNNs with virtually any neuron dynamics using gradient descent with exact gradients. This includes more complex dynamics, such as Hodgkin-Huxley conductance-based models, opening new avenues for injecting function into computational neuroscience models. This new capability is akin to the auto-diff functionality of PyTorch, which has been instrumental in the recent AI revolution.



Acknowledgements
This work was partially funded by the EPSRC, grants EP/V052241/1 and EP/S030964/1.
References
[1] Wunderlich, T. C., & Pehle, C. (2021). Scientific Reports, 11(1), 12829.
[2] Galán, S., Feehery, W. F., & Barton, P. I. (1999). Appl. Num. Math., 31(1), 17-47.
[3] Nowotny, T., Turner, J. P., & Knight, J. C. (2025). Neurom. Comput. Eng., 5(1), 014001.
[4] Gabriel, B., Timo, W., Mahmoud, A., Bernhard, V., Christian, M., & Hector, A. G. (2024). arXiv preprint arXiv:2412.15021.
[5] Pehle, C. G. (2021). Adjoint equations of spiking neural networks (Doctoral dissertation).
[6] Turner, J. P., Knight, J. C., Subramanian, A., & Nowotny, T. (2022). Neurom. Comput. Eng., 2(2), 024002.

Monday July 7, 2025 16:20 - 18:20 CEST
Passi Perduti

19:10 CEST

Banquet dinner
Monday July 7, 2025 19:10 - 21:40 CEST
Monday July 7, 2025 19:10 - 21:40 CEST
TBA
 
Tuesday, July 8
 

08:30 CEST

Registration
Tuesday July 8, 2025 08:30 - 19:00 CEST
Tuesday July 8, 2025 08:30 - 19:00 CEST

09:00 CEST

09:00 CEST

Brain-Inspired Computing
Tuesday July 8, 2025 09:00 - 12:30 CEST
Brain-inspired computing seeks to mimic how the human brain works to improve artificial intelligence (AI) systems. This area has gained considerable interest recently because it helps us create stronger and more efficient AI models while tackling challenges faced by current artificial neural networks.

This workshop will cover a range of topics, including biological neural networks, cognitive computing, and biologically-inspired algorithms. We will discuss how learning from the brain's structure and operations can lead to new solutions for complex issues in AI, machine learning, and data processing.

The workshop will include talks from experts in the field and interactive panel discussions. Participants will have the chance to collaborate, share ideas, and connect with others who are excited about using biological principles to advance technology.

Full program in this link.

Schedule
9:00 AM - 9:30 AM Speaker: Rui Ponte Costa, University of Oxford
A theory of self-supervised learning in cortical layers
9:30 AM - 10:00 AM Speaker: Guillaume Bellec, Vienna University of Technology
Validating biological mechanisms in deep brain models with optogenetic perturbation testing
10:00 AM - 10:30 AM Speaker: Guozhang Chen, Peking University
Characteristic differences between computationally relevant features of cortical microcircuits and artificial neural networks
10:30 AM - 11:00 AM Coffee Break
11:00 AM - 11:30 AM Speaker: Robert Legenstein, Graz University of Technology
Rapid learning with phase-change memory-based neuromorphic hardware through learning-to-learn
11:30 AM - 12:00 PM Speaker: Shogo Ohmae, Chinese Institute for Brain Research
World-model-based versatile computations in the neocortex and the cerebellum
12:00 PM - 12:30 PM Speaker: Yuliang Zang, Tianjin University
Biological strategies for efficient learning in cerebellum-like circuits
12:30 End of Workshop and Lunch Break

Speakers
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 5

09:00 CEST

Enabling synaptic plasticity, structural plasticity, and multi-scale modeling with morphologically detailed neurons using Arbor
Tuesday July 8, 2025 09:00 - 12:30 CEST
Current computational neuroscience studies are often limited to a single scale or simulator, with many still relying on standalone simulation code due to computational power and technology constraints. Simulations incorporating biophysical properties and neural morphology typically focus on single neurons or small networks, while large-scale neural network simulations often resort to point neurons as a compromise to incorporate plasticity and cell diversity. Whole-brain simulations, on the other hand, frequently sacrifice details at the individual neuron and network composition levels.
This workshop introduces recent advances leveraging the next-generation simulator Arbor, designed to overcome these challenges. Arbor enables seamless conversion from the widely used NEURON simulator, facilitates the study of functional and structural plasticity in large neural networks with detailed morphology, and supports multi-scale modeling through co-simulation, integrating microscopic and macroscopic levels of simulation.
Arbor is a library optimized for efficient, scalable neural simulations by utilizing both GPU and CPU resources. It supports the simulation of both individual neurons and large-scale networks while maintaining detailed biophysical properties and morphological complexity. The workshop will feature presentations covering key aspects:

Effortless Transition from NEURON to Arbor - Dr. Beatriz Herrera - Allen Brain Institute, USA
Introducing the SONATA format, which simplifies the migration process and enables cross-simulator validation, ensuring a smooth transition to Arbor for researchers familiar with NEURON.

Structural Plasticity Simulations - Marvin Kaster & Prof. Felix Wolf - TU Darmstadt, Germany 
Presenting ReLEARN and Arbor’s capabilities in modeling distance-dependent structural plasticity, providing insights into structural changes.

Synaptic Plasticity -  Dr. Jannik Luboeinski - University of Göttingen, Germany
Showcasing Arbor’s capabilities in modeling calcium-based functional plasticity.

Multi-Scale Co-Simulation with TVB -  Prof. Thanos Manos - CY Cergy-Paris University, France
Demonstrating Arbor’s co-simulation with The Virtual Brain (TVB) platform, illustrating the study of epilepsy propagation as an example of multi-scale modeling.

The workshop will conclude with an interactive coding session, offering participants hands-on experience with Arbor and an opportunity to apply the presented concepts.
Speakers
Han Lu

postdoc, Forschungszentrum Jülich
Tuesday July 8, 2025 09:00 - 12:30 CEST
Belvedere room

09:00 CEST

Inference Methods for Neuronal Models: from Network Activity to Cognition
Tuesday July 8, 2025 09:00 - 12:30 CEST
The development of models for neuronal systems have matured in recent years and they exhibit increasing complexity thanks to computer resources for simulation. In parallel, the increasing availability of data poses the challenge to quantitatively related those models to data, going beyond reproducing qualitative activity patterns and behavior. Model inference is thus becoming an indispensable tool for unraveling the mechanisms underlying brain dynamics, behavior, and (dys)function. A critical aspect of this endeavor is the ability to infer changes across multiple scales, from neurotransmitters and synaptic interactions to neural circuits and whole-brain networks. Recent approaches that have been adopted by the neuroscience community include methods for directed effective connectivity (e.g. dynamical causal modeling), simulation-based inference on whole-brain models, and active inference for understanding perception, action and behavior. They have significantly enhanced our ability to interpret data by modeling underlying mechanisms and neuronal processes. This workshop will bring together experts from diverse fields to explore the state-of-the-art methodologies, taking specific applications as examples to compare them and highlight remaining challenges.

Speakers
Matthieu Gilson

chair of junior professor, Aix-Marseille University
Meysam Hashemi

Research Fellow
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 103

09:00 CEST

Mechanisms for Oscillatory Neural Synchrony
Tuesday July 8, 2025 09:00 - 12:30 CEST
https://www.medschool.lsuhsc.edu/cell_biology/cns_2025.aspx

CNS*2025 in Florence, Italy, on July 8, 2025, from 9:00 to 12:30. This workshop will bring together researchers who have recently published on synchronization in networks of coupled oscillators, with a mix of approaches but an emphasis on phase response curve (PRC) theory. The researchers come from both theoretical and experimental backgrounds. Topics include synchronization mechanisms for theta-nested gamma in the medial entorhinal cortex, mean-field pulsatile coupling methods for fast oscillations in inhibitory networks, beta oscillations in the parkinsonian basal ganglia, the relative contributions of synaptic and ultra-fast non-synaptic ephaptic coupling to the inhibition of cerebellar Purkinje cells by basket cells, the infinitesimal macroscopic PRC (imPRC) within exact mean-field theory applied to ING and PING, and robustness in a neuromechanical model of motor pattern generation.
Carmen Canavier,  LSU Health Sciences Center New Orleans:  “A Mean Field Theory for Pulse-Coupled Oscillators based on the Spike Time Response Curve”
Joshua A Goldberg, Hebrew University of Jerusalem:  “Empirical study of dendritic integration and entrainment of basal ganglia pacemakers using phase response curves”
Dimitri M Kullmann, University College London: “Basket to Purkinje Cell Inhibitory Ephaptic Coupling Is Abolished in Episodic Ataxia Type 1”
Hermann Riecke, Northwestern University: “Paradoxical phase response of gamma rhythms facilitates their entrainment in heterogeneous networks”
Yangyang Wang, Brandeis University: “Variational and phase response analysis for limit cycles with hard boundaries, with applications to neuromechanical control problems”
Brandon Williams, Boston University: “Fast spiking interneurons generate high frequency gamma oscillations in the medial entorhinal cortex”
Speakers
Carmen Canavier

Mullins Professor and Department Head, LSU Health Sciences Center NO
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 6

09:00 CEST

Multiscale Modeling of Electromagnetic Field Perturbations on Neural Activity
Tuesday July 8, 2025 09:00 - 12:30 CEST
Speakers
Alberto Arturo Vergani

Research Fellow, University of Pavia
Tuesday July 8, 2025 09:00 - 12:30 CEST
Hall 3B

09:00 CEST

Neuromodulation, sleep-dependent brain dynamics and information processing
Tuesday July 8, 2025 09:00 - 12:30 CEST
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 9

09:00 CEST

09:00 CEST

Cross-species modeling of brain structure and dynamics
Tuesday July 8, 2025 09:00 - 13:00 CEST
Speakers
James Pang

Research Fellow, Monash University
Tuesday July 8, 2025 09:00 - 13:00 CEST
Auditorium

09:00 CEST

Advancing Mathematical Methods in Neuroscience Data Analysis
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
Brief Description: With the ever-increasing amount of data acquired in neuroscience applications, there is an essential need to develop computationally effective, robust, and interpretable data processing algorithms. Recent advancements in graph inference, topology, information theory and deep learning have shown promising results in analyzing biological/physiological data, as well as datasets acquired by intelligent agents. Combining elements from different disciplines of information theory, mathematics, and machine learning is paramount for developing the next generation of methods that will facilitate big data analysis in the service of better understanding brain dynamics, as well as neuroinspired system dynamics in general. The goal of the workshop is to bring researchers working in data science, neuroscience, mathematics, and machine learning together to discuss challenges posed by analyzing multimodal data sets in neuroscience along with potential solutions, to exchange ideas, and to present their latest work in designing and analyzing effective data processing algorithms. This workshop will serve as a great opportunity to discuss innovative future directions for neuroinspired processing of large amounts of data, while considering novel mathematical data models and computationally efficient learning algorithms.

Schedule:

9:00 - 9:40: Kathryn Hess, EPFL, Topological perspectives on the connectome
Abstract: 
Over the past decade or so, tools from algebraic topology have been shown to be very useful for the analysis and characterization of networks, in particular for exploring the relation of structure to function. I will describe some of these tools and illustrate their utility in neuroscience, primarily in the framework of a collaboration with the Blue Brain Project.

9:45 - 10:25: Moo Kyung Chung, University of Wisconsin, Topological Embedding of Dynamically Changing Brain Networks
Abstract:
We introduce a novel topological framework for embedding time-varying brain networks into a low-dimensional space. Our Topological Embedding captures the evolving structure of functional connectivity by mapping dynamic birth and death values of topological features (connected components and cycles) into a 2D plane. Unlike traditional analyses that rely on synchronized time-points or direct comparisons of network matrices, our method aligns the dynamic behavior of brain networks through their underlying topological features, thus offering invariance to temporal misalignments and inter-subject variability. Using resting-state functional magnetic resonance images (rs-fMRI), we demonstrate that the topological embedding reveals stable 0D homological structures and fluctuating 1D cycles across time, which are further analyzed in the frequency domain through the Fourier Transform. The resulting topological spectrograms exhibit strong associations with age and cognitive traits, including fluid intelligence. This study establishes a robust and interpretable topological representation for the analysis of dynamically changing brain networks, with broad applicability in neuroscience and neuroimaging-based biomarker discovery. The talk is based on arXiv:2502.05814

10:30 - 11:00: Coffee Break

11:00 - 11:40: Anna Korzeniewska, Johns Hopkins University, From causal interactions among neural networks to significance in imaging brain tumor metabolism.

Abstract: Neural activity propagates swiftly across brain networks, often not providing enough data-points to model its dynamics. This limitation can be overcome by using multiple realizations, or repetitions, of the same process. However, once repetitions have been consumed for modeling, or only one is available, the significance of the neural dynamics cannot be assessed using traditional statistical methods. We propose a new method for assessing statistical confidence using the variance of a smooth estimator and a criterion for the choice of a smooth ratio. We show their applications to event-related neural propagations among eloquent and epileptogenic networks, and to metabolite kinetics in hyperpolarized 13C MRI (hpMRI) of brain tumor. The event-related causality (ERC) method - a multichannel extension of the Granger causality concept – was applied to multi-channel EEG recordings to estimate the direction, intensity, and spectral content of direct causal interactions among brain networks. A two-dimensional (2D) moving average, with a rectangular smooth window, sliding over points in the time-frequency plane, provided the smooth estimator and its error for statistical testing. The smooth size of the 2D moving average was determined by the W-criterion, which combines the difference between the smooth estimator and the real values with the confidence interval. The same approach was applied to 2D images of hpMRI of pyruvate metabolism of malignant glioma. A newly developed bivariate smoothing model ensured precise embedding of ERC’s statistical significance in time-frequency space, revealing complex frequency-dependent dynamics of causal interactions. The strength and pattern of neural propagations among eloquent networks reflected stimulus modality, lexical status, and syllable position in a sequence, uncovering mechanisms of speech control and modulation. The strength and pattern of high-frequency interactions among epileptogenic networks identified seizure onset zones and unveiled propagations preceding seizure onset. Statistical confidence of the difference between metabolic responses of tumor and normal tissue, obtained through hpMRI, allowed tumor delineation. Moving average provides an efficient smooth estimator and its error (optimal for reducing random noise while retaining sharp step response) and ensures precise embedding of statistical significance in two-dimensional space. The new approach overcomes several limitations of previously used 2D spline interpolation (restraint to a mesh of knots introducing artifactual distributions of variance and significance, and failure to converge in some cases), while W-criterion provides efficient choice of smooth size. The new technique has broad applicability to neuroscientific research and clinical applications, including planning for epilepsy surgery, localizing anatomical targets for responsive neuromodulation, and gauging tumor treatment response.

11:45 - 12:25: Vasileios Maroulas, University of Tennessee Knoxville, The Shape of Uncertainty.

Abstract: How does the brain know where it is and where it is going? Deep within our neural circuits, specialized cells—like head direction and grid cells—fire in intricate patterns to guide spatial awareness and navigation. But decoding these patterns requires tools that can keep up with the brain’s complexity. In this talk, I will share how we are using topological deep learning to do just that. Our new models tap into higher-dimensional structures to predict direction and position—without relying on hand-crafted similarity measures. But that is just the beginning. I will also introduce a Bayesian framework for learning on graphs using sheaf theory, where uncertainty is not a bug but a feature. By placing probability distributions on the rotation group and learning them through the network, we gain robustness, flexibility, and accuracy—especially when data is scarce. Together, these advances point to a bold new direction: using geometry and topology to unlock the brain’s code and reshape how we learn from complex data.





Speakers
Vasileios Maroulas

Professor of Mathematics, University of Tennessee Knoxville
topological machine learning, Bayesian computational statistics, manifold learning
Dave Boothe

Neuroscientist, Army Research Laboratory
Ioannis Schizas

Research Engineer, Army Research Lab
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
Hall 2B

09:00 CEST

Linking structure, dynamics, and function in neuronal networks: old challenges and new directions
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
The full program including abstracts can be found here

https://sites.google.com/view/cns2025workshop-strudynfun/

We are looking forward to seeing you at the workshop.

Wilhelm Braun, Kayson Fakhar and Claus C. Hilgetag
Speakers
Wilhelm Braun

Junior Research Group leader, CAU Kiel, Department of Electrical and Information Engineering
Claus C Hilgetag

Professor, University Medical Center Eppendorf Hamburg, Germany
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
Room 101

09:00 CEST

Computational strategies in epilepsy modelling and seizure control
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Epilepsy remains a complex neurological condition, necessitating innovative approaches to understanding
and mitigating seizure activity. This workshop is designed to bring together computational neuroscientists
and researchers with experimental and clinical background to explore cutting-edge strategies in epilepsy
modeling and seizure control. For the general content structure, we plan to start from a modeler's
perspective and then progressively move towards more data-driven approaches.

The first session will explore seizure mechanisms through biophysical and neural mass models at different
temporal and spatial scales, investigating, among others, ionic dynamics and network plasticity. It aims to
understand seizure initiation, progression, and duration.

The second session will focus on the application of computational models to EEG data recorded in epileptic
patients. First, it will discuss advanced parameter inference methods to tailor models to individual data
samples to provide mechanistic insight. It then moves on to issues of seizure monitoring using wearable
devices and long-term EEG recordings, and in particular the use of data features inspired by concepts
derived from mathematical modeling in epilepsy.

The third session will examine stimulation-based strategies to terminate or prevent seizures. There will
be a focus on recent advancements in closed-loop and low-frequency electrical stimulation to control
seizures. On top of model-based approaches, this session will also include the clinical perspective on
stimulation treatment and data-driven studies.
Speakers
Helmut Schmidt

Scientific researcher, Institute of Computer Science, Czech Academy of Sciences
Jaroslav Hlinka

Senior researcher, Institute of Computer Science of the Czech Academy of Sciences
Guillaume Girier

Postdoc, INSTITUTE OF COMPUTER SCIENCE The Czech Academy of Sciences
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Room 202

09:00 CEST

Population activity: the influence of cell-class identity, synaptic dynamics, plasticity and adaptation
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Title: Population activity: the influence of cell-class identity, synaptic dynamics, plasticity and adaptation.

Organizers: 
Michele GIUGLIANO (co-organizer)
Università degli Studi di Modena e Reggio Emilia - Dipartimento di Scienze Biomediche, Metaboliche e Neuroscienze sede ex-Sc. Biomediche - Italy
michele.giugliano@unimore.it

Simona OLMI (co-organizer)
Institute for Complex Systems - National Research Council - Italy
simona.olmi@fi.isc.cnr.it

Alessandro TORCINI (co-organizer)
Laboratoire de Physique Théorique et Modélisation - CY Cergy Paris Université- Cergy-Pontoise, France
alessandro.torcini@cyu.fr

Abstract:
In recent years, tremendous progress has been made in the comprehension of neural activity at the population level. On one side, this has been possible thanks to newly developed investigation methods (e.g., Neuropixels probes and large-scale imaging) that allow the simultaneous recording of the activity of tens to hundreds of thousands of neurons in awake, behaving mice, as well as established dynamic-clamp protocols.
On the other side, it has been enabled by the development of highly refined mean-field models able to describe the population activity of spiking neural networks encompassing realistic biological features, from different forms of synaptic dynamics to plastic and adaptive aspects present at the neural level.
The aim of this workshop is to gather neuroscientists, mathematicians, engineers, and physicists working on the characterization of population activity from different points of view, ranging from the analysis of experimental data to simulations of large ensembles of neurons, and from next-generation neural mass models to dynamical mean-field theories. The workshop will favour exchange and discussion of very recent developments in this flourishing field.

Key Words: Neuropixels probes; neural mass models; Fokker-Planck formulation; dynamical mean-field theory; short-term and long-term plasticity; excitatory and inhibitory balanced networks; spike-frequency adaptation

Program:
July 8th -- Room 4
9:15-9:30 Opening
9:30-10:00 Anna Levina (University of Tübingen, Germany)
Talk title: "Balancing Excitation and Inhibition in connectivity and synaptic strength"
10:00-10:30 Giacomo Barzon (Padova Neuroscience Center, University of Padova, Italy)
Talk title: "Optimal control of neural activity in circuits with excitatory-inhibitory balance"

10:30-11:00 Coffee break

11:00-11:30 Eleonora Russo (Scuola Superiore Sant'Anna, The BioRobotics Institute, Italy)
Talk title “Integration of rate and phase codes by hippocampal cell-assemblies supports flexible encoding of spatiotemporal context”
11:30-12:00 Tobias Kühn (University of Bern, Switzerland)
Talk title: "Discrete and continuous neuron models united in field theory: statistics, dynamics and computation"
12:00-12:30 Gianluigi Mongillo (Sorbonne Université, INSERM, CNRS, Institut de la Vision, F-75012 Paris, France)
Talk title: “Synaptic encoding of time in working memory”

July 9th -- Room Hall 1A
9:30 - 10:00 Magnus J.E. Richardson (Warwick Mathematics Institute, UK)
Talk title: "Spatiotemporal integration of stochastic synaptic drive within neurons and across networks"
10:00-10:30 Gianni Valerio Vinci (Istituto Superiore di Sanita’, Rome, Italy)
Talk title: "Noise induced phase transition in cortical neural field: the role of finite-size fluctuations"

10:30-11:00 Coffee break

11:00-11:30 Simona Olmi (Institute for Complex Systems - National Research Council - Italy)
Talk title: “Relaxation oscillations in next-generation neural masses with spike-frequency adaptation”
11:30-12:00 Ferdinand Tixidre (CY Cergy Paris University, France)
Talk title: "Is the cortical dynamics ergodic? A numerical study in partially-symmetric networks of spiking neurons"
12:00-12:30 Letizia Allegra Mascaro (Neuroscience Institute, National Research Council, Italy)
Talk title: "State-Dependent Large-Scale Cortical Dynamics in Neurotypical and Autistic Mice"

12:30-14:00 Lunch break

14:00-14:30 Alessandro Torcini (CY Cergy Paris Université- Cergy-Pontoise, France)
Talk title : “Discrete synaptic events induce global oscillations in balanced neural networks"
14:30-15:00 Rainer Engelken (Columbia University, NY, United States)
Talk title:"Sparse Chaos in Cortical Circuits: Linking Single-Neuron Biophysics to Population Dynamics"
15:00-15:30 Tilo Schwalger (Technische Universität Berlin, Institut für Mathematik, Germany)
Talk title: "A low-dimensional neural-mass model for population activities capturing fluctuations, refractoriness and adaptation"

15:30-16:00 Coffee break

16:00-16:30 Giancarlo La Camera (Stony Brook University, NY, United States)
Talk title: “Prefrontal population activity during strategic behavior in context-dependent tasks”
16:30-17:00 Gorka Zamora-López (Universitat Pompeu Fabra, Barcelona, Spain)
Talk title: "Emergence and maintenance of modular hierarchy in neural networks driven by external stimuli"
17:00-17:30 Sacha van Albada (Research Center Juelich and University of Cologne, Germany)
Talk title: "Determinants of population activity in full-density spiking models of cerebral cortex"
Speakers
Alessandro TORCINI

Professor, CY Cergy Paris Universite'
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Room 4

09:00 CEST

Workshop on Methods of Information Theory in Computational Neuroscience
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Workshop website: https://kgatica.github.io/CNS2025-InfoTeory-W.io/

Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience. A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited. The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work. The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

This is the 20th iteration of this workshop at CNS -- join us to celebrate!
Speakers
Joseph T. Lizier

Associate Professor, Centre for Complex Systems, The University of Sydney
Abdullah Makkeh

Postdoc, University of Goettingen
Marilyn Gatica

Postdoctoral Research Assistant, Northeastern University London
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Room 203

10:30 CEST

Coffee break
Tuesday July 8, 2025 10:30 - 11:00 CEST
Tuesday July 8, 2025 10:30 - 11:00 CEST

10:40 CEST

12:30 CEST

Lunch break
Tuesday July 8, 2025 12:30 - 14:00 CEST
Tuesday July 8, 2025 12:30 - 14:00 CEST

14:00 CEST

Keynote #4: Maurizio Mattia
Tuesday July 8, 2025 14:00 - 15:20 CEST
Speakers
Tuesday July 8, 2025 14:00 - 15:20 CEST
Auditorium - Plenary Room

15:20 CEST

Conference photo
Tuesday July 8, 2025 15:20 - 15:30 CEST
Tuesday July 8, 2025 15:20 - 15:30 CEST
TBA

15:30 CEST

Coffee break
Tuesday July 8, 2025 15:30 - 16:00 CEST
Tuesday July 8, 2025 15:30 - 16:00 CEST

16:00 CEST

Member's meeting
Tuesday July 8, 2025 16:00 - 17:00 CEST
Tuesday July 8, 2025 16:00 - 17:00 CEST
Auditorium - Plenary Room

17:00 CEST

Poster session 3
Tuesday July 8, 2025 17:00 - 19:00 CEST
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P224: Four-compartment model of dopamine dynamics at the nigrostriatal synaptic site
Tuesday July 8, 2025 17:00 - 19:00 CEST
P224 Four-compartment model of dopamine dynamics at the nigrostriatal synaptic site

Alex G. O'Hare*1, 2, Catalina Vich1, 2, Jonathan E. Rubin3, 4, Timothy Verstynen3, 5

1Dept. de Matemàtiques i Informàtica, Universitat de les Illes Balears, Palma, Illes Balears, Spain
2Institute of Applied Computing and Community Code, Palma, Illes Balears, Spain
3Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
4Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
5Department of Psychology & Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America

*Email: alex-gwyn.o-hare@uib.cat
Introduction

The traditional model of dopamine (DA) dynamics [1] posits that the level of extrasynaptic (tonic) DA modulates the effect of the phasic burst firing that occurs in the event of a reward [2]. Tonic DA, although present at low concentrations, is thought to activate autoreceptors on the presynaptic DA neuron that modulate DA synthesis and release. Taking into account this traditional model, as well as recent findings demonstrating that tonic DA can also affect both D1 and D2 postsynaptic receptors [3], we developed a biologically realistic yet computationally efficient 4-compartment model (see Fig. 1) of DA action at the synaptic site, to elucidate the impact of DA dynamics on receptor occupancy and tonic DA levels.
Methods
DA is synthesised in the terminal of the presynaptic substantia nigra pars compacta (SNc) neuron (DAter) and released into the synaptic cleft (DAsyn) at a rate dependent on DAter and the membrane voltage of the SNc neuron. From the synaptic cleft, DA binds to D1 or D2 receptors; the bound quantity constitutes the third compartment, DAocc. DAocc affects the excitability and plasticity of the postsynaptic spiny projection neuron (SPN), which receives inputs from a cortical neuron. DA is removed from DAsyn by reuptake into DAter and via diffusion to the extrasynaptic space (DAext). DA in DAsyn regulates release via autoreceptors; DA in DAext acts on synthesis autoreceptors and is removed from the system via diffusion.
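
The mass-balance structure of Fig. 1 can be written schematically as four coupled ODEs. In the sketch below, the rate constants k and the autoreceptor terms are illustrative placeholders for the empirically constrained forms used in the actual model.

import numpy as np
from scipy.integrate import solve_ivp

def dopamine_rhs(t, y, k, firing_rate):
    DAter, DAsyn, DAocc, DAext = y
    release = k["rel"] * DAter * firing_rate / (1 + k["auto_syn"] * DAsyn)  # release autoreceptors
    synthesis = k["syn"] / (1 + k["auto_ext"] * DAext)                      # synthesis autoreceptors
    dDAter = synthesis - release + k["up"] * DAsyn                          # reuptake into terminal
    dDAsyn = release - (k["up"] + k["occ"] + k["diff"]) * DAsyn + k["unocc"] * DAocc
    dDAocc = k["occ"] * DAsyn - k["unocc"] * DAocc                          # receptor binding/unbinding
    dDAext = k["diff"] * DAsyn - k["clear"] * DAext                         # diffusion and clearance
    return [dDAter, dDAsyn, dDAocc, dDAext]

k = dict(rel=1.0, syn=0.5, up=0.8, occ=0.3, unocc=0.1,
         diff=0.2, clear=0.05, auto_syn=0.5, auto_ext=0.5)   # placeholder rates
sol = solve_ivp(dopamine_rhs, (0, 10), [1.0, 0.0, 0.0, 0.0], args=(k, 5.0))
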
Results
Preliminary symbolic analysis of the system in a supposed quasi-steady state, obtained by setting the rate of synthesis equal to the rate of removal from DAext and holding the firing rate of the presynaptic neuron constant, reveals a stable system according to the Routh-Hurwitz criterion, with either damped or no oscillations. Building on prior work [4], in which we developed an STDP model for corticostriatal plasticity, we incorporate a presynaptic SNc neuron to analyse the effect of variations in the parameters of the DA model (limited to ranges of empirical data and explored using Latin hypercube sampling) on plasticity.
Discussion
Our four-compartment model of nigrostriatal dopamine dynamics bridges the gap between purely phenomenological models, which lack biological realism, and more complex models, which incorporate a high degree of biological detail and are computationally expensive, thereby providing a solution for incorporating the effect of DA on corticostriatal plasticity in large-scale spiking neural networks. In particular, our model may be useful for simulations of dopaminergic reinforcement learning, such as in n-choice tasks, and for simulations of DA-related pathologies that require explicit consideration of postsynaptic receptor occupancy and extrasynaptic DA levels.



Figure 1. 4-compartment model of dopamine (DA) at the synaptic site. DAter: presynaptic terminal, DAsyn: synaptic cleft, DAocc: occupied postsynaptic receptors, DAext: extrasynaptic space. Pointed arrows indicate the transfer of DA from one compartment to another, with constants ki indicating the rate of transfer. Dotted arrows denote the modulatory effect of synthesis and release modulating autoreceptors.
Acknowledgements
References
1. https://doi.org/10.1016/0376-8716(94)01066-t
2. https://doi.org/10.1126/science.275.5306.1593
3. https://doi.org/10.1523/jneurosci.1951-19.2019
4. https://doi.org/10.1016/j.cnsns.2019.105048
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P225: Time-to-first-spike encoding in layered networks evokes label-specific synfire chain activity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P225 Time-to-first-spike encoding in layered networks evokes label-specific synfire chain activity

Jonas Oberste-Frielinghaus1,2, Anno C. Kurth1, Julian Göltz3,4, Laura Kriener5,4, Junji Ito*1, Mihai A. Petrovici4, Sonja Grün1,6,7


1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
4Department of Physiology, University of Bern, Bern, Switzerland
5Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
6JARA Brain Institute I (INM-10), Jülich Research Centre, Jülich, Germany
7Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany


*Email: j.ito@fz-juelich.de
Introduction

While artificial neural networks (ANNs) have achieved remarkable success in various tasks, they lack two major characteristic features of biological neural networks: spiking activity and operation in continuous time. This makes it difficult to leverage knowledge about ANNs to gain insights into the computational principles of real brains. However, training methods for spiking neural networks (SNNs) have recently been developed to create functional SNN models [1]. In this study, we analyze the activity of a multilayer feedforward SNN trained for image classification and uncover the structures in both connectivity and dynamics that underlie its functional performance.

Methods
Our network is composed of an input layer (784 neurons), 4 hidden layers (300 excitatory and 100 inhibitory neurons in each layer), and an output layer (10 neurons). We trained it with backpropagation to classify the MNIST dataset, based on time-to-first-spike coding: each neuron encodes information in the timing of its first spike; the first neuron to spike in the output layer defines the inferred input image class [1]. The MNIST input is also provided as spike timing: dark pixels spike early, lighter pixels later. Based on the connection weights after training, neurons that have strong excitatory effects on each of the output neurons are identified in each layer. Note that one neuron can have strong effects on multiple output neurons.
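
The input encoding can be sketched as a simple linear latency code (the exact mapping used in [1] may differ):

import numpy as np

def pixels_to_spike_times(image, t_min=0.0, t_max=20.0):
    """image: pixel intensities in [0, 1], with 1 = dark (assumed convention).
    Returns one first-spike time (ms) per input neuron; times at t_max can
    be treated as 'no spike' for near-white pixels."""
    intensity = np.asarray(image, float).ravel()
    return t_min + (1.0 - intensity) * (t_max - t_min)   # dark -> early spike

# The inferred class is then the argmin over the output layer's first-spike times.
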
Results
In response to a sample, the input layer generates a volley of spikes, identified as a pulse packet (PP) [2], which propagates through the hidden layers (Fig. 1). In deeper layers, spikes in a PP get more synchronized and the neurons providing spikes to the PP become more specific to the sample label. This leads to a characteristic sparse representation of the sample label in deep layers. The analysis of connection weights reveals that a correct classification is achieved by propagating spikes through a specific pathway across layers, composed of neurons with strong excitatory effects on the correct output neuron. Pathways for different output neurons become more separate in deeper layers, with less overlap of neurons between pathways.
Discussion
The revealed connectivity structure and the propagation of spikes as a PP agree with the notion of the synfire chain (SFC) [3,4]. To our knowledge, this is the first example of SFC formation by training of a functional network. In our network, multiple parallel SFCs emerge through the training for MNIST classification, representing each input label by activation of one particular SFC. Such a representation naturally leads to sparser encoding of the input label in deeper layers, and also increases the linear separability of layer-wise activity. Thus, the use of SFCs for information representation can have multiple advantages for achieving efficient computation, besides the stable transmission of information through the network.




Figure 1. Network activity in response to six different samples. Dots represent spike times of individual neurons, with colors indicating the luminance of the corresponding pixels in the sample (“input” layer), or spikes of excitatory (red) and inhibitory (blue) neurons (layers 1-4). The first neurons to spike in the “output” layer are indicated by numbers next to the spikes.
Acknowledgements
This research was funded by the European Union’s Horizon 2020 Framework programme for Research and Innovation under Specific Grant Agreements No. 785907 (HBP SGA2), No. 945539 (HBP SGA3) and No. 101147319 (EBRAINS 2.0), the NRW-network 'iBehave' (NW21-049), the Helmholtz Joint Lab SMHB, and the Manfred Stärk Foundation.

References
● Göltz et al. (2021). Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence, 3(9), 823–835. https://doi.org/10.1038/s42256-021-00388-x
● Diesmann, Gewaltig, & Aertsen (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402(6761), 529–533. https://doi.org/10.1038/990101
● Abeles (1982). Local Cortical Circuits: An Electrophysiological Study. Springer-Verlag.
● Abeles (1991). Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press.


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P226: Astrocyte modulation of neural oscillations: mechanisms underlying slow wave activity in cortical networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P226 Astrocyte modulation of neural oscillations: mechanisms underlying slow wave activity in cortical networks

Thiago Ohno Bezerra*1, Antonio C. Roque1

1Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, São Paulo, Brazil.

*Email: thiagotakechi@usp.br

Introduction
Oscillatory activity plays a pivotal role in neural networks. Astrocytes have recently been shown to modulate neural activity through the release of glutamate and ATP, the latter acting as an inhibitory neuromodulator, and have been implicated in the regulation of slow cortical oscillations, specifically the up and down states observed in these networks. However, the mechanisms by which astrocytes influence neural oscillations and shape network activity remain poorly understood.


Methods
We extended the INEXA model [1] to incorporate the adaptation of neural activity. Neurons (N = 250) and astrocytes (N = 75, 30% of neurons) are randomly distributed in a 3D volume (750 × 750 × 10 µm³). Each neuron is modeled as a stochastic unit, where its spiking probability depends on neural excitatory and inhibitory inputs, ATP-mediated inhibition from astrocytes, an adaptive variable (u), and background noise (c = 0.03). The variable u increases after each neuronal spike and decays over time. Presynaptic neuron activity enhances IP3 concentration in astrocytes, which elevates local Ca2+ levels. Astrocyte activity is modeled as a stochastic process, driven by local Ca2+ responses and the activation of neighboring astrocytes. Glutamate release from astrocytes promotes synaptic facilitation, influencing neuron-to-neuron communication. Connectivity between neurons and astrocytes is governed by a probabilistic rule based on spatial proximity.
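For illustration, here is a minimal sketch of a stochastic spiking unit with an adaptive variable u of the kind described above; only N = 250 and the noise level c = 0.03 come from the abstract, while the increment du, the time constant tau_u, and the drive and inhibition terms are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 250, 2000            # number of neurons (abstract), timesteps
c = 0.03                    # background noise level (abstract)
du, tau_u = 0.1, 50.0       # increment and decay of adaptation u (placeholders)

u = np.zeros(N)             # adaptive variable: grows with spikes, decays in time
rates = []
for t in range(T):
    drive = rng.normal(0.2, 0.1, N)   # placeholder synaptic + astrocytic drive
    atp_inhibition = 0.05             # placeholder ATP-mediated inhibition
    p_spike = np.clip(drive - u - atp_inhibition + c, 0.0, 1.0)
    spikes = rng.random(N) < p_spike
    u = u * np.exp(-1.0 / tau_u) + du * spikes
    rates.append(spikes.mean())
```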


Results
The model predicts that without astrocytes, neural networks oscillate at frequencies that vary according to the increment and decay rates of the variable u. These oscillations show no slow-wave patterns. In contrast, when astrocytes are included, the network exhibits three distinct activity modes: (1) high-frequency asynchronous spiking, (2) alternating between high-frequency spiking and silent states, and (3) regular synchronous spiking. The second mode, characterized by alternating states, is particularly reminiscent of cortical up and down states associated with slow oscillations. The specific mode of activity is influenced by the dynamics of the adaptive variable u, which modulates the frequency and pattern of oscillations. Astrocytic synaptic potentiation, ATP-mediated inhibition, and astrocyte activation duration also regulate the slow oscillation frequency.


Discussion
Our results suggest that astrocytes play an integral role in modulating the activity patterns of neural networks. Through the release of glutamate and ATP, astrocytes influence both excitatory and inhibitory processes, thereby altering network dynamics. These findings support the hypothesis that astrocytes are essential for the generation and regulation of slow oscillations in cortical networks, specifically in the context of up and down states. The modulation of these oscillations by astrocytic activity may provide a mechanism through which astrocytes influence cognitive processes associated with neural synchrony.



Acknowledgements
This work was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0). TOB is supported by a FAPESP PhD scholarship (grant 2021/12832-7, BEPE: 2024/14422-9). ACR is partially supported by a CNPq fellowship (grant 303359/2022-6).
References
[1] Lenk, K., Satuvuori, E., Lallouette, J., Ladrón-de-Guevara, A., Berry, H., & Hyttinen, J. A. (2020). A computational model of interactions between neuronal and astrocytic networks: The role of astrocytes in the stability of the neuronal firing rate. Frontiers in Computational Neuroscience, 13, 92. https://doi.org/10.3389/fncom.2019.00092
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P227: Only a matter of time: developmental heterochronicity captures network properties of the human connectome
Tuesday July 8, 2025 17:00 - 19:00 CEST
P227 Only a matter of time: developmental heterochronicity captures network properties of the human connectome

Stuart Oldham*1, Francesco Poli2, Duncan Astle2, Gareth Ball1


1 Murdoch Children’s Research Institute, Melbourne, Australia
2 Cambridge University, Cambridge, UK


*Email: stuart.oldham@mcri.edu.au


Brain network organization is shaped by a trade-off between connection costs and functional benefits [1]. Computational generative network models have found this trade-off explains many, but not all, network properties [2,3]. During gestation, brain development proceeds according to spatiotemporal patterns defined by morphogen gradients [4]. Cortical areas display heterochronicity, differential timing of key developmental events, which induces spatial patterns that persist in later life as smoothly varying gradients in cytoarchitecture, neuronal connectivity, and functional activation [4,5]. Therefore, we developed a new computational model to assess how heterochronicity may constrain the formation of cortical connectivity.
Developmental timing was modeled along a unimodal gradient, originating from one node per hemisphere (Fig. 1A). Nodes were sequentially 'activated' over model timesteps based on their geodesic distance from the origin, with the timing/heterochronicity of activation controlled by the parameter τ, and the connection probability between active nodes governed by their wiring cost η (Fig. 1B-C). The summed probabilities across timesteps were used to generate a density-matched network (Fig. 1C). The model was run for each origin, optimizing parameters to maximize model fit, measured as the degree correlation ρ with the empirical network (a group-consensus structural connectivity brain network [2]), a feature generative models struggle to capture [2,3].
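A schematic numpy sketch of such a two-parameter generative rule follows; it is not the authors' code, and all function and variable names are hypothetical. A soft activation front sweeps outward from the origin (sharpness τ), connection probabilities among active nodes decay with wiring cost (η), and the summed probabilities are thresholded to a density-matched network.

```python
import numpy as np

def heterochronous_network(dist_from_origin, D, tau, eta, density, n_steps=100):
    """Toy two-term generative rule (schematic, not the authors' code).

    dist_from_origin : geodesic distance of each node from the gradient origin
    D                : node-to-node distance matrix (wiring cost)
    tau              : heterochronicity (sharpness of the activation front)
    eta              : distance penalty on connection probability
    density          : fraction of possible edges to keep
    """
    n = len(dist_from_origin)
    summed_p = np.zeros((n, n))
    for front in np.linspace(0, dist_from_origin.max(), n_steps):
        # nodes near the advancing front are the most "active" at this timestep
        active = np.exp(-tau * np.abs(dist_from_origin - front))
        # connection probability between active nodes decays with wiring cost
        summed_p += np.outer(active, active) * np.exp(-eta * D)
    np.fill_diagonal(summed_p, 0.0)
    # keep the strongest summed probabilities to match the target density
    k = int(density * n * (n - 1) / 2)
    iu = np.triu_indices(n, 1)
    thresh = np.sort(summed_p[iu])[-k]
    return summed_p >= thresh   # symmetric boolean adjacency matrix
```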
Spatial gradients modeling heterochronicity from the frontal cortex yielded the highest degree correlations (max ρ = 0.39). These same networks with the best degree correlation also captured key empirical topological features, including clustering, connection length, and the primary connectivity gradient. However, they did not fully replicate modularity or connection overlap (Fig. 1D). Models from the best origins (ρ > 0.25) outperformed previous leading approaches [2,3] (Fig. 1E) and achieved the best fits with strong heterochronicity (τ > 0.5) and minimal distance penalties (η ≈ 0; Fig. 1F). Models using only the heterochronicity term produced similar degree correlations (Fig. 1G), suggesting it alone can drive brain-like connectivity patterns.
Here we demonstrate that constraining network connections to form along an anterior-posterior gradient is sufficient to capture topographical and topological connectomic features of empirical brain networks. These models also outperform past approaches [2,3]. The best-performing models imposed a heterochronous gradient that aligned with the rostral-caudal axis, a known major neurodevelopmental gradient [4,5], suggesting that early spatiotemporal patterning along this axis is key to shaping cortical connectivity. While our study examined single unimodal gradients, future studies could integrate multiple biologically informed gradients to better model network complexity. Our framework offers a flexible foundation for such extended work.




Figure 1. Fig. 1 (A) Geodesic distances from example origin (B) Heterochronicity/wiring-cost calculation (C) Model connection probability and network generation (D) Similarity to the empirical data on network features for each origin’s best fitting model (E) Comparison to previous models[2] (F) τ and η for each origin’s best fitting model (G) Best degree correlations for heterochronous-only models
Acknowledgements
S.O. is supported by the Brain and Behavior Research Foundation (ID: 31471). G.B. was supported by the National Health and Medical Research Council (ID: 1194497). Research was supported by the Murdoch Children’s Research Institute, the Royal Children’s Hospital, Department of Paediatrics, The University of Melbourne and the Victorian Government’s Operational Infrastructure Support Program.
References
1. https://doi.org/10.1038/nrn3214
2. https://doi.org/10.1101/2024.11.18.624192
3. https://doi.org/10.1126/sciadv.abm6127
4. https://doi.org/10.1016/j.neuron.2007.10.010
5. https://doi.org/10.1016/j.tics.2017.11.002
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P228: Multi-scale Spiking Network Model of Human Cerebral Cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P228 Multi-scale Spiking Network Model of Human Cerebral Cortex

Renan O. Shimoura*1, Jari Pronold1,2, Alexander van Meegen1,3, Mario Senden4,5, Claus C. Hilgetag6, Rembrandt Bakker1,7, Sacha J. van Albada1,3



1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Institute of Zoology, University of Cologne, Cologne, Germany
4Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
5Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Centre, Maastricht University, Maastricht, The Netherlands
6Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg, Germany
7Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands

*Email: r.shimoura@fz-juelich.de
Introduction

Data-driven models at cellular resolution have been built for various brain regions, yet few exist for the human cortex. We present a comprehensive point-neuron network model of a human cortical hemisphere integrating diverse experimental data into a unified framework bridging cellular and network scales [1]. Our approach builds on a large-scale spiking network model of macaque cortex [2,3] and investigates how resting-state activity emerges in cortical networks.

Methods
We constructed a spiking network model representing one hemisphere using the Desikan-Killiany parcellation (34 areas), with each area implemented as a 1 mm² microcircuit distinguishing the cortical layers. The model aggregates data across multiple modalities, including electron microscopy for synapse density, cytoarchitecture from the von Economo atlas [4], DTI-based connectivity [5], and local connection probabilities from the Potjans-Diesmann microcircuit [6]. Human neuron morphologies [7] inform the layer-specific inter-area connectivity. The full-density model, based on leaky integrate-and-fire neurons, comprises 3.47 million neurons with 42.8 billion synapses and was simulated using the NEST simulator on the JURECA-DC supercomputer.
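A heavily downscaled sketch of how such a layered LIF network might be assembled with NEST's Python interface is shown below; the population sizes, indegrees, and weights are arbitrary toy values (the actual model has 3.47 million neurons and is available at the repository linked in the Discussion).

```python
import nest

nest.ResetKernel()

# two toy layers, each with excitatory (E) and inhibitory (I) LIF populations
# (sizes are arbitrary; the real model has 3.47 million neurons)
layers = {}
for name, (n_exc, n_inh) in {"L23": (400, 100), "L5": (300, 75)}.items():
    layers[name] = {"E": nest.Create("iaf_psc_exp", n_exc),
                    "I": nest.Create("iaf_psc_exp", n_inh)}

# Poisson background drive to every population
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
for pops in layers.values():
    for pop in pops.values():
        nest.Connect(noise, pop, syn_spec={"weight": 40.0})

# random convergent excitation and inhibition within and between layers
for pre in layers.values():
    for post in layers.values():
        nest.Connect(pre["E"], post["E"],
                     conn_spec={"rule": "fixed_indegree", "indegree": 40},
                     syn_spec={"weight": 60.0})
        nest.Connect(pre["I"], post["E"],
                     conn_spec={"rule": "fixed_indegree", "indegree": 10},
                     syn_spec={"weight": -240.0})

rec = nest.Create("spike_recorder")
nest.Connect(layers["L23"]["E"], rec)
nest.Simulate(1000.0)   # ms
```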

Results
When local and inter-area synapses have the same strength, model simulations show asynchronous irregular activity deviating from experiments in terms of spiking activity and inter-area functional connectivity. When inter-areal connections are strengthened relative to local synapses, the model reproduces both microscopic spiking statistics from human medial frontal cortex and macroscopic resting-state fMRI correlations [8]. Analysis reveals that single-spike perturbations influence network-wide activity within 50-75 ms. The ongoing activity flows primarily from parietal through occipital and temporal to frontal areas, consistent with empirical findings during visual imagery [9].

Discussion
This open-source model integrates human data across scales to investigate cortical organization and dynamics. By preserving neuron and synapse densities, it accounts for the majority of the inputs to the modeled neurons, enhancing the self-consistency compared to downscaled models. The model allows systematic study of structure-dynamics relationships and forms a platform for investigating theories of cortical function. Future work may leverage the Julich-Brain Atlas to refine the parcellation and incorporate detailed cytoarchitectural and receptor distribution data [10]. The model code is publicly available at https://github.com/INM-6/human-multi-area-model.




Acknowledgements
This work was supported by the German Research Foundation (DFG) Priority Program "Computational Connectomics" (SPP 2041; Project 347572269), the EU Grant 945539 (HBP), EBRAINS 2.0 Project (101147319), the Joint lab SMHB, and HiRSE_PS. The use of the JURECA-DC supercomputer in Jülich was made possible through VSR computation grant JINB33. Open access publication funded by DFG Grant 491111487.
References
[1] https://doi.org/10.1093/CERCOR/BHAE409
[2] https://doi.org/10.1007/s00429-017-1554-4
[3] https://doi.org/10.1371/journal.pcbi.1006359
[4] https://doi.org/10.1159/isbn.978-3-8055-9062-4
[5] https://doi.org/10.1016/J.NEUROIMAGE.2013.05.041
[6] https://doi.org/10.1093/cercor/bhs358
[7] https://doi.org/10.1093/CERCOR/BHV188
[8] https://doi.org/10.1126/science.aba3313
[9] https://doi.org/10.1016/J.NEUROIMAGE.2014.05.081
[10] https://doi.org/10.3389/fnana.2017.00078

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P229: Exploring Electroencephalographic (EEG) Models of Brain Activity using Automated Modelling Techniques
Tuesday July 8, 2025 17:00 - 19:00 CEST
P229 Exploring Electroencephalographic (EEG) Models of Brain Activity using Automated Modelling Techniques

Nina Omejc*1,2, Sabin Roman1, Ljupčo Todorovski1,3, Sašo Džeroski1

1Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
2Jozef Stefan International Postgraduate School, Ljubljana, Slovenia
3Department of Mathematics, Faculty of Mathematics and Physics, Ljubljana, Slovenia

*Email: nina.omejc@ijs.si
Introduction

Electroencephalography (EEG) is a clinical, non-invasive, high-temporal resolution technique for measuring whole-brain activity. However, the underlying mechanisms that give rise to the observed high-level rhythmic activity remain incompletely understood. Various neural population and network models attempt to explain these dynamics [1], but, to our knowledge, they have not been systematically explored or evaluated.
Methods
To explore the space of proposed and potential models, we represent brain networks as graphs, where nodes correspond to brain sources obtained via EEG source analysis, in our case the dipole-fitted independent components (Figure 1). Each node’s dynamics are further categorized into three subdynamics: synapto-dendritic dynamics (input transformation), intrinsic dynamics, and firing response (output transformation). These subdynamics are defined by a bounded set of functions derived from the literature [1], or generated by an unbounded probabilistic context-free grammar [2]. Such a modular and unbounded specification allows for flexible and physiologically valid construction of the network.
Results
We are currently utilizing our Julia-based framework and are in the model evaluation phase. The dataset we use consists of 64-channel EEG recordings from 50 participants performing a visual flickering task, designed to induce steady-state visual evoked potentials [3]. We repeatedly sample potential EEG models using Markov Chain Monte Carlo and optimize the model parameters using the CMA-ES algorithm. By the time of the conference, we aim to determine which established and previously unexamined whole-brain activity models can reproduce the observed oscillations, and, more importantly, which can also accurately capture the harmonics of the flickering stimulation frequency, a robust and interesting feature observed in this dataset.
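As a sketch of this sample-and-optimize loop, here is a minimal Python analogue using the `cma` package (the authors use a Julia-based framework); the toy forward model and loss function are hypothetical stand-ins for the EEG network models and their spectral fit.

```python
import numpy as np
import cma  # pip install cma

FREQS = np.linspace(1, 40, 40)

def simulate_model_psd(params):
    """Toy forward model: one spectral peak whose position, width and gain
    are the free parameters (a stand-in for a neural population model)."""
    f0, width, gain = params
    return gain * np.exp(-((FREQS - f0) ** 2) / (2 * width ** 2))

def loss(params, target_psd):
    return float(np.mean((simulate_model_psd(params) - target_psd) ** 2))

target = simulate_model_psd([10.0, 2.0, 1.0])        # pretend 10 Hz alpha peak
es = cma.CMAEvolutionStrategy([5.0, 1.0, 0.5], 0.5)  # initial guess, step size
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [loss(c, target) for c in candidates])
print(es.result.xbest)                               # recovered parameters
```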
Discussion
The presence of these harmonic components is a well-documented but not yet fully understood phenomenon in EEG research [4]. By systematically exploring different model configurations, we aim to assess which types of nonlinear models and which features (for example, recurrent connectivity, nonlinear synaptic integration, parallel computations, delays) play a crucial role in shaping these spectral patterns. Exploring the set of valid models to understand these mechanisms could have broader implications for theories of whole-brain neural activity and improve our understanding of EEG measurements.



Figure 1. A data-driven framework for exploring whole-brain network EEG models.
Acknowledgements
We would like to thank the SHED equation discovery group of our department for fruitful discussions regarding our work.
References

[1] https://doi.org/10.1007/978-3-030-89439-9_13
[2] https://doi.org/10.1007/s10994-024-06522-1
[3] https://doi.org/10.1093/gigascience/giz002
[4] https://doi.org/10.1016/j.neuroimage.2012.05.054


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P230: Driver Nodes for Efficient Activity Propagation Between Clusters in Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P230 Driver Nodes for Efficient Activity Propagation Between Clusters in Spiking Neural Networks

Bulat Batuev1+, Arsenii Onuchin2,3+, Sergey Sukhov1


1Kotelnikov Institute of Radioengineering and Electronics of Russian Academy of Sciences, Moscow, Russia
2Skolkovo Institute of Science and Technology, Moscow, Russia
3Laboratory of Complex Networks, Center for Neurophysics and Neuromorphic Technologies, Moscow, Russia


+ These authors contributed equally

Email: arseniyonuchin04.09.97@gmail.com

Introduction

Synchronous neural activity is critical for brain function, yet the connectome's role in enabling synchronization remains unclear. We explore strategies to achieve widespread synchronization in spiking stochastic block model (SBM) networks with minimal control inputs. This work builds on research into neural network control [1], focusing on identifying driver nodes that influence dynamics. By evaluating centrality measures (betweenness, degree, eigenvector, closeness, harmonic, percolation), we pinpoint topological features predicting effective driver nodes. Furthermore, we analyze connectivity patterns to understand pairwise activity relationships and uncover mechanisms of network-wide coordination.

Methods
The spiking neural network consisted of 500 leaky integrate-and-fire neurons (80% excitatory, 20% inhibitory) divided into two clusters of equal size, with an intra-cluster edge probability of 0.15 and an inter-cluster probability varying from 0.06 to 0.13. To simulate background activity, all neurons received independent Poisson-distributed inputs. Within the first cluster, a subpopulation of neurons (10–20%) was designated as driver neurons and subjected to an additional external current stimulus (10 Hz, 1000 pA). Driver neurons were selected either randomly or by centrality metrics (betweenness, degree, eigenvector, closeness, harmonic, percolation) [2]. Neural dynamics were simulated for 5 seconds to achieve steady-state activity using the Brian 2 simulator [3].
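A minimal Brian2/networkx sketch of this setup follows, under simplifying assumptions: the two-cluster SBM wiring and the 10% driver fraction follow the abstract, but the neuron model, synaptic weight, and the constant extra drive standing in for the 10 Hz, 1000 pA stimulus are toy choices.

```python
import numpy as np
import networkx as nx
from brian2 import *

N, half = 500, 250
rng = np.random.default_rng(0)

# two-cluster SBM adjacency: dense within clusters, sparse between
A = rng.random((N, N)) < 0.06                      # inter-cluster links
A[:half, :half] = rng.random((half, half)) < 0.15  # cluster 1
A[half:, half:] = rng.random((half, half)) < 0.15  # cluster 2
np.fill_diagonal(A, False)

eqs = """
dv/dt = (-v + I_ext) / (10*ms) : 1
I_ext : 1
"""
G = NeuronGroup(N, eqs, threshold="v > 1", reset="v = 0", method="euler")
bg = PoissonInput(G, "v", 100, 5*Hz, weight=0.02)  # background activity
S = Synapses(G, G, on_pre="v_post += 0.05")
S.connect(i=np.nonzero(A)[0], j=np.nonzero(A)[1])

# select the top 10% of cluster-1 neurons by betweenness centrality
g1 = nx.from_numpy_array(A[:half, :half].astype(int))
bc = nx.betweenness_centrality(g1)
drivers = sorted(bc, key=bc.get, reverse=True)[: half // 10]
G.I_ext[drivers] = 1.5   # constant extra drive (toy stand-in for the stimulus)

mon = SpikeMonitor(G)
run(1*second)
```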
Results
The population activity in the non-stimulated cluster was analyzed as a function of the number of driver neurons and the inter-cluster connectivity. When driver neurons were selected using closeness and betweenness centrality metrics, spike rates in the second cluster increased approximately 10-fold compared to random selection, accompanied by synchronization with the first cluster at nearly 10 Hz. In contrast, selecting driver neurons based on degree and percolation centrality metrics resulted in only a 5-fold increase compared to random selection (Fig. 1).

Discussion
Synchronization between two weakly coupled clusters can be achieved by selectively stimulating specific neurons within the first cluster. However, it remains unclear why closeness and betweenness centrality outperform other centralities in promoting synchronization. Future research could focus on extending our method to multicluster heterogeneous systems. While the two-cluster model offers a controlled setting, expanding it could provide deeper insights into real brain connectomes. In conclusion, this study elucidates how topology and driver node selection shape neural synchronization, with potential applications in neuromodulation and brain-inspired systems.




Figure 1. The average population activity within the second cluster, calculated over a 1-second time window, is depicted for driver nodes selected based on various centrality measures (degree, betweenness, eigenvector centrality, PageRank, and percolation centrality) for the upper surface, and for nodes chosen at random for the lower surface.
Acknowledgements

This work was funded by the Russian Science Foundation (project number 24-21-00470).
References
1. Bayati, M., Valizadeh, A., Abbassian, A., & Cheng, S. (2015). Self-organization of synchronous activity propagation in neuronal networks driven by local excitation. Frontiers in Computational Neuroscience, 9, 69. https://doi.org/10.3389/fncom.2015.00069
2. Saxena, A., & Iyengar, S. (2020). Centrality measures in complex networks: A survey. arXiv preprint arXiv:2011.07190.
3. Stimberg, M., Brette, R., & Goodman, D. F. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8, e47314. https://doi.org/10.7554/eLife.47314
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P231: Emotional network modelling: whole brain simulations of fear conditioning in humans
Tuesday July 8, 2025 17:00 - 19:00 CEST
P231 Emotional network modelling: whole brain simulations of fear conditioning in humans


Dianela A Osorio-Becerra1, Andrea Fusari1, Ashika Roy2, Danilo Benozzo1, Andreas Frick2, Egidio D’Angelo1, Fulvia Palesi1, Claudia Casellato*1
1Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
2Department of Medical Sciences, Uppsala University, Uppsala, Sweden

*Email: claudia.casellato@unipv.it
Introduction

Emotion in mammals involves complex brain networks [1,2], for which it is critical to identify the specific regional connectivity and microcircuit properties. We couple in-silico whole-brain simulations performed with The Virtual Brain (TVB) [3] with experimental fear conditioning data in humans. These include MRI (DWI, resting-state and task-dependent fMRI) and fear-related behavioural measurements (skin conductance responses (SCR), a biomarker of emotional arousal [4]). This work represents a preliminary exploration of how data-driven, subject-specific models of brain dynamics could predict emotional behaviour.
Methods
Data come from 17 healthy subjects. The fMRI is acquired at TR 3 s. The mean SCR is extracted for CS+ (conditioned stimulus paired with the unconditioned stimulus, US) and CS− (neutral stimulus), both during acquisition and extinction (acq_csp, acq_csm, ext_csp, ext_csm), and for US (electric shock) during acquisition (acq_us). The SCR usually increases with the paired CS/US pattern presentation, and it decreases when CS is no longer followed by US (extinction).
In the model, a fear-TVB network was defined selecting the regions involved, with each node represented by a reduced Wong-Wang model [5] and using the subject-specific structural connectivity from DWI. Then, the fear-TVB network was optimized in terms of global and local connection parameters, by maximizing the match between the subject-specific experimental functional connectivity matrices (static and dynamic – expFC and expFCD), obtained from resting-state fMRI, and the simulated ones (simFC and simFCD). Finally, these parameters were correlated with the subject-specific SCRs.
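A minimal TVB sketch of this kind of simulation is shown below; the default 76-region demo connectome stands in for the subject-specific DWI connectivity and the 88-node fear network, and the parameter values (G, w+, noise) are placeholders rather than the optimized ones.

```python
import numpy as np
from tvb.simulator.lab import *

# the TVB demo connectome stands in for subject-specific DWI connectivity
conn = connectivity.Connectivity.from_file()   # default 76-region dataset
conn.configure()

sim = simulator.Simulator(
    model=models.ReducedWongWang(w=np.array([0.9])),      # recurrent excitation w+
    connectivity=conn,
    coupling=coupling.Linear(a=np.array([0.1])),          # global coupling G
    integrator=integrators.HeunStochastic(
        dt=0.5, noise=noise.Additive(nsig=np.array([1e-5]))),
    monitors=(monitors.Bold(period=3000.0),),             # TR = 3 s, as in the data
)
sim.configure()
(t_bold, y_bold), = sim.run(simulation_length=60_000.0)   # 60 s of simulated BOLD
# functional connectivity: correlations between regional BOLD time series
fc = np.corrcoef(y_bold[:, 0, :, 0].T)
```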
Results
The fear-TVB network was reconstructed using 88 nodes, including the amygdala, cerebellum, periaqueductal gray, and parts of the limbic system. The TVB parameters, i.e., the global coupling G and three synaptic parameters (excitatory NMDA strength JNMDA, inhibitory GABA strength Ji, recurrent excitation w+), were extracted during the optimization process (see Fig. 1). By correlating TVB parameters with SCR measures, a positive correlation between G and acq_us (ρ = 0.37) emerged. However, higher correlations were found when considering each sex separately, reinforcing the existing literature in this field.

Discussion
These findings suggest that individual differences in resting-state neural dynamics influence fear acquisition, with distinct mechanisms supporting US processing and conditioned fear discrimination. Although this is, to our knowledge, the first time a correlation between network dynamics and fear responses has been revealed, the relationship between global connectivity strength and fear responses is still weak; more data and a closer understanding of the underlying network are needed. The next step is to use the fMRI data along fear conditioning trials by defining a time-dependent, subject-specific TVB parameter space, which may correlate with the corresponding time-dependent fear responses.



Figure 1. a) Fear network. b) Anterior and posterior views, 88 nodes (frontal, prefrontal, limbic, parietal, temporal, occipital, deep ganglia, brainstem and cerebellum). c) Violin plots of fear measures and TVB parameters. d) Exp and sim FC matrices, mean across subjects; each element is the Pearson correlation coefficient. e) Exp vs sim: PCC for FC and the Kolmogorov–Smirnov distance for FCD, one point per subject.
Acknowledgements
European Union's Horizon 2020 research under the Marie Sklodowska-Curie grant agreement No. 956414 for "Cerebellum and Emotional Networks", and #NEXTGENERATIONEU, by the Ministry of University and Research, National Recovery and Resilience Plan, project MNESYS (PE0000006)-A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022).
References
[1] https://doi.org/10.3389/fnsys.2023.1185752
[2] https://doi.org/10.1146/annurev.neuro.23.1.155
[3] https://doi.org/10.3389/fninf.2013.00010
[4] https://doi.org/10.1177/1094428116681073
[5] https://doi.org/10.1523/JNEUROSCI.5068-13.2014


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P232: Centralized brain networks underlie grooming body part coordination
Tuesday July 8, 2025 17:00 - 19:00 CEST
P232 Centralized brain networks underlie grooming body part coordination

Pembe Gizem Ozdil*1,2, Clara Scherrer1, Jonathan Arreguit2, Auke Ijspeert2, Pavan Ramdya1

1Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
2Biorobotics Laboratory, Institute of Bioengineering, EPFL, Lausanne, Switzerland

*Email: pembe.ozdil@epfl.ch
Introduction

Animals must coordinate multiple body parts to perform essential tasks such as locomotion and grooming. While locomotor coordination has been extensively studied [1,2], less is known about how the nervous system synchronizes movements across distant body parts, such as the head and legs, to execute complex behaviors. Antennal grooming in Drosophila melanogaster provides a powerful model to study such coordination, as flies exhibit a rich repertoire of precisely controlled limb movements. With a compact yet fully mapped nervous system, Drosophila enables circuit-level insights into how neural networks integrate motor commands for efficient multi-limb control.

Methods

Here, we combined behavioral analyses, biomechanical modeling, and connectome-based neural circuit simulations to investigate how flies coordinate head, antennae, and forelegs during grooming. We tracked detailed movement kinematics in freely behaving flies using 3D pose estimation [3]. To understand the functional role of coordination, recorded movements were replayed in a biomechanical simulation (NeuroMechFly [4]) to measure contact forces. To test proprioceptive contributions, we performed limb amputations and head immobilizations. Lastly, we analyzed the antennal grooming network using graph-based analyses and computational neural network simulations derived from the brain connectome [5].

Results

Flies exhibit two main grooming strategies, unilateral and bilateral antennal grooming, each requiring precise coordination of head, antennae, and forelegs. Biomechanical simulations revealed that this coordination enhances grooming efficiency by avoiding obstructions and enabling forceful limb-antennal interactions. Manipulations showed proprioceptive feedback is not necessary for body-part synchronization, implying feedforward neural control. Connectome network analyses and simulations identified centralized interneurons forming recurrent excitatory and broad inhibitory circuit motifs that robustly synchronize motor modules. We further validated some model predictions through optogenetic experiments.


Discussion

We identified centralized neural circuits underlying multi-body-part coordination during antennal grooming in flies. Unlike locomotion, where coordination often depends on sensory feedback, grooming synchronization is centrally driven, likely reducing sensory processing demands. We uncovered two neural circuit motifs—recurrent excitation promoting targeted movements, and broadcast inhibition suppressing competing actions—that enable precise yet flexible coordination. This centralized circuit architecture may represent a general neural strategy conserved across behaviors and species, simplifying motor control and facilitating the evolution of complex behaviors through modular coordination.



Acknowledgements
PR acknowledges support from an SNSF Project Grant (175667) and an SNSF Eccellenza Grant (181239). JA acknowledges support from a European Research Council Synergy grant (951477). PGO acknowledges support from a Swiss Government Excellence Scholarship for Doctoral Studies and a Google PhD Fellowship.


References
[1] https://doi.org/10.1016/S0959-4388(98)80114-1
[2] https://doi.org/10.1152/jn.00658.2017
[3] https://doi.org/10.1016/j.celrep.2021.109730
[4] https://doi.org/10.1038/s41592-022-01466-7
[5] https://doi.org/10.1038/s41586-024-07558-y



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P233: Extracellular K+ hotspots regulate synaptic integration in the dendrites of pyramidal neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P233 Extracellular K+ hotspots regulate synaptic integration in the dendrites of pyramidal neurons


Malthe S. Nordentoft1, Naoya Takahashi2, Mathias S. Heltberg1, Mogens H. Jensen1, Rune N. Rasmussen3, Athanasia Papoutsi*4
1 Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
2 Interdisciplinary Institute for Neuroscience, University of Bordeaux, Bordeaux, France
3 Center for Translational Neuromedicine, University of Copenhagen, Copenhagen, Denmark
4 Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology—Hellas, Crete, Greece

*Email: papoutsi@imbb.forth.gr



Introduction
Throughout the nervous system, neuronal activity and ionic changes in the extracellular environment are bidirectionally linked. Changes in the concentration of extracellular K+ ions ([K+]o) are particularly intriguing due to their pivotal role in shaping neuronal excitability and their activity- and state-dependent fluctuations [1]. At the synaptic level, [K+]o changes arise mainly from the activation of NMDA receptors and are highly localized [2]. Despite this experimental evidence, local, activity-dependent [K+]o changes have not been considered an integral part of neuronal signaling. In this work [3], we hypothesize that [K+]o changes form “K+ hotspots” that locally regulate the active dendritic properties and shape sensory processing.
Methods
We focus on the organization of orientation-tuned synapses on dendrites of visual cortex pyramidal neurons [4], as we have previously shown that visual cortex responses are dynamically regulated by [K+]o and brain states [1]. We first analytically investigate the spatial diffusion of K+ ions to evaluate the creation of “K+ hotspots”. Following this, by treating orientation-tuned inputs to dendritic segments as statistical ensembles, we infer the expected changes in Δ[K+]o and the corresponding EK+ shifts. Finally, using biophysically realistic models of a point dendrite and a morphologically detailed neuron, we evaluate the effect of the different EK+ shifts on the dendritic spike properties and the neuronal output.
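The EK+ values in question follow from the Nernst equation; the short sketch below shows that increasing [K+]o from a 3 mM baseline to roughly 4-6 mM produces shifts spanning the 6-18 mV range reported in the Results. The baseline concentrations used here are typical textbook values, not the paper's.

```python
import numpy as np

def nernst_EK(K_out, K_in=140.0, T=310.0):
    """Potassium reversal potential (mV) from the Nernst equation.
    K_out, K_in in mM; T in kelvin. K_in = 140 mM is a typical textbook value."""
    R, F, z = 8.314, 96485.0, 1.0
    return 1e3 * (R * T) / (z * F) * np.log(K_out / K_in)

baseline = nernst_EK(3.0)                     # about -103 mV at [K+]o = 3 mM
for K_o in (3.8, 4.5, 6.0):
    print(f"[K+]o = {K_o:3.1f} mM: EK = {nernst_EK(K_o):6.1f} mV "
          f"(shift {nernst_EK(K_o) - baseline:+5.1f} mV)")
```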
Results
Our statistical approach identified the expected EK+ shifts under different extracellular space sizes, intracellular K+ concentration changes, and presented stimuli. Importantly, dendritic segments receiving similarly-tuned inputs attain substantially higher [K+]o and EK+ shifts, with the EK+ shifts lying within the 6-18 mV range. In the point dendrite model, this range of EK+ shifts broadens dendritic spikes and increases dendritic spike probability. Finally, in the morphologically detailed neuron models, we show that the local activity-dependent [K+]o increase and EK+ shifts in dendrites enhance the effectiveness of distal synaptic inputs in causing feature-tuned firing of neurons, without compromising feature selectivity.
Discussion
In this work [3] we show that dendrites receiving similarly-tuned inputs support activity-dependent, local changes in [K+]o, forming “K+ hotspots”. These hotspots depolarize EK+ and increase the reliability and duration of dendritic spikes. These effects act as a volume knob on dendritic input, promoting gain amplification of neuronal output without affecting feature selectivity. Overall, compared to long-term plasticity mechanisms, “K+ hotspots” are transient, closely follow the overall dendritic activity levels, and selectively boost integration of synaptic inputs with minimal usage of resources. Our results therefore suggest a prominent and previously overlooked role of [K+]o changes.



Acknowledgements
We thank Akihiro Matsumoto, Alessandra Lucchetti, Eva Maria Meier Carlsen, Ioannis Matthaiakakis, and Stamatios Aliprantis for discussions and comments on this work.
References

1. https://doi.org/10.1016/j.celrep.2019.06.082
2. https://doi.org/10.1016/j.celrep.2013.10.026
3. https://doi.org/10.1371/journal.pbio.3002935
4. https://doi.org/10.1038/s41467-019-13029-0


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P234: Complexity of an astrocyte-neuron network model in random and hub-driven connectivity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P234 Complexity of an astrocyte-neuron network model in random and hub-driven connectivity

Paolo Paradisi*1,2, Giulia Salzano3, Marco Cafiso1,4, Enrico Cataldo4



1ISTI-CNR-Institute of Information Science and Technologies “A. Faedo”, Pisa, Italy
2BCAM-Basque Center for Applied Mathematics, Bilbao, Basque Country, Spain
3Department of Neuroscience, International School for Advanced Studies, Trieste, Italy
4Department of Physics, University of Pisa, Pisa, Italy


Introduction

The role of glial cells, particularly astrocytes, in brain neural networks has been historically overlooked due to a neuron-centric perspective. Recent research highlights astrocytes’ involvement in synaptic modulation, memory formation, and neural synchronization, leading to their inclusion in mathematical brain models. Concurrently, network topology plays a critical role in neural function, with models such as random and scale-free networks offering insights into connectivity patterns. In this work we present an investigation of a recently published astrocyte-neuron network model [1,2], hereafter named the SGBD model, consisting of excitatory and inhibitory leaky integrate-and-fire (LIF) neuron models endowed with astrocytes, which are activated by synaptic transmission and in turn modulate it.

Methods
Firstly, a modified version of the model is proposed in order to overcome the limitations of the SGBD model by incorporating biologically plausible features that are more compatible with experimental results, in particular with regard to the spatial distribution of inhibitory neurons, astrocyte dynamics such as to trigger more realistic calcium oscillations, and neuron-astrocyte connections that are more intuitively linked to their spatial positioning. Then, the role of neuron-neuron connectivity is investigated by comparing random and hub-driven connectivities in both incoming and outgoing connections. Simulations are implemented using the Brian2 simulator, allowing for a comparative analysis of neural network activity with and without astrocytes.
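For a concrete (and simplified) contrast between the two topologies, the networkx sketch below compares an Erdős–Rényi random graph with a preferential-attachment graph as a stand-in for hub-driven connectivity; note that the abstract's hub-driven rule is not Barabási–Albert (its degree distribution is not strictly scale-free), so this is only illustrative.

```python
import networkx as nx

n = 250
g_random = nx.erdos_renyi_graph(n, p=0.04, seed=1)   # random connectivity
g_hub = nx.barabasi_albert_graph(n, m=3, seed=1)     # hub-driven stand-in

# hub-driven graphs concentrate connectivity on a few high-degree nodes
# while using far fewer edges overall
for label, g in [("random", g_random), ("hub-driven", g_hub)]:
    degrees = [d for _, d in g.degree()]
    print(f"{label:11s} edges={g.number_of_edges():5d} max degree={max(degrees)}")
```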




Results
The proposed modifications lead to a more biologically realistic representation, influencing firing rates and inter-spike interval distributions. Comparisons between random and hub-driven connectivity highlight differences in network efficiency; in particular, firing activity is much larger for hub-driven connectivity even though the number of links is much lower than in the random connectivity. Temporal complexity of avalanches is investigated through intermittency-driven complexity tools [3,4], and significant differences are found when comparing both random vs. hub-driven and astrocyte vs. no-astrocyte configurations.

Discussion
This study reinforces the importance of astrocytes in neural network modeling and demonstrates how connectivity patterns impact temporal complexity of firing patterns. Hub-driven degree distribution is not strictly scale-free, i.e., does not display power-law decay, but, despite this, hub-driven topology triggers the emergence of power-law behavior in the inter-spike time distributions that does not emerge in the random connectivity. Similar findings are seen in the temporal complexity of neural avalanches, where different regimes of power-law scaling behavior are found.







Acknowledgements
This work was supported by the Next-Generation-EU programme under the funding scheme PNRR-PE-AI (M4C2, investment 1.3, line on AI), FAIR “Future Artificial Intelligence Research”, grant id PE00000013, Spoke-8: Pervasive AI.

References
[1] M. Stimberg et al. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8, e47314. https://doi.org/10.7554/eLife.47314
[2] M. Stimberg et al. (2019). Modeling Neuron–Glia Interactions with the Brian 2 Simulator. Springer, Cham, 471–505. https://doi.org/10.1007/978-3-030-00817-8_18
[3] P. Paradisi, P. Allegrini (2017). Intermittency-driven complexity in signal processing. Springer, Cham, 161–195. https://doi.org/10.1007/978-3-319-58709-7_6
[4] P. Paradisi et al. (2015). The emergence of self-organization in complex systems - Preface. Chaos, Solitons & Fractals, 81b, 407–411. https://doi.org/10.1016/j.chaos.2015.09.017

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P235: Beyond the Response: How Post-Response EEG Signals Improve Lie Detection
Tuesday July 8, 2025 17:00 - 19:00 CEST
P235 Beyond the Response: How Post-Response EEG Signals Improve Lie Detection

Hanbeot Park¹, Hoon-hee Kim*²

¹ Department of Data Engineering, Pukyong National University, Busan, Korea
²Department of Computer Engineering and Artificial Intelligence, Pukyong National University, Busan, Korea


*Email: h2kim@pknu.ac.kr





Introduction

In modern society, lies, intentional or not, are widespread and impose cognitive burdens and neurophysiological changes. Lying produces psychological tension, extra cognitive processing, and emotional strain, which are reflected in distinct neural activity patterns. While earlier lie detection studies focused on EEG signals recorded during the response itself, recent research suggests that post-response activity, capturing further evaluation and lingering tension, provides critical information for distinguishing deception from truth [1]. EEG from 12 subjects was recorded during responses and for 15 seconds post-response. Extracted features were classified using a sliding-window machine learning approach, with post-response features enhancing classification performance.


Methods
Using a uniform 64-channel EEG system, this study investigated deception by recording EEG from 12 subjects who answered six questions under lie or truth conditions. Data were recorded during the response period and for 15 seconds post-response. Preprocessing steps included bandpass filtering, notch filtering, artifact removal, average referencing, and downsampling. To capture both local and long-term patterns, a multi-layer model (Fig. 1) was built by combining the SSM-based Mamba [2] with the MoE [3] technique. Statistical and neural features were extracted. EEG data were segmented into 0.5-second windows with a 0.025-second overlap, and question-level cross-validation identified the most informative time interval for lie detection.
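A minimal sketch of the sliding-window feature extraction (skewness, kurtosis, zero crossings) follows; sample entropy is omitted, the sampling rate is an assumed 250 Hz, and the 0.025 s figure is read here as the hop between successive 0.5 s windows.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def window_features(x, fs=250, win_s=0.5, hop_s=0.025):
    """Slide a 0.5 s window over one EEG channel and compute simple
    per-window statistics (skewness, kurtosis, zero crossings)."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    feats = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        zero_cross = int(np.sum(np.diff(np.signbit(seg).astype(int)) != 0))
        feats.append([skew(seg), kurtosis(seg), zero_cross])
    return np.array(feats)            # shape: (n_windows, 3)

x = np.random.randn(15 * 250)         # 15 s of surrogate post-response EEG
print(window_features(x).shape)
```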

Results
The classification model evaluation confirmed that EEG features from various time intervals significantly differentiate lies from truth, as shown by question-level cross-validation. Features from the post-response interval significantly outperformed those from the pre-response interval (P < 0.005), with the effective features achieving a performance improvement of 0.150 ± 0.007. Moreover, intervals covering the entire post-response period yielded the best results. Notably, skewness, kurtosis, zero crossings, and sample entropy effectively capture the non-linear, dynamic EEG changes associated with additional cognitive processing after answering, underscoring their potential as key neurophysiological indicators for lie detection.

Discussion
Using question-level CV, this study confirmed that several statistical and neurophysiological EEG features from the post-response interval significantly enhanced lie detection performance compared to those from the pre-response interval (P < 0.005). These findings suggest that subjects sustain tension and engage in extra cognitive processing after responding, producing distinct neural patterns of deception. Although the small sample size and use of question-level CV may limit generalizability, post-response EEG data provided more stable and reliable neural patterns. Future studies should use subject-level CV and further explore the optimal duration of the post-response interval.




Figure 1. Overall Structure of Lie Detection. This architecture employs data processing and feature extraction, followed by a multi-layer model that leverages Mamba and MoE.
Acknowledgements
This study was supported by the National Police Agency and the Ministry of Science, ICT & Future Planning (2024-SCPO-B-0130), the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00242528), and the National Program for Excellence in SW, supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation) in 2025 (2024-0-00018).
References
[1] Gao, J., et al. (2022). Brain fingerprinting and lie detection: A study of dynamic functional connectivity patterns of deception using EEG phase synchrony analysis. IEEE J Biomed Health Inform, 26(2), 600–613. https://doi.org/10.1109/JBHI.2021.3095415
[2] Gu, A., & Dao, T. (2023). Mamba: Linear-time sequence modeling with selective state spaces. arXiv. http://arxiv.org/abs/2312.00752
[3] Shazeer, N., et al. (2017). Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv. http://arxiv.org/abs/1701.06538
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P236: Arousal-driven parametric fluctuations augment computational models of dynamic functional connectivity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P236 Arousal-driven parametric fluctuations augment computational models of dynamic functional connectivity

Anagh Pathak1*, Demian Battaglia1

1Laboratoire de Neurosciences Cognitives et Adaptatives, University of Strasbourg, France

*Email: a.pathak@unistra.fr


Introduction

Functional Connectivity (FC) quantifies statistical dependencies between brain regions but traditionally assumes stationarity. Dynamic Functional Connectivity (DFC) captures temporal fluctuations, offering insights into cognition and brain disorders [1]. However, DFC’s interpretation is debated, with concerns about neural vs. non-neural origins [2]. Arousal fluctuations, driven by neuromodulation, likely shape DFC. This study extends whole-brain models by incorporating time-varying neuromodulatory inputs, improving the replication of empirical DFC patterns. Findings suggest arousal plays a crucial role in DFC dynamics, refining our understanding of brain network organization.
Methods
The study analyzes resting-state fMRI data from 100 individuals in the Human Connectome Project [3]. Whole-brain models were built using structural connectivity data, employing two autonomous models: an oscillatory Stuart-Landau model [4] and a multistable Wong-Wang model [5]. A time-dependent modification, modeled as an Ornstein-Uhlenbeck process (tMFM), was introduced in the global excitability term. Dynamic functional connectivity (DFC), measured using a sliding-window approach, and DFC speeds served as the model-fitting targets. A genetic algorithm optimized model parameters by fitting simulated data to empirical observations, using statistical metrics (AIC/BIC) to compare model performance.
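For concreteness, here is a minimal Euler-Maruyama sketch of an Ornstein-Uhlenbeck process like the one modulating global excitability in the tMFM; all parameter values are schematic.

```python
import numpy as np

def ornstein_uhlenbeck(n_steps, dt=0.1, tau=10.0, mu=0.0, sigma=0.02, seed=0):
    """Euler-Maruyama simulation of an OU process, of the kind used for the
    time-varying global excitability term in the tMFM (schematic values)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = mu
    for t in range(1, n_steps):
        x[t] = (x[t-1] + (mu - x[t-1]) * dt / tau
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

excitability_fluct = ornstein_uhlenbeck(10_000)  # slow, arousal-like fluctuation
```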
Results
Dynamic Functional Connectivity (DFC) was analyzed in resting-state fMRI using a sliding window approach, revealing two distinct phenotypes: drift and pulsatile. The drift phenotype showed a gradual slowing of dynamics, while the pulsatile phenotype exhibited brief, well-defined epochs of slow events. Two modeling approaches were explored: the eMFM (noise-driven bistability) and the MOM model (metastable oscillatory dynamics). Both generated transient DFC but failed to fully capture empirical patterns. Introducing arousal-linked modulations in excitability (tMFM) significantly improved model fit, with linear drift capturing drift phenotypes and mean-reverting dynamics modeling pulsatile phenotypes.

Discussion
This study explores how incorporating time-varying parameters, specifically arousal-linked fluctuations, improves dynamic functional connectivity (DFC) modeling. Traditional models assume time-invariant dynamics, but evidence suggests cortical excitability varies with arousal. By integrating stochastic arousal terms into the eMFM framework (tMFM), we show that DFC is better captured as a time-dependent process. Compared to the oscillatory MOM model, tMFM more accurately reproduces empirical DFC patterns, though future work could extend MOM to include neuromodulatory influences. Additionally, linking DFC with pupillometry—an arousal proxy—could further refine models, offering deeper insights into neuromodulation, brain states, and cognition.



Acknowledgements
The authors acknowledge support from PEPR BHT, Fondation Vaincre Alzheimers, CNRS and the University of Strasbourg
References
1. https://doi.org/10.1016/j.neuroimage.2013.05.079
2. https://doi.org/10.1162/imag_a_00366
3. https://doi.org/10.1016/j.neuroimage.2016.05.062
4. https://doi.org/10.1038/s42005-022-00950-y
5. https://doi.org/10.1016/j.neuroimage.2014.11.001









Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P237: Modelling of ensemble of signals in single axons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P237 Modelling of ensemble of signals in single axons

Tanel Peets*1, Kert Tamm1, Jüri Engelbrecht1,2


1Department of Cybernetics, Tallinn University of Technology, Tallinn, Estonia
2Estonian Academy of Sciences, Tallinn, Estonia


*Email: tanel.peets@taltech.ee

Introduction
Since Hodgkin and Huxley’s classical works, it has become clear that nerve function is a richer phenomenon than just electrical action potentials (AP). Experimental observations demonstrate that electrical signals in nerve fibres are accompanied by mechanical and thermal effects [1,2,3]. These include the pressure wave (PW) in the axoplasm, the longitudinal wave (LW) in the biomembrane, the transverse displacement (TW) of the biomembrane, and temperature changes (θ). The whole nerve signal is, therefore, an ensemble of primary waves accompanied by secondary components. The primary components (AP, LW, PW) are characterised by corresponding velocities, while the secondary components (TW, θ) are derived from the primary components and have no independent velocities of their own.
Methods
We present a coupled mathematical model [2] which unites the governing equations for the action potential, the pressure wave in the axoplasm and the longitudinal and the transverse waves in the surrounding biomembrane and corresponding temperature change into one system of equations. The electrical AP is the carrier of information and triggers all other processes. The main attention is on modelling effects accompanying the AP, therefore the AP itself is modelled by the simple FitzHugh-Nagumo model. Coupling effects are modelled by contact forces. The system of nonlinear partial differential equations is solved numerically making use of the pseudospectral method.
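The core operation of the pseudospectral method is differentiation in Fourier space; a minimal numpy sketch for a periodic domain follows (illustrative only, not the authors' solver).

```python
import numpy as np

def spectral_derivative(u, L, order=1):
    """Differentiate a periodic field u(x) on a domain of length L via FFT,
    the core operation of a pseudospectral PDE solver."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(u)))

x = np.linspace(0, 2*np.pi, 128, endpoint=False)
u = np.sin(3*x)
du = spectral_derivative(u, L=2*np.pi)            # should be ~3*cos(3*x)
print(np.max(np.abs(du - 3*np.cos(3*x))))         # near machine precision
```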
Results
As a proof of concept, a simple dimensionless model based on the description of the physical effects and involving all the components of the signal is presented. The results obtained by numerical simulation match qualitatively well with experimentally measured ones.
Discussion

The model described in this contribution is an attempt to couple all the measurable effects of signal propagation in nerves (axons) into one system. The attention is not on the detailed description of the AP but on the possible accompanying mechanical and thermal effects and their coupling with each other. The governing equations for the elements of the ensemble stem from the laws of physics and form a consistent system. This is an interdisciplinary approach at the interface of physiology, physics, and mathematics [2].



Acknowledgements
This research was supported by the Estonian Research Council (PRG 1227). Jüri Engelbrecht acknowledges the support from the Estonian Academy of Sciences.
References
[1] https://doi.org/10.1016/S0006-3495(89)82902-9
[2] https://doi.org/10.1007/978-3-030-75039-8
[3] https://doi.org/10.1073/pnas.192003911
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P238: Identifying cortical learning algorithms using brain-machine interfaces
Tuesday July 8, 2025 17:00 - 19:00 CEST
P238 Identifying cortical learning algorithms using brain-machine interfaces

Sofia Pereira da Silva1,2, Denis Alevi1, Friedrich Schuessler*1,3, Henning Sprekeler*1,2,3


1 Modelling of Cognitive Processes, Technische Universität Berlin, Berlin, Germany
2 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
3 Science of Intelligence, Technische Universität Berlin, Berlin, Germany

Email: sofia@bccn-berlin.de
Introduction

By causally mapping neural activity to behavior [1], brain–machine interfaces (BMI) offer a means to study the dynamics of sensorimotor learning. Here, we investigate the neural learning algorithm monkeys use to adapt to a changed output mapping in a center-out reaching task [2]. We exploit that the mapping from neural space (ca. 100 dimensions) to the 2D cursor position is a credit assignment problem [3] that is underconstrained, because changes along a large number of output-null dimensions do not influence the behavioral output. We hypothesized that different, but equally performing learning algorithms can be distinguished by the changes they generate in output-null dimensions.
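The output-null/output-potent split can be made concrete with a little linear algebra: given a decoder W mapping ~100-dimensional neural activity to the 2D cursor, any activity change decomposes into a rank-2 output-potent part and a 98-dimensional output-null part that leaves the cursor untouched. A minimal numpy sketch (random decoder, hypothetical dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_out = 100, 2
W = rng.standard_normal((n_out, n_neurons))   # BMI decoder: neural -> 2D cursor

# projector onto the output-potent subspace spanned by the decoder rows
P_potent = W.T @ np.linalg.pinv(W.T)          # (n_neurons, n_neurons), rank 2
P_null = np.eye(n_neurons) - P_potent         # remaining 98 output-null dims

delta = rng.standard_normal(n_neurons)        # a learning-induced activity change
delta_potent, delta_null = P_potent @ delta, P_null @ delta

# only the potent part moves the cursor; null changes are behaviorally silent
assert np.allclose(W @ delta_null, 0)
print(np.linalg.norm(delta_potent), np.linalg.norm(delta_null))
```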

Methods
We combine computational modeling and data analysis to study the neural algorithms underlying learning in the BMI center-out task. We implement networks with three different learning rules (gradient descent, model-based feedback alignment, and reinforcement learning) and three distinct learning strategies (direct, re-aiming [4], remodeling [5]) in feedforward and recurrent architectures. The models' initial conditions are constrained using publicly available data from BMI experiments [6,7,8]. We train the models in cursor space and use linear regression to compare the resulting changes in neural space to the data.


Results
We first verify that all implemented algorithms can learn the task in cursor space. In terms of neural activity, we find that various combinations of rules and architectures lead to changes in different low–dimensional subspaces. For instance, re-aiming is, by definition, constrained to a lower-dimensional subspace, so the neural activity changes across algorithms within this strategy are more similar than those in other strategies. Comparing the changes in neural activity and their subspaces with available data from BMI experiments points to learning as a combination of different algorithms. However, not all variance is explained by the algorithms, indicating additional changes outside the modeled subspaces.

Discussion
Bridging BMI experiments and population dynamics analyses creates a framework to study how learning unfolds in the brain. Our results suggest that monkeys employ a combination of previously suggested strategies to learn BMI tasks, involving both model-based and model-free learning. Future work should explore models with recurrent architectures further to better capture biological dynamics. Moreover, applying methods that describe the learning manifolds and trial-to-trial variability could offer interesting insights for comparing the models and data. Finally, comparing our findings with longitudinal datasets that monitor the learning process over time would be valuable for understanding how the learning dynamics progress.





Acknowledgements

References
1.https://doi.org/10.1016/j.conb.2015.12.005
2.https://proceedings.neurips.cc/paper_files/paper/2022/hash/a6d94c38506f16fb50894a5b555f2c9a-Abstract-Conference.html
3.https://doi.org/10.1371/journal.pcbi.1008621
4.https://doi.org/10.1101/2024.04.18.589952
5.https://doi.org/10.7554/eLife.10015
6.https://doi.org/10.1038/s41593-018-0095-3
7.https://doi.org/10.1038/s41593-021-00822-8
8.https://doi.org/10.7554/eLife.36774
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P239: Striatal endocannabinoid long-term potentiation mediates one-shot learning
Tuesday July 8, 2025 17:00 - 19:00 CEST
P239 Striatal endocannabinoid long-term potentiation mediates one-shot learning

Charlotte PIETTE1, Arnaud HUBERT2,3, Sylvie PEREZ1, Hugues BERRY2,3, Jonathan TOUBOUL4#, Laurent VENANCE1#


1Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology, Collège de France, CNRS, INSERM, Université PSL, 75005 Paris, France
2INRIA, Villeurbanne, France
3University of Lyon, LIRIS UMR5205, Villeurbanne, France
4Brandeis University, MA Waltham, USA

#: co-senior authors


Correspondence: laurent.venance@college-de-france.fr, jtouboul@brandeis.edu



Introduction

One-shot learning, the acquisition of a long-term memory after a unique and brief experience, is a crucial mechanism for developing adaptive responses, yet the behavioral and neuronal mechanisms underlying it remain elusive (see for review: Piette et al., 2020). Here, we aimed at elucidating how changes in cortico-striatal dynamics contribute to one-shot learning. Considering that a brief exposure to a stimulus involves only a few spikes, and based on our earlier work uncovering a new form of endocannabinoid-dependent synaptic potentiation (eCB-LTP) induced by a very low number of temporally coupled cortical and striatal spikes (Cui et al., 2015 & 2016; Xu et al., 2018), we hypothesize that the endocannabinoid system could underlie striatal one-shot learning.




Methods

We first developed a one-shot learning test in which mice learn to avoid contact with an adhesive tape after a single exposure. We then used in vivo and ex vivo electrophysiological recordings in the striatum of behaving mice to probe cortico-striatal plasticity and the specific contribution of the endocannabinoid system. In addition, based on Neuropixels recordings of cortical and striatal neurons in vivo, we developed a mathematical model to test the induction of eCB-LTP. Finally, we tested the performance of transgenic mouse strains in which eCB-LTP is altered, and of mice in which local striatal infusion of drugs prevents either NMDA- or eCB-mediated plasticity.

Results
The “sticky tape avoidance test” proved an efficient one-shot learning test: following a single, short (< 20 seconds) uncomfortable contact with an adhesive tape, mice avoided further contact. We found that a cortico-striatal long-term synaptic potentiation emerged 24 h after short contacts with the tape. Furthermore, the detailed computational model of the cortico-striatal synapse predicted an increased occurrence of eCB-LTP induction events during contact. Indeed, ex vivo whole-cell patch-clamp recordings revealed an occlusion of eCB-LTP in mice shortly exposed to the sticky tape. In addition, we showed that eCB-LTP knock-out mice and AM251-infused mice exhibited impaired one-shot learning, while no significant difference was observed between D-AP5- and saline-infused mice.


Discussion
These multiple approaches demonstrate that eCBs underlie one-shot learning. Overall, these findings revisit the recently challenged view that the dorsolateral striatum is involved mostly in habit formation. For the first time, they outline the temporal and activity-dependent boundaries delineating the expression of a synaptic plasticity pathway within a learning paradigm. Such insights into the nature and roles of eCB-based plasticity will also offer keys to interpreting the wide array of functions of the eCB system.






Acknowledgements
We thank S. R. Datta and the Venance lab members for helpful suggestions and critical comments on the manuscript, Camille Chataing and Emma Idzikowkski for their help with the behavioral experiments at the one-month retrieval interval, and Yves Dupraz (CIRB micromechanics workshop) for building the arenas, cross-maze and electrophysiology micromechanics.
References
1. Piette, C., Touboul, J., & Venance, L. (2020). Engrams of fast learning. Front. Cell. Neurosci., 14. 10.3389/fncel.2020.575915
2. Cui, Y., et al. (2015). Endocannabinoids mediate bidirectional striatal spike-timing-dependent plasticity. J. Physiol., 593, 2833–2849. 10.1113/JP270324
3. Cui, Y., et al. (2016). Endocannabinoid dynamics gate spike-timing dependent depression and potentiation. eLife, 5:e13185. 10.7554/eLife.13185
4. Xu, H., et al. (2018). Dopamine-endocannabinoid interactions mediate spike-timing-dependent potentiation in the striatum. Nat. Commun., 9:4118. 10.1038/s41467-018-06409-5

Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P240: Deep brain stimulation restores information processing in parkinsonian cortical networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P240 Deep brain stimulation restores information processing in parkinsonian cortical networks

Charlotte Piette1,2, Sophie Ng Wing Tin3,4, Astrid De Liège5, Coralie Bloch-Queyrat6, Bertrand Degos1,5#, Laurent Venance1#, Jonathan Touboul2#

1Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology, Collège de France, CNRS, INSERM, PSL University, 75005 Paris, France
2Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, MA Waltham, USA
3Service de Physiologie, Explorations Fonctionnelles et Médecine du Sport, Assistance Publique-Hôpitaux de Paris (AP-HP), Avicenne University Hospital, Sorbonne Paris Nord University, 93009 Bobigny, France
4Inserm UMR 1272, Sorbonne Paris Nord University, 93009 Bobigny, France
5Department of Neurology, Avicenne University Hospital, Sorbonne Paris Nord University, 93009 Bobigny, France
6Department of Clinical Research, Avicenne University Hospital, Assistance Publique-Hôpitaux de Paris (AP-HP), 93009, Bobigny, France


Corresponding authors: jtouboul@brandeis.edu ; charlotte_piette@hms.harvard.edu
Introduction

Parkinson’s disease (PD) is characterized by alterations of neural activity and information processing in the basal ganglia and cerebral cortex, including changes in excitability (Lindenbach and Bishop, 2013; Valverde et al., 2020) and abnormal synchronization (Goldberg et al., 2002) in the motor cortex of PD patients and PD animal models. Deep Brain Stimulation (DBS) provides an effective symptomatic treatment in PD, but its mechanisms of action, enabling the restoration of efficient information transmission through cortico-basal ganglia circuits, remain elusive. Here, we developed a computational framework to test the impact of DBS on cortical network dynamics and information encoding, depending on the network’s initial levels of excitability and synchronization.


Methods
We extended a computational model initially developed in our previous work (Valverde et al., 2020) to analyze the responses of a spectrum of pathological cortical networks, characterized by their level of activity and synchronization, to various input patterns. This way, we could compare their capacity to encode and transmit information, before and after DBS. To further test the hypothesis that DBS positively impacts cortical information transmission in the clinic, we investigated whether PD treatment could improve the ability to predict movement from electroencephalograms recorded in human parkinsonian patients (collected in the Neurology Department of Avicenne Hospital, Bobigny).
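To make the decoding analysis concrete, the following is a minimal sketch of cross-validated movement classification from EEG-derived features, with scikit-learn standing in for the authors' pipeline; the feature matrix, labels, and classifier choice are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# placeholder design matrix: trials x EEG features (e.g., band power per channel)
X = rng.standard_normal((120, 32))
y = rng.integers(0, 2, 120)              # movement identity label per trial

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(accuracy.mean())                        # compare across DBS-on/off conditions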



Results
We observed that DBS efficiently reduces the firing rate in a large spectrum of parkinsonian networks, and in doing so can decrease abnormal synchronization levels. In addition, DBS-mediated improvements in information processing were most pronounced in synchronized regimes. Interestingly, DBS efficiency was modulated by the configuration of the cortical circuit, such that optimal DBS parameters varied depending on the pathological cortical activity and connectivity profile. We further validated our hypothesis in the clinic and found that the accuracy of decoding movement identity from cortical dynamics was worse when DBS was turned off, and correlated with the extent of drug treatment.



Discussion
Overall, this work highlights how DBS improves information encoding by resetting cortical networks into highly responsive states. Cortical networks therefore stand as a privileged target for alternative therapies and adaptive DBS. Our final experiments on human electrophysiology open new perspectives for adaptively tuning DBS parameters, based on clinically accessible measures of cortical information processing capacity.





Acknowledgements
We thank J.E. Rubin, P. Miller, and the members of the LV and JT laboratories for their helpful suggestions and critical comments. We thank the Service de Physiologie, Explorations Fonctionnelles et Médecine du Sport, Avicenne University Hospital, and the Clinical Research Unit of Avicenne University Hospital, for making the EEG recordings possible.
References
1. Lindenbach, D., & Bishop, C. (2013). Critical involvement of the motor cortex in the pathophysiology and treatment of Parkinson’s disease. Neurosci. & Biobehavioral Rev., 37(10), 2737–2750.
2. Valverde, S., et al. (2020). Deep brain stimulation-guided optogenetic rescue of parkinsonian symptoms. Nat. Comm., 11(1), 2388.
3. Goldberg, J. A., et al. (2002). Enhanced synchrony among primary motor cortex neurons in the MPTP primate model of Parkinson’s disease. J. Neurosci., 22(11), 4639–4653.
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P241: Parameter Estimation in Differentiable Whole Brain Networks: Methodological Explorations and Practical Limitations
Tuesday July 8, 2025 17:00 - 19:00 CEST
P241 Parameter Estimation in Differentiable Whole Brain Networks: Methodological Explorations and Practical Limitations

Marius Pille* ¹ ², Emilius Richter¹ ², Leon Martin¹ ², Dionysios Perdikis¹ ², Michael Schirner¹ ² ³ ⁴ ⁵, Petra Ritter¹ ² ³ ⁴ ⁵

¹ Berlin Institute of Health (BIH) at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
² Department of Neurology with Experimental Neurology, Charité, Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Charitéplatz 1, 10117, Berlin, Germany
³ Bernstein Focus State Dependencies of Learning and Bernstein Center for Computational Neuroscience, 10115, Berlin, Germany
⁴ Einstein Center for Neuroscience Berlin, Charitéplatz 1, 10117, Berlin, Germany
⁵ Einstein Center Digital Future, Wilhelmstraße 67, 10117, Berlin, Germany

*Email: marius.pille@bih-charite.de
Introduction

Connectome-based brain network modelling, facilitated by platforms like The Virtual Brain (TVB), has significantly advanced computational neuroscience by providing a framework to decipher the intricate dynamics of the brain. However, existing techniques for inferring physiological parameters from neuroimaging data, such as functional magnetic resonance imaging, magnetoencephalography and electroencephalography, are often constrained by computational costs, developer effort, and limited available data, blocking translation [1].

Methods

Differentiable models [2] address these limitations by enabling the application of state-of-the-art parameter estimation techniques from machine learning, particularly the family of stochastic gradient descent optimizers. We reformulated brain network models using highly optimized differentiable libraries, creating generalized, composable building blocks for complex modeling problems. This approach was tested across different types of neural mass models and various neuroimaging data types, to demonstrate its advantages and limitations.
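As an illustration of this approach, here is a minimal sketch (under stated assumptions, not the authors' implementation) of gradient-based parameter estimation in a differentiable two-node rate model, using JAX as the automatic-differentiation backend; the toy model, loss, and learning rate are all illustrative.

import jax
import jax.numpy as jnp

def simulate(g, x0, n_steps=200, dt=0.01):
    # Euler integration of a toy rate model: dx/dt = -x + g * W @ tanh(x)
    W = jnp.array([[0.0, 1.0], [1.0, 0.0]])   # fixed structural connectivity
    def step(x, _):
        x_next = x + dt * (-x + g * W @ jnp.tanh(x))
        return x_next, x_next
    _, traj = jax.lax.scan(step, x0, None, length=n_steps)
    return traj

def loss(g, target, x0):
    # mean squared error between simulated and "empirical" trajectories
    return jnp.mean((simulate(g, x0) - target) ** 2)

x0 = jnp.array([0.5, -0.3])
target = simulate(1.3, x0)       # synthetic data with ground-truth coupling 1.3
g = 0.2                          # initial guess for the global coupling
grad_loss = jax.jit(jax.grad(loss))
for _ in range(1000):            # plain gradient descent (an SGD-family optimizer)
    g = g - 0.1 * grad_loss(g, target, x0)
print(g)                         # should approach the ground-truth coupling

Because the whole simulation is differentiable, the same pattern extends to many parameters at once and parallelizes naturally across devices such as GPUs, which is the kind of scaling the abstract describes.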

Results

Our differentiable framework demonstrates performance improvements of one to two orders of magnitude compared to classical TVB implementations, with the added benefit of easy parallelization across devices like GPUs. By leveraging a computational knowledge base for brain simulation [3], our approach preserves flexibility while accommodating diverse neural mass models. We established documented workflows for the most common modeling problems, building from low to high complexity, to enhance accessibility. Limitations of differentiable models, where the proximity to bifurcation points can lead to unstable gradients, are explored and potential solutions are proposed, drawing from the field of classical neural networks [4].

Discussion

This work aims to contribute to the translation of brain network models from foundational research to clinical applications by addressing existing roadblocks [1]. By creating reusable, composable components rather than specific solutions, we provide a versatile framework that can adapt to diverse research questions. The significant performance improvements enable more complex hypotheses to be tested and potentially bring computational neuroscience tools closer to practical clinical implementation.





Acknowledgements
I would like to express my sincere gratitude to my supervisors for their continuous feedback and valuable advice on this work. Special thanks to Petra Ritter for her guidance and for providing all the necessary resources that made this research possible.
References
[1] Fekonja, L. S. et al. (2025). Translational network neuroscience: Nine roadblocks and possible solutions. Network Neuroscience, 1–19. doi.org/10.1162/netn_a_00435
[2] Sapienza, F. et al. (2024). Differentiable Programming for Differential Equations: A Review. arXiv. arxiv.org/abs/2406.09699
[3] Martin, L. et al. (in preparation). The Virtual Brain Ontology: A computational knowledge space generating reproducible models of brain network dynamics.
[4] Pascanu, R. et al. (2013). On the difficulty of training Recurrent Neural Networks. arXiv. doi.org/10.48550/arXiv.1211.5063
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P242: Cytoelectric coupling: How electric fields tune Hebb’s cell assemblies
Tuesday July 8, 2025 17:00 - 19:00 CEST
P242 Cytoelectric coupling: How electric fields tune Hebb’s cell assemblies

Dimitris A. Pinotsis1,2, Earl K. Miller2


1Department of Psychology, City St George's —University of London, London EC1V 0HB, United Kingdom
2 The Picower Institute for Learning & Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA


*Email: pinotsis@mit.edu

Introduction

Hebb introduced cell assemblies in his seminal work about 70 years ago. Today, cell assemblies are thought to describe groups of neurons coactivated when a certain memory, thought or percept is stored or processed. Here, we consider electric fields generated by cell assemblies.

Methods
We analyzed local field potentials (LFPs) recorded during a working memory task. These were obtained using high-resolution, multi-electrode arrays that capture details of neural activity at the microscopic level. During the task, the animals were shown a dot in one of six positions on the edge of a screen, which would then go blank. After the delay period, the animals saccaded to the position that had just been marked. Using deep neural networks and biophysical modeling, we obtained the latent space associated with each memory. This allowed us to reconstruct the effective connectivity between different neuronal populations within the patch. Using a dipole model from electromagnetism, we predicted the electric field.
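As a rough illustration of the dipole forward step mentioned above, here is a minimal sketch of the potential of a current dipole in an infinite homogeneous medium; the conductivity, positions, and dipole moment are placeholder assumptions, not the authors' model.

import numpy as np

def dipole_potential(r_obs, r_dip, p, sigma=0.3):
    # potential (V) of a current dipole p (A*m) at r_dip, observed at r_obs,
    # in an infinite homogeneous medium of conductivity sigma (S/m)
    d = r_obs - r_dip
    dist = np.linalg.norm(d, axis=-1)
    return (d @ p) / (4 * np.pi * sigma * dist ** 3)

# example: sample the field of one dipole at three electrode positions (m)
electrodes = 1e-3 * np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
p = np.array([0.0, 0.0, 1e-9])           # dipole moment along z
v = dipole_potential(electrodes, np.zeros(3), p)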
Results
We show that the electric fields generated by cell assemblies are more stable and reliable than the underlying neural activity. Fields appear to contain more information and to vary less across trials in which the same memory was maintained. We suggest that the stability underlying memory maintenance is achieved at the level of the electric field. This field is ‘above’ the brain, but still ‘of’ the brain, and could direct the activity of participating neurons.
Discussion
Our analyses suggest that electric fields generated by neurons are causal down to the level of the cytoskeleton. Ephaptic coupling organizes neural activity, forming neural ensembles and low dimensional representations at the macroscale level. We suggest that this can go all the way down to the molecular level to stabilize and tune the cytoskeleton for efficient information processing. We call this the Cytoelectric Coupling hypothesis.



Acknowledgements
This work is supported by UKRI (ES/T01279X/1), Office of Naval Research (N00014-22-1-2453), The JPB Foundation, and The Picower Institute for Learning and Memory.
References
Pinotsis, D. A., & Miller, E. K. (2022). Beyond dimension reduction: Stable electric fields emerge from and allow representational drift. NeuroImage, 253, 119058.


Pinotsis, D. A., & Miller, E. K. (2023). In vivo ephaptic coupling allows memory network formation. Cerebral Cortex, 33(17), 9877-9895.


Pinotsis, D. A., Fridman, G., & Miller, E. K. (2023). Cytoelectric coupling: Electric fields sculpt neural activity and “tune” the brain’s infrastructure. Progress in Neurobiology, 226, 102465.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P243: Hierarchical fluctuation scales in whole-brain resting activity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P243 Hierarchical fluctuation scales in whole-brain resting activity

Adrián Ponce-Alvarez1,2,3*

1Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain.
2Institut de Matemàtiques de la UPC - Barcelona Tech (IMTech), Barcelona, Spain.
3Centre de Recerca Matemàtica, Barcelona, Spain.


*Email: adrian.ponce@upc.edu
Introduction

Brain activity fluctuates at different timescales across regions, with higher-order areas exhibiting slower dynamics than sensory regions [1]. Connectivity and local properties shape this hierarchy: spine density and synaptic gene expression gradients correlate with timescales [2–4], while strongly connected regions exhibit slower dynamics [5].
Beyond temporal features, signal variability has been linked to aging [6], brain states [7], disorders [8], and tasks [9]. However, whether spontaneous activity variance is hierarchically organized remains unknown.
This work analyses the relation between timescales, variances, and connectivity using human f/dMRI data, while exploring the mechanisms through connectome-based whole-brain models.
Methods
Publicly available data from the Human Connectome Project was used, consisting of connectome matrices and resting-state (rs) fMRI signals from 100 subjects across 3 parcellations. For each ROI, the average variance of the rs-fMRI signal, the node’s strength of the connectome, and the autocorrelation function (ACF) were calculated.

To model the variance and temporal scales of resting-state fluctuations, two commonly used whole-brain models were studied here, namely the Hopf and the Wilson-Cowan models. These models use the brain’s connectome to couple local nodes displaying noise-driven oscillations, with intrinsic dynamics either homogeneous or constrained by the T1w/T2w macroscopic gradient.
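For concreteness, a minimal sketch of the connectome-coupled Hopf (Stuart-Landau) model described above, assuming Euler-Maruyama integration, illustrative parameter values, and a random surrogate matrix in place of the HCP connectomes:

import numpy as np

rng = np.random.default_rng(0)
N = 90                                    # number of ROIs (placeholder)
C = rng.random((N, N)); np.fill_diagonal(C, 0)
a = -0.02 * np.ones(N)                    # bifurcation parameter (a < 0: noisy oscillations)
omega = 2 * np.pi * 0.05 * np.ones(N)     # intrinsic frequency ~0.05 Hz
G, beta, dt, T = 0.5, 0.02, 0.1, 600.0    # coupling, noise, step (s), duration (s)

z = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = np.empty((int(T / dt), N))
for t in range(x.shape[0]):
    coupling = G * (C @ z - C.sum(axis=1) * z)     # diffusive coupling via connectome
    dz = (a + 1j * omega) * z - np.abs(z) ** 2 * z + coupling
    noise = beta * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    z = z + dt * dz + np.sqrt(dt) * noise          # Euler-Maruyama step
    x[t] = z.real                                  # BOLD-like signal

node_variance = x.var(axis=0)    # compare against node strength C.sum(axis=1)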
Results
Results show that while more connected brain regions have longer timescales, their activity fluctuations exhibit lower variance. Using the Hopf and Wilson-Cowan models, we found that variance and timescales can relate oppositely to connectivity within specific regions of the models’ parameter space, even when all nodes have the same intrinsic dynamics, but also when intrinsic dynamics are constrained by the myelination-related macroscopic gradient. These findings suggest that connectivity and network state alone can explain regional differences in fluctuation scales. Ultimately, timescale and variance hierarchies reflect a balance between stability and responsivity, with faster, greater responsiveness at the periphery and robustness at the core.
Discussion
This study shows that the variance of fluctuations is hierarchically organized but, in contrast to timescales, decreases with structural connectivity. Whole-brain models show that the hierarchies of timescales and variances jointly emerge within specific parameter regions, indicating a state-dependence that could serve as a biomarker for different behavioral, vigilance, or conscious states, and for neuropsychiatric disorders. Finally, in line with previous work on the principles of core-periphery network structures [10–12], these hierarchies link to the responsivity of different network parts, with greater and faster responsiveness at the network periphery and more stable dynamics at the core, achieving a balance between stability and responsiveness.



Acknowledgements
A.P-A. is supported by the Ramón y Cajal Grant RYC2020-029117-I funded by MICIU/AEI/10.13039/501100011033 and "ESF Investing in your future". This work is supported by the Spanish State Research Agency, through the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M).
References
1. https://doi.org/10.1038/nn.3862
2. https://doi.org/10.1093/cercor/bhg093
3. https://doi.org/10.1016/j.neuron.2015.09.008
4. https://doi.org/10.1038/s41593-018-0195-0
5. https://doi.org/10.1162/netn_a_00151
6. https://doi.org/10.1523/JNEUROSCI.5641-10.2011
7. https://doi.org/10.1098/rsif.2013.0048
8. https://doi.org/10.1371/journal.pcbi.1012692
9. https://doi.org/10.1523/JNEUROSCI.2922-12.2013
10. https://doi.org/10.1038/nrg1471
11. https://doi.org/10.1093/comnet/cnt016
12. https://doi.org/10.1098/rstb.2014.0165
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P244: Multi-network Modeling of Parkinson’s Disease: Bridging Dopaminergic Modulation and Vibrotactile Coordinated Reset Therapy
Tuesday July 8, 2025 17:00 - 19:00 CEST
P244 Multi-network Modeling of Parkinson’s Disease: Bridging Dopaminergic Modulation and Vibrotactile Coordinated Reset Therapy

Mariia Popova*1, Fatemeh Sadeghi2, Simone Zittel2, Claus C Hilgetag1,3

1Institute of Computational Neuroscience, Hamburg Center of Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg University, Hamburg, Germany
2Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
3Center for Biomedical AI, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

*Email: m.popova@uke.de
Introduction

Computational models of Parkinson’s disease (PD) play an important role in understanding the complex neural mechanisms underlying motor symptoms, such as tremor, and in assessing novel treatment interventions. According to the “finger-dimmer-switch (FDS)” theory, tremor originates within the basal ganglia–thalamo-cortical (BTC) network, but subsequently spreads to the cerebello–thalamo-cortical (CTC) network through excessive inter-network synchronization [1]. One approach to manage severe PD tremor is to use deep brain stimulation (DBS). Recently, a new non-invasive approach of vibrotactile coordinated reset (vCR) stimulation was proposed as an alternative to DBS [2]. Here, we aimed to explore how vCR affects tremor in a computational model.
Methods
Building on the FDS, we developed a multi-network FDS model encompassing 700 neurons across 11 regions within the BTC, CTC, and thalamic networks. By modulating dopaminergic synaptic connections, we simulated the transition from a healthy state to a Parkinsonian state. Further adjustments of self-inhibition in thalamic nuclei drove tremor onset and offset.
Results
As hypothesized, dopaminergic restoration significantly reduced tremor amplitude and reinforced the thalamus as a pivotal hub for stabilizing neuronal activity. Next, we incorporated a variant of the model featuring spike-timing-dependent plasticity (STDP) to investigate vCR stimulation, a noninvasive therapy that applies patterned tactile pulses to disrupt pathological network synchronization. In line with previous theoretical findings, our simulations showed that vCR not only attenuated excessive beta-band oscillations but also unlearned maladaptive plasticity via STDP, suggesting a broader corrective effect on dysfunctional motor circuitry than dopaminergic interventions alone.
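To make the plasticity component concrete, here is a minimal sketch of a pair-based STDP rule and a coordinated-reset pulse schedule of the general kind vCR uses; the amplitudes, time constants, cycle length, and number of sites are illustrative assumptions, not the parameters of this model.

import numpy as np

A_plus, A_minus = 0.01, 0.012       # potentiation/depression amplitudes (illustrative)
tau_plus = tau_minus = 20.0         # STDP time constants (ms)

def stdp_dw(dt_spike):
    # pair-based STDP weight change for dt = t_post - t_pre (ms)
    if dt_spike > 0:
        return A_plus * np.exp(-dt_spike / tau_plus)    # pre before post: potentiate
    return -A_minus * np.exp(dt_spike / tau_minus)      # post before pre: depress

def cr_pulse_times(n_sites=4, cycle_ms=66.0, n_cycles=100, seed=0):
    # coordinated reset: each cycle stimulates all sites once, in random order,
    # staggered within the cycle to desynchronize the subpopulations
    rng = np.random.default_rng(seed)
    times = {s: [] for s in range(n_sites)}
    for c in range(n_cycles):
        for k, site in enumerate(rng.permutation(n_sites)):
            times[site].append(c * cycle_ms + k * cycle_ms / n_sites)
    return times

Repeated desynchronizing input of this kind shifts spike-time differences across the STDP window, which is the route by which CR-type stimulation can "unlearn" pathologically strengthened connections.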
Discussion
These findings highlight the capacity of in silico models to guide therapeutic strategies, demonstrating that vCR may be of use in managing PD symptoms. Consequently, the parameter specifications of vCR should be investigated further in theoretical and clinical studies, as it may reduce patients’ reliance on pharmacological and surgical treatments.



Acknowledgements
This study was funded by the EU project euSNN (MSCAITN-ETN H2020-860563) and Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—SFB 936—Project-ID 178316478-A1/Z3.
References
[1] https://doi.org/10.1016/j.nbd.2015.10.009
[2] https://doi.org/10.4103/1673-5374.329001


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P245: Computational modeling of neural signal disruptions to predict multiple sclerosis progression
Tuesday July 8, 2025 17:00 - 19:00 CEST
P245 Computational modeling of neural signal disruptions to predict multiple sclerosis progression

Vishnu Prathapan*1, Peter Eipert1, Markus Kipp2, Revathi Appali3,4, Oliver Schmitt1,2
1Medical School Hamburg University of Applied Sciences and Medical University, Am Kaiserkai 1, 20457, Hamburg, Germany
2Department of Anatomy, University of Rostock Gertrudenstr 9, 18057, Rostock, Germany
3Institute of General Electrical Engineering, University of Rostock, Albert-Einstein-Straße 2, 18059, Rostock, Germany

4Department of Aging of Individuals and Society, Interdisciplinary Faculty, University of Rostock, Universitätsplatz 1, 18055, Rostock, Germany
*Email: vishnupratapan@gmail.com

Introduction
A computational approach is proposed to overcome the limitations of existing methods in predicting Multiple Sclerosis (MS) progression. MS is marked by myelin sheath disruption, impairing neuronal signal transmission and leading to neurodegeneration and functional decline. Predicting MS progression is challenging due to disease heterogeneity, limited longitudinal data, small sample sizes, and data inconsistencies. Current models rely on static biomarkers, failing to capture dynamic interactions between immune responses, neurodegeneration, and remyelination. Furthermore, the absence of personalized models and challenges in integrating multimodal data hinder early intervention and treatment optimization [1].
Methods
This study analyzes dynamic network changes in response to localized disturbances, offering deeper insights into MS disease progression. The Izhikevich neuron model [2] is used for its computational efficiency, scalability, and ability to simulate diverse neuronal firing patterns relevant to specific brain regions. A myelin-based delay quotient, adapted from prior research [3, 4], models the demyelination and remyelination effects observed in MS. The model is validated using varied conduction values, connection weights, and nodal lengths in a three-node configuration before being extended to complex networks. Finally, interconnected neuronal modules representing distinct brain regions are simulated to replicate MS conditions.
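For reference, a minimal sketch of the Izhikevich model [2] together with a myelin-scaled conduction delay of the kind described; the delay formula and all parameter values are illustrative assumptions.

import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, T=200.0):
    # regular-spiking Izhikevich neuron driven by a constant current I
    v, u, spikes = -65.0, b * -65.0, []
    for t in range(int(T / dt)):
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike cutoff and reset
            spikes.append(t * dt)
            v, u = c, u + d
    return np.array(spikes)

def conduction_delay(length_mm, velocity_mm_per_ms, myelin_quotient):
    # axonal delay scaled by a myelin quotient in (0, 1]; demyelination
    # (smaller quotient) slows conduction and lengthens the delay
    return length_mm / (velocity_mm_per_ms * myelin_quotient)

spike_times = izhikevich(I=10.0)
delay_healthy = conduction_delay(10.0, 1.0, 1.0)        # ms
delay_demyelinated = conduction_delay(10.0, 1.0, 0.4)   # longer delay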
Results
Signal propagation patterns are analyzed by altering myelin-based conduction delay parameters at specific nodes, with results compared against a control model. As expected, conduction deficits significantly impact network dynamics, illustrating how neuronal signaling adapts to disease-induced disruptions.
Discussion
This model could provide insights into MS progression by capturing evolving network disruptions when applied to a connectome. This computational approach holds promise as a foundation for predictive clinical tools, supporting early diagnosis and treatment strategies. This study offers a novel perspective on MS progression and potential therapeutic interventions by integrating dynamic network modelling with biological mechanisms.




Acknowledgements
The authors thank the University of Rostock, and the Medical School Hamburg University of Applied Sciences and Medical University for institutional support.
References
1. Prathapan, V., Eipert, P., Wigger, N., Kipp, M., Appali, R., & Schmitt, O. (2024). Modeling and simulation for prediction of multiple sclerosis progression: A review and perspective. Computers in Biology and Medicine, 108416.
2. Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on neural networks, 14(6), 1569-1572.
3. Kannan, V., Kiani, N. A., Piehl, F., & Tegner, J. (2017). A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis. Mathematical Biosciences, 289, 1-8.
4. https://doi.org/10.1371/journal.pcbi.1010507
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P246: A Computational Pipeline for Simulating Mouse Visual Cortex Microcircuits with Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P246 A Computational Pipeline for Simulating Mouse Visual Cortex Microcircuits with Spiking Neural Networks

Margherita Premi*1, Carlo Andrea Sartori1, Giancarlo Ferrigno1, Alessandra Pedrocchi1, Fiorenzo Artoni1, Alberto Antonietti1
1NeuroEngineering and Medical Robotics Laboratory - Department of Electronics, Information and Bioengineering - Politecnico di Milano, Milan, Italy
*E-mail: margherita.premi@polimi.it

Introduction
To integrate in vitro methodologies with in silico techniques for investigating brain development and neural circuit interactions, we are developing a computational pipeline to recreate Brain-on-Chip (BoC) [1] systems with spiking neural networks. We leveraged the MICrONs dataset [2], which provides detailed reconstructions of neurons and astrocytes, with their connections, in a cubic millimeter of mouse visual cortex. The dataset presents significant challenges for computational modeling, particularly regarding the quality and quantity of the automatically identified synapses. In this work, we establish a pipeline for transforming the raw data into functional spiking neural networks that accurately represent cortical microcircuits.

Methods
The MICrONs dataset showed two critical limitations: insufficient synapses and incorrect morphological attributions. Two solutions were implemented:

● Synapse enhancement through cloning, generating a cluster of synapses placed in a sphere centered on the original synapse; the new synapses are validated through layer-density analyses [3].
● Improved synapse attribution, using proofread astrocytes to establish connectivity patterns for non-proofread cells; for neurons, templates from proofread synapses serve as models for non-proofread neurons.


The framework incorporates layer-specific connectivity with bidirectional astrocyte-neuron interactions. Comparisons were made with networks having the same neurons but different connectivity [4].

Results
Our synapse enhancement method generated clusters of 10 synapses placed in spheres of 10 μm radius, centered on the original synapses. This successfully increased the overall synapse count while maintaining layer-specific patterns. A geometric approach was developed that defines the minimum ellipsoidal domain containing all synapses belonging to each proofread astrocyte [5]. These ellipsoid representations served as spatial patterns for non-proofread astrocytes. For neurons, template-based attribution from proofread synapses increased the accuracy of connection identification.
Layer-specific connectivity analysis demonstrated that our reconstructed network successfully preserved the characteristic connection patterns across cortical layers (Fig. 1).
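A minimal sketch of the synapse-cloning step described above, assuming uniform sampling inside a sphere; the cluster size and radius follow the text, while the sampling scheme and positions are illustrative.

import numpy as np

def clone_synapses(center_um, n_clones=10, radius_um=10.0, rng=None):
    # return n_clones synapse positions sampled uniformly inside a sphere
    # of radius radius_um centered on the original synapse position
    rng = rng or np.random.default_rng()
    directions = rng.standard_normal((n_clones, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # cube root of a uniform variate gives uniform density within the ball
    radii = radius_um * rng.random(n_clones) ** (1.0 / 3.0)
    return center_um + directions * radii[:, None]

original = np.array([120.0, 340.0, 55.0])   # hypothetical synapse position (um)
cluster = clone_synapses(original)          # 10 clones for layer-density validation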
Discussion
This work addresses the identified limitations in using the MICrONs dataset. The developed methods correct the connectivity data, enabling more accurate modeling of cortical microcircuits. The approach preserves the connections and layer-specific organization unique to the MICrONs dataset. The network is then imported and simulated as a spiking neural model to generate biologically realistic activity. The framework also allows testing alternative network architectures (e.g., random or small-world) against the accurate structural connectivity. Future work will refine astrocyte-neuron interaction models. These methodologies could then be applied to BoC experimental data, further validating the computational approaches.




Figure 1. Fig. 1: A. Enhanced synaptic density distribution across cortical depth. B. Astrocyte influence zones represented as ellipsoidal regions, each containing associated synapses. C. Functional connectivity diagram of the reconstructed microcircuit showing layer-specific connections and bidirectional signaling with astrocytes.
Acknowledgements
This work is part of the Extended Partnership "A multiscale integrated approach to the nervous system in health and disease" (MNESYS), funded by the European Union - Next Generation EU under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment 1.4, Project PE00000006, CUP E63C22002170007, Spoke 3 "Neuronal Homeostasis and brain-environment interaction".
References
1. https://doi.org/10.1063/5.0121476
2. https://doi.org/10.1101/2021.07.28.454025
3. https://doi.org/10.1523/JNEUROSCI.0090-23.2023
4. https://doi.org/10.1101/2024.11.18.624135
5. https://github.com/rmsandu/Ellipsoid-Fit
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P247: Basic organization of spinal locomotor network derived from hindlimb design and locomotor demands
Tuesday July 8, 2025 17:00 - 19:00 CEST
P247 Basic organization of spinal locomotor network derived from hindlimb design and locomotor demands

Boris I. Prilutsky*1, S. Mohammadali Rahmati#1, Sergey N. Markin2, Natalia A. Shevtsova2, Alain Frigon3, Ilya A. Rybak2, Alexander N. Klishko#1

1School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
2Department of Neurobiology and Anatomy, Drexel University, Philadelphia, PA, USA
3Department of Pharmacology-Physiology, Université de Sherbrooke, Sherbrooke, QC, Canada

*Email: boris.priutsky@ap.gatech.edu
#These authors contributed equally to this work


Introduction

One of the core principles of sensorimotor physiology is that the musculoskeletal system and its neural control have coevolved to satisfy behavioural demands. Therefore, it may be possible to derive the organization of the neural control of a motor behaviour (e.g., locomotion) from its mechanical demands and properties of the musculoskeletal system. The goals of this study were to (1) determine activity patterns of cat hindlimb muscles from locomotor demands of walking, (2) determine muscle synergies from the predicted and recorded muscle activity patterns and (3) propose a spinal locomotor network organization based on the derived muscle synergies.

Methods
We defined locomotor demands as patterns of resultant moments of force at hindlimb joints generating walking kinematics. To determine the locomotor demands, we computed the resultant muscle moments (using motion capture and methods of inverse dynamics) and, using optimization, the muscle activations that produce these moments while minimizing muscle fatigue. We then derived muscle synergies from the computed and recorded activities using non-negative matrix factorization. We constructed a rhythm generation and pattern formation network of a spinal central pattern generator (CPG) from the derived muscle synergies and incorporated it into our neuromechanical model of spinal hindlimb locomotion.
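As an illustration of the synergy-extraction step, a minimal sketch using non-negative matrix factorization, with scikit-learn standing in for whatever implementation the authors used; the matrix shapes are placeholders, and the five components follow the two flexor plus three extensor synergies reported below.

import numpy as np
from sklearn.decomposition import NMF

# EMG-like activation matrix (non-negative): muscles x time samples
rng = np.random.default_rng(1)
activations = np.abs(rng.standard_normal((12, 500)))    # placeholder data

# factorize into 5 synergies: activations ~ W @ H
nmf = NMF(n_components=5, init="nndsvda", max_iter=500)
W = nmf.fit_transform(activations)   # muscle weights per synergy (12 x 5)
H = nmf.components_                  # synergy activations over time (5 x 500)

# variance accounted for, commonly used to choose the number of synergies
vaf = 1 - np.linalg.norm(activations - W @ H) ** 2 / np.linalg.norm(activations) ** 2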

Results
Locomotor activity patterns of hindlimb muscles obtained from hindlimb musculoskeletal properties and locomotor demands demonstrated a close agreement with the recorded activity patterns. Muscle synergies and their activation patterns derived from the predicted and measured hindlimb muscle activations were similar and consisted of two flexor and three extensor synergies. We used the revealed muscle synergies to construct a spinal CPG and incorporated it into a neuromechanical model of cat hindlimb locomotion. Computer simulations of locomotion demonstrated realistic locomotor mechanics and activity patterns.

Discussion
We demonstrated that hindlimb musculoskeletal properties and locomotor demands (desired resultant joint moments and minimization of muscle fatigue) can predict hindlimb muscle activation patterns, muscle synergies and a general organization of the CPG. The predicted and recorded muscle activations had the following features: (i) reciprocal activation of antagonists, (ii) concurrent activation of agonists and (iii) dependence of activity of two-joint muscles on functional demands. These muscle activation features are typical for many motor reflexes, automatic and highly skilled motor behaviours and suggest that all these behaviours minimize muscle fatigue and have a common organization of spinal circuitries.




Acknowledgements
This work was supported by US National Institutes of Health grants HD032571 and NS110550.
References
No references
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P248: The Neuron as a Self-Supervised Rectified Spectral Unit (ReSU)
Tuesday July 8, 2025 17:00 - 19:00 CEST
P248 The Neuron as a Self-Supervised Rectified Spectral Unit (ReSU)

Shanshan Qin*1, Joshua Pughe-Sanford1, Alex Genkin1, Pembe Gizem Özdil2, Philip Greengard1, Dmitri B. Chklovskii*1,3

1Center for Computational Neuroscience, Flatiron Institute, New York City, United States
2EDRS Doctoral Program, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
3Neuroscience Institute, New York University Grossman School of Medicine, New York City, United States

*Email: qinss.pku@gmail.com; mitya@flatironinstitute.org


Introduction
Advances in synapse-level connectomics [1, 2, 3] and neuronal population activity imaging [4] necessitate neuronal models capable of integrating effectively with these rich datasets. Ideally, such models should offer greater biological realism and interpretability than rectified linear unit (ReLU) networks trained via error backpropagation [5], while avoiding the complexity and parameter intensity of detailed biophysical models [6]. Here, we propose a self-supervised multi-layer neuronal network employing identical learning rules across layers, progressively capturing more complex and abstract features, similar to the Drosophila visual system, for which both neuronal responses and connectomics data are available.


Methods
We introduce the Rectified Spectral Unit (ReSU), a neuron model that rectifies projections of its input onto a singular vector of the whitened covariance matrix between past and future inputs. Representing the singular vectors corresponding to the largest singular values in each layer effectively maximizes predictive information [7, 8]. We construct a two-layer ReSU network trained self-supervisedly on translating natural scenes. Inspired by the Drosophila visual system, each first-layer neuron receives input exclusively from one pixel [1], while second-layer neurons integrate inputs potentially from the entire first-layer population. Post-training, we compare the network’s neuronal responses and synaptic weights with empirical results.
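A minimal sketch of the ReSU computation as we read the description above: build past/future windows of a signal, whiten, take the SVD of the past-future cross-covariance, and rectify the projection onto a singular vector. The window length and toy signal are illustrative assumptions.

import numpy as np

def inv_sqrt(C, eps=1e-8):
    # inverse matrix square root via eigendecomposition (whitening)
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

def resu_filters(x, k=20):
    # SVD of the whitened cross-covariance between past and future windows of x
    T = len(x) - 2 * k
    past = np.stack([x[t:t + k][::-1] for t in range(T)])      # most recent first
    future = np.stack([x[t + k:t + 2 * k] for t in range(T)])
    past -= past.mean(0); future -= future.mean(0)
    Cpp, Cff = past.T @ past / T, future.T @ future / T
    Cpf = past.T @ future / T
    U, s, Vt = np.linalg.svd(inv_sqrt(Cpp) @ Cpf @ inv_sqrt(Cff))
    return U, s               # columns of U: temporal filters over the past window

def resu_response(past_window, u):
    # ReSU output: rectified projection of the input window onto a filter
    return max(0.0, float(u @ past_window))

rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(5000), np.ones(5) / 5, mode="same")  # correlated toy input
U, s = resu_filters(x)
y = resu_response(x[:20][::-1], U[:, 0])   # first-filter response to one window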
Results
First-layer ReSU neurons learned temporal filters closely matching responses observed in Drosophila visual neurons: specifically, the first singular vector matched the linear filter of the L3 neuron, while the second singular vector corresponded to the linear filters of the L1 (ON) and L2 (OFF) neurons [9]. Additionally, these learned filters adapted their shapes according to signal-to-noise ratios, consistent with experimental findings [10]. Second-layer ReSUs aligned with the second singular vector developed motion-selective responses analogous to Drosophila T4 cells [11], and the synaptic weights learned by these neurons closely resembled those documented in T4 connectomic data [2] (Fig. 1).
Discussion
ReSU networks exhibit significant advantages, including simplicity, robustness, interpretability, and biological plausibility. Rectification within ReSUs functions as a form of dynamic clustering, enabling transitions between distinct linear dynamical regimes. Our findings indicate that self-supervised multi-layer ReSU networks trained on natural scenes faithfully reproduce critical aspects of biological sensory processing. Consequently, our model provides a promising foundation for large-scale, interpretable simulations of hierarchical sensory processing in biological brains.



Figure 1. (a) Neurons learn to predict future input and output the (rectified) latent variable. (b) The fly ON motion detection pathway. (c) Responses of neurons to a stepped luminance stimulus. (d) T4 response to a moving grating. (e) Temporal filter adaptation to the input SNR. (f) The spatial filter obtained by SVD of the L1-L3 output approximates the weights of synapses impinging onto T4a in Drosophila (b).
Acknowledgements
We thank Charles Epstein, Anirvan M. Sengupta and Jason Moore for helpful discussion.
References
[1] https://doi.org/10.1016/j.cub.2013.12.012
[2] https://doi.org/10.7554/eLife.24394
[3] https://doi.org/10.1016/j.cub.2023.09.021
[4] https://doi.org/10.7554/eLife.38173
[5] https://doi.org/10.1038/s41586-024-07939-3
[6] https://doi.org/10.1371/journal.pcbi.1006240
[7] https://doi.org/10.1109/ISIT.2006.261867
[8] Chechik, G., Globerson, A., Tishby, N., & Weiss, Y. (2005). Information Bottleneck for Gaussian Variables. Journal of Machine Learning Research, 6, 165–188.
[9] https://doi.org/10.7554/eLife.74937
[10] https://doi.org/10.1098/rspb.1982.0085
[11] https://doi.org/10.1146/annurev-neuro-080422-111929
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P249: Brain-wide calcium imaging in zebrafish and generative network modelling reveal cell-level functional network properties of seizure susceptibility
Tuesday July 8, 2025 17:00 - 19:00 CEST
P249 Brain-wide calcium imaging in zebrafish and generative network modelling reveal cell-level functional network properties of seizure susceptibility

Wei Qin*1, Jessica Beevis2, Maya Wilde2, Sarah Stednitz1, Josh Arnold2, Itia Favre-Bulle2, Ellen Hoffman3, Ethan K. Scott1 


1 Department of Anatomy and Physiology, University of Melbourne, VIC, Australia
2 Queensland Brain Institute, University of Queensland, QLD, Australia
3 Department of Neuroscience, Yale School of Medicine, Yale University, New Haven, CT, USA


*Email: wei.qin@unimelb.edu.au

Introduction

Epilepsy causes recurrent seizures, but the exact mechanisms are still unclear. Traditional methods using data from primates or rodents struggle to resolve individual cell activity while tracking whole-network dynamics. Capturing the interactions of individual neurons within brain-wide networks could greatly enhance our understanding. Zebrafish, which share genetic and physiological similarities with humans, can exhibit seizure-like behaviors when exposed to drugs like PTZ, which blocks inhibitory GABAergic signaling and induces hyperexcitability [2]. Zebrafish and calcium imaging enable simultaneous in-vivo recording of neuronal activity across the brain at cellular resolution, offering a valuable approach to studying epilepsy [1].

Methods
In-vivo light-sheet calcium imaging was used to capture brain-wide, cellular-resolution calcium fluorescence data from wildtype and scn1lab (a gene implicated in Dravet Syndrome) mutant zebrafish larvae [3]. We conducted this under both baseline and PTZ conditions. Through network analyses, we statistically quantified differences in network topology and dynamics between the two genotypes. We focused on the network of active neuronal cells involved in ictogenesis at microscopic and macroscopic scales. Additionally, we developed a Generative Network Model [4] (GNM, Fig. A, Eq. 1) to explain the wiring principles governing both genotypes and the impact of the scn1lab mutation on the brain-wide functional network.
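A minimal sketch of a generative network model of the general kind cited [4], growing a graph edge by edge with a wiring probability that trades distance cost against a topological value term; the cost-times-value form and the exponents are common GNM choices and are assumptions here, not the model of Eq. 1.

import numpy as np

def gnm_grow(dist, n_edges, eta=-2.0, gamma=0.3, seed=0):
    # P(u,v) ~ d(u,v)**eta * (k_u*k_v + 1)**gamma: penalize long wiring (eta < 0)
    # while favoring connections between high-degree node pairs (gamma > 0)
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    A = np.zeros((n, n))
    iu = np.triu_indices(n, k=1)
    for _ in range(n_edges):
        k = A.sum(1)
        score = dist[iu] ** eta * (k[iu[0]] * k[iu[1]] + 1) ** gamma
        score[A[iu] > 0] = 0.0                 # skip already-existing edges
        p = score / score.sum()
        e = rng.choice(len(p), p=p)
        A[iu[0][e], iu[1][e]] = A[iu[1][e], iu[0][e]] = 1.0
    return A

pos = np.random.default_rng(1).random((50, 3))   # placeholder cell positions
dist = np.linalg.norm(pos[:, None] - pos[None], axis=-1) + 1e-6
A = gnm_grow(dist, n_edges=200)   # candidate graph to score against data (e.g., KS)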
Results
Our study reveals significant changes in brain network connectivity, showing that scn1lab mutations impact brain structure and function. The cellular-level GNM explains the wiring principles governing the development of both genotypes (Fig. B) and the effects of PTZ on the brain-wide network. The model predicts genotypes and seizure severities for each fish before any seizure activity. It also highlights the brain regions associated with genotype differences (Fig. C, D), seizure severity, and overall network excitability. Combining experimental data and mathematical modeling, our approach offers a novel perspective on epileptogenesis mechanisms at a depth and resolution that traditional studies cannot achieve.
Discussion
Our study shows that scn1lab-/- zebrafish larvae have significant brain morphology changes and increased PTZ-induced seizure susceptibility. Their network architecture mirrors the wiring principles of PTZ-treated networks. Brain-wide, cellular-resolution activity data revealed notable alterations in baseline functional wiring, and PTZ administration affected network properties differently in scn1lab-/- and WT larvae, highlighting divergent neural responses. The GNM pinpointed specific brain regions affected in Dravet Syndrome (the habenula, pallium, and cerebellum), with the habenula influencing seizure initiation and the cerebellum regulating excitatory-inhibitory balance.




Figure 1. A. Generative network modelling (GNM) simulates wiring principles, evaluated by KS similarity. B. The model accurately classifies and predicts genotypes without relying on phenotypes. C. It assesses the contribution of each region to correct classification at each PTZ stage. D. The pallium and habenula are identified as the main contributors to the classification.
Acknowledgements
The authors would like to thank the UQBR aquatics team for maintenance of fish stocks. This project is supported by NHMRC, ARC, Simons Foundation and NIH (US).
References
1. https://doi.org/10.1007/978-94-007-2888-2_40
2. https://doi.org/10.1371/journal.pone.0054166
3. https://doi.org/10.1093/braincomms/fcae135
4. Hills, T. T. (2024). Generative Network Models and Network Evolution. In: Behavioral Network Science: Language, Mind, and Society (pp. 46-60). Cambridge University Press.
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P250: A computational study of the influence of circadian adaptive mechanisms on signal processing in retinal circuits
Tuesday July 8, 2025 17:00 - 19:00 CEST
P250 A computational study of the influence of circadian adaptive mechanisms on signal processing in retinal circuits

Laetitia Raison-Aubry*1, Nange Jin2, Christophe P. Ribelayga2, Laure Buhry1

1Université de Lorraine, CNRS, LORIA, F-54000 Nancy, France
2Vision Sciences, University of Houston, Houston, Texas, United States

*Email: laetitia.raison-aubry@loria.fr

Introduction

Rod-mediated signals reach retinal ganglion cells (RGCs) via three major pathways with distinct sensitivities and operating ranges [1,2,3]. These pathways interact with the cone pathway to ensure seamless processing over >9 log units of light intensity [1]. Gap junctions (GJs) between rod and cone terminals, the entry point of the secondary rod pathway (SRP), exhibit circadian plasticity--stronger at night--directly modulating rod signal flow into cones, and thereby SRP influence on retinal output [4,5]. However, experimentally isolating this effect is challenging due to the non-specificity of pharmacological interventions. Biophysical modeling provides a precise and reversible alternative to selectively manipulate rod/cone coupling while preserving other synaptic conductances. Using a recent mathematical model of a retinal microcircuit [6], we investigate how circadian modulation shapes rod and cone signal integration.
Methods
Our simulated network consists of ~40,000 retinal cells presynaptic to a single transient OFF alpha (tOFF α) RGC [6], arranged on a circular grid approximating the RGC’s receptive field [7] and interconnected with >100,000 synaptic connections, including chemical and electrical synapses. Each retinal cell type is implemented using conductance-based models that follow the Hodgkin-Huxley formalism. Light-induced photocurrent waveforms, whose amplitude and kinetics vary nonlinearly with stimulus intensity [8], serve as input stimuli [6].
Measurements of transjunctional conductance between adjacent mouse rod/cone pairs reveal dynamic changes ranging over 1,000 pS [4,9]. To simulate circadian modulation of rod/cone coupling, we define three states for the GJ channel conductance: uncoupled (0 pS), resting/dark-adapted (300 pS), and maximally coupled (1,200 pS), in line with experimental data [4,9]. Simulations are conducted using Brian 2 [10].
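A minimal sketch of how such state-dependent rod/cone coupling can be expressed in Brian 2, using the documented summed-variable idiom for electrical synapses; the membrane model and parameter values are illustrative placeholders, not the published microcircuit.

from brian2 import *

# toy passive photoreceptor pair (parameters are placeholders)
gL, EL, Cm = 10*nS, -40*mV, 20*pF
cells = NeuronGroup(2, '''
    dv/dt = (gL*(EL - v) + I_gap + I_photo) / Cm : volt
    I_gap : amp
    I_photo : amp
''', method='euler')
cells.v = EL
cells.I_photo = [-20, 0]*pA          # light-driven current in the rod only

# gap junction: junctional current summed into each coupled cell
gap = Synapses(cells, cells, '''
    g_gap : siemens (constant)
    I_gap_post = g_gap*(v_pre - v_post) : amp (summed)
''')
gap.connect(condition='i != j')
gap.g_gap = 300*pS                   # circadian states: 0, 300, or 1200 pS

mon = StateMonitor(cells, 'v', record=True)
run(200*ms)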
Results
To evaluate the impact of circadian adaptation on retinal signal processing and RGC light responses, we compare normalized intensity-response profiles of the tOFF α RGC across rod/cone coupling states. Stimulus intensity spans the activation threshold of the primary (0.01 R*/rod/s) to the tertiary (60 R*/rod/s) rod pathways [3]. We find that, relative to the SRP resting dark-adapted range (1-60 R*/rod/s) [3], inhibiting rod/cone coupling lowers the sensitivity threshold by ~0.5 log unit, while increasing coupling shifts the tOFF α RGC activation threshold ~1 log unit to the right.
Discussion
Our results support a circadian shift in the threshold and relative contribution of the SRP to the retinal output. This computational approach circumvents experimental limitations, allowing precise investigation of rod/cone coupling modulation. By clarifying mechanistic links between circadian modulation and retinal sensitivity, we demonstrate that our model can be used as a theoretical framework to reconcile previous experimental inconsistencies [5].



Acknowledgements
Research in the Ribelayga lab is supported by National Institutes of Health Grants EY032508, EY029408, and MH127343, National Institutes of Health Vision Core Grant P30EY007551, and The Foundation for Education and Research in Vision (FERV).
References
1. https://doi.org/10.1016/S1350-9462(00)00031-8
2. https://doi.org/10.1146/annurev.physiol.67.031103.151256
3. https://doi.org/10.1126/sciadv.aba7232
4. https://doi.org/10.1126/sciadv.abm4491
5. https://doi.org/10.1016/j.preteyeres.2022.101119
6. https://doi.org/10.1109/NER52421.2023.10123863
7. https://doi.org/10.1371/journal.pone.0180091
8. https://doi.org/10.1113/jphysiol.2014.284919
9. https://doi.org/10.7554/eLife.73039
10. https://doi.org/10.7554/eLife.47314


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P251: Second-order neural field models capture the power spectrum and other nonlinear features of EEG signals in an interval timing task
Tuesday July 8, 2025 17:00 - 19:00 CEST
P251 Second-order neural field models capture the power spectrum and other nonlinear features of EEG signals in an interval timing task

Ian D. Ramsey1 and Rodica Curtu1,2,*

1Department of Mathematics, University of Iowa, Iowa City, USA
2Iowa Neuroscience Institute, University of Iowa, Iowa City, USA


*Email: rodica-curtu@uiowa.edu


Introduction
Neural field models offer a powerful framework for understanding large-scale neuronal dynamics by encoding the underlying spatiotemporal processes as a system of integrodifferential equations. While early approaches modeled mean membrane potentials with a single quantity, modern methods [1] distinguish between postsynaptic and somatic potentials to provide a more nuanced description of synaptic interactions and their temporal dynamics. For this work, we consider the second-order neural field model (2ndNFM) introduced by Liley et al. [2] and investigate how model parameters, governing both local activity and long-range connections, affect the theta-band and alpha-band power of multi-lead EEG signals as reported by [3].

Methods
We propose a novel method for parameter estimation, utilizing recent developments in the characterization of nonlinear stochastic oscillators [4]. We implement the method to study ~4 Hz rhythms (2-5 Hz band) of EEG recordings that were found to correlate with cognition in Parkinson’s disease (PD) [3]. We extract relevant features (e.g., the Q-function; see [4]) from the EEG data of PD patients and of healthy subjects performing an interval timing task [3], according to the algorithm proposed by [5]. We analyze these nonlinear dynamical features for significant differences between the groups, then perform parameter estimation and extended Kalman filter analysis in the 2ndNFM to obtain a model that captures their characteristics.
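As a small illustration of the band-limited analysis above, a sketch of estimating 2-5 Hz power from one EEG channel via Welch's method; the sampling rate and surrogate signal are placeholders, and this is generic preprocessing rather than the Q-function machinery of [4, 5].

import numpy as np
from scipy.signal import welch

fs = 500.0                                   # sampling rate (Hz), placeholder
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(60 * fs))      # stand-in for one channel (e.g., C3)

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # PSD from 4 s windows
band = (f >= 2.0) & (f <= 5.0)
power_2_5 = psd[band].sum() * (f[1] - f[0])  # integrated 2-5 Hz power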

Results
We extended the results in [3] by analyzing the EEG signals recorded at the central leads C1 to C6. We found relevant changes in the 2-5 Hz frequency band activity for the control and PD groups, in line with previous reports at Cz. Next, we parametrized the 2ndNFM to capture the attenuated 2-5 Hz rhythms seen in PD patients, focusing on the functional coupling between a pair of leads placed on the left (C3) and right (C4) brain hemispheres. We projected the dynamics of each 10-dimensional system of differential equations per EEG channel in the 2ndNFM onto a single variable via Q-function analysis [4, 5]. These projections were used for the model parameter estimation. The resulting 2ndNFM accurately fitted the power spectrum of the EEG signals at C3 and C4.
Discussion
To test the validity of 2ndNFMs for EEG in an interval timing task, we performed parameter estimation using recordings at the two central leads C3 and C4. We also measured the performance of other methods [6] that assume linearization of 2ndNFMs, and found that they fail to accurately fit the power spectrum of the EEG signals due to nonlinear distortions. From our Kalman filter analysis, we detected anomalies in the subcortical and long-range inputs to the linear model that are inconsistent with previous assumptions of statistical independence. The nonlinear 2ndNFM, parameterized on data-driven features, guarantees an accurate fit of the power spectrum of EEG signals and can generate theoretical predictions.




Acknowledgements
This work was funded by The Stanley-UI Foundation Support Organization (R.Curtu) and the Erwin and Peggy Kleinfeld Endowment (I.Ramsey).
References
1. Cook, B., et al. (2022). Neural field models: a mathematical overview and unifying framework. Math. Neuro. and Appl., 2(2):1-67.
2. Liley, D., et al. (2002). A spatially continuous mean field theory of electrocortical activity. Network: Computation in Neural Syst., 13:67-113.
3. Singh, A., et al. (2021). https://doi.org/10.1038/s41531-021-00158-x
4. Perez-Cervera, A., et al. (2023). https://doi.org/10.1073/pnas.2303222120
5. Melland, P., & Curtu, R. (2023). https://doi.org/10.1523/JNEUROSCI.1531-22.2023
6. Hartoyo, A., et al. (2019). https://doi.org/10.1371/journal.pcbi.1006694
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P252: Developments around MOOSE for modeling in Systems Biology and Neuroscience
Tuesday July 8, 2025 17:00 - 19:00 CEST
P252 Developments around MOOSE for modeling in Systems Biology and Neuroscience

Subhasis Ray*1,2, G.V. Harsharani3,4, Anal Kumar3, Ashish Anshuman3, Parita Mehta3, Jayesh Poojari3, Deepa SM3, and Upinder S. Bhalla3,4


1CHINTA, TCG CREST, Kolkata, India
2IAI, TCG CREST, Kolkata, India
3NCBS-TIFR, Bangalore, India
4Centre for Brain and Mind, NCBS-TIFR, Bangalore, India
*Email: ray.subhasis@gmail.com


Introduction

Public databases for neuroscience, including those for connectomes, cell morphologies, and electrophysiological recordings, are accelerating data-driven neuroscience. Tools supporting such databases, and standard formats for model and data exchange, are critical for maximizing the utility of these resources. MOOSE, the Multiscale Object-Oriented Simulation Environment [1], is a stable software platform for computational modeling and simulation in Systems Biology and Neuroscience. It emphasizes models that span molecular and electrical signaling, from synapses to networks. As MOOSE development emphasized standards and interoperability early on, it is well placed to facilitate the development of biological neural models utilising public model and data repositories.
Methods
The core of MOOSE is written in C++ for speed, while its Python API allows integration with the Python ecosystem. Extensive documentation is supplemented with a wide range of tutorials using Python graphics and browser-based 3-D graphics. We use existing Python modules for various model and data description formats to support them in MOOSE, and web frameworks to utilize public APIs of the neuroscience databases. We actively conduct outreach activities and user-research to enhance the user experience and documentation of MOOSE, and workshops for training students and researchers on modeling and simulation in Systems Biology and Computational Neuroscience.
Results
MOOSE covers multiple scales of modeling, from chemical reactions and signaling pathways to large biological neural networks. Currently it supports standard formats like SBML and NeuroML for model description, SWC for morphology, and NSDF for simulated data. It includes Python tools to easily create multiscale models from a library of model components. We are also developing clients for accessing public repositories of model and data, enabling users to seamlessly integrate model components from such sources into their composite models.
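For flavor, a minimal sketch of a passive-compartment simulation through MOOSE's Python API; the field names follow our reading of the MOOSE documentation and the parameter values are illustrative, so treat this as approximate rather than canonical.

import moose

# single passive compartment (SI units; values are illustrative)
soma = moose.Compartment('/soma')
soma.Rm = 1e9          # membrane resistance (ohm)
soma.Cm = 1e-11        # membrane capacitance (F)
soma.Em = -0.065       # leak reversal potential (V)
soma.initVm = -0.065
soma.inject = 1e-11    # constant current injection (A)

# record the membrane potential into a table
vm_table = moose.Table('/vm')
moose.connect(vm_table, 'requestOut', soma, 'getVm')

moose.reinit()         # initialize state variables
moose.start(0.2)       # run 200 ms of simulated time
print(vm_table.vector[-5:])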
Discussion
A major goal of MOOSE is to make biological modeling accessible to students and researchers from diverse backgrounds. Users can seamlessly incorporate published and curated models in their simulation experiments using software tools developed around MOOSE. The new developments in the MOOSE ecosystem will help accelerate data-driven research in Systems Biology and Neuroscience.



Acknowledgements
We thank the Kavli Foundation, and DBT and DST of the Govt. of India, for supporting MOOSE development. NCBS/TIFR receives support from the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4006.


References
1. https://doi.org/10.3389/neuro.11.006.2008


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P253: Higher-Threshold Neurons Boost Information Encoding in Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P253 Higher-Threshold Neurons Boost Information Encoding in Spiking Neural Networks

Farhad Razi*1, Fleur Zeldenrust1

1Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

*Email: farhad.razi@donders.ru.nl


Introduction
The brain exhibits remarkable neural heterogeneity. Studies suggest this boosts the performance of sequential tasks [1], efficient coding [2], and working memory [3] in artificial neural networks. Specifically, heterogeneity in spike thresholds has been shown to improve information encoding by reducing trial-to-trial variability in network responses [4]. However, the mechanisms behind this reduced variability remain unclear. We propose that spike threshold heterogeneity introduces variability in neuronal firing sensitivity, with higher-threshold neurons contributing disproportionately to enhanced information encoding and reduced variability. Our findings advance understanding of the brain's computational capacities.
Methods
A recurrent spiking network with leaky integrate-and-fire neurons was used (Fig. 1A). Heterogeneity was introduced by varying the width of uniform spike-threshold distributions. The distribution of firing rates was assessed. The dimensionality of network activity was quantified using the participation ratio. An input was applied to a subset of the network. A linear decoder was trained to decode the input from spiking responses of the stimulated subset and the whole network. Information encoding was quantified by comparing root mean square error (RMSE) between the decoded and original input. The decoder, trained on the original input, was used to decode a novel input from network responses, evaluating its generalization to unfamiliar inputs.
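A condensed sketch of this design, assuming a toy LIF population with uniformly distributed thresholds and a least-squares linear decoder; population size, time constants, and the spike smoothing are illustrative choices rather than the study's parameters.

import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau = 200, 2000, 1e-3, 20e-3

def simulate(threshold_width, signal):
    # LIF population with uniform spike-threshold heterogeneity; returns
    # exponentially smoothed spike trains (T x N) for a shared input signal
    thresholds = 1.0 + threshold_width * (rng.random(N) - 0.5)
    v = np.zeros(N)
    trace = np.zeros((T, N))
    for t in range(T):
        drive = signal[t] + 0.3 * rng.standard_normal(N)   # shared input + noise
        v += dt / tau * (-v + 2.0 * drive)
        spiked = v >= thresholds
        v[spiked] = 0.0                                    # reset after spiking
        trace[t] = trace[t - 1] + dt / tau * (spiked / dt - trace[t - 1])
    return trace

signal = np.sin(np.linspace(0, 8 * np.pi, T))
for width in (0.0, 0.5, 1.0):            # increasing threshold heterogeneity
    R = simulate(width, signal)
    w, *_ = np.linalg.lstsq(R, signal, rcond=None)   # linear decoder
    rmse = np.sqrt(np.mean((R @ w - signal) ** 2))
    print(f"width={width:.1f}  RMSE={rmse:.3f}")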
Results
Increasing spike threshold heterogeneity enhances network information encoding. Heterogeneity increases firing rate variability and participation ratio, indicating a higher dimensionality of network activity (Fig. 1B), consistent with previous studies [4]. Decoding performance improves with heterogeneity, particularly when using the whole network (Fig. 1C). This enhanced decoding performance is largely carried by neurons that have higher spike thresholds (Fig. 1D). Notably, decoders trained on heterogeneous networks show superior generalization performance on a novel input (Fig. 1E). These results support our hypothesis that heterogeneity yields more robust network-wide information encoding capacities via higher-threshold neurons.
Discussion
Our results highlight heterogeneity's crucial role in the brain's capacity for information encoding. However, heterogeneity may not always be beneficial. Improved encoding could consume neural resources, possibly hindering performance on certain tasks. Future work will investigate how heterogeneity impacts networks trained for specific prediction and decoding tasks to reveal the trade-off between information encoding and processing, identifying task-dependent optimal ranges for neural heterogeneity. Our findings offer insights into brain function and can guide the development of efficient, task-adaptive neuromorphic systems, potentially bridging the gap between biological and artificial neural networks.





Figure 1. A, Computational experimental design. B, Network characteristics and heterogeneity. C, RMSE between decoded and original input decreases with heterogeneity. D, Neurons with higher spike thresholds possess larger decoder weights, indicating their heightened role in encoding. E, Decoder generalization on a novel input improves with increasing heterogeneity.
Acknowledgements
This work was supported by Dutch Research Council, NWO Vidi grant VI.Vidi.213.137.
References
[1]https://doi.org/10.1038/s41467-021-26022-3
[2]https://doi.org/10.1371/journal.pcbi.1008673
[3]https://doi.org/10.1523/JNEUROSCI.1641-13.2013
[4]https://doi.org/10.1073/pnas.2311885121
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P254: Synaptic Population Shapes Form Fingerprints of Brain Areas, Organize Along a Rostro-Caudal Axis
Tuesday July 8, 2025 17:00 - 19:00 CEST
P254 Synaptic Population Shapes Form Fingerprints of Brain Areas, Organize Along a Rostro-Caudal Axis

Martin Rehn*1, Erik Fransén1,2,3

1School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.
2Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden.
3Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden.


*Email: rehn@kth.se
Introduction
Synaptic creation, modification, and removal underpin learning in the brain, balanced against homeostasis. The resultant distributions of synaptic sizes may reflect these processes.
We suggest that the shapes of these distributions have biological relevance. This is supported both empirically, by the prevalence of skewed distributions [1,2], and functionally, as large synapses may have particular importance [3]. We find a low-dimensional descriptor for such shapes. We proceed to explore and contrast brain regions at various spatial scales, and across the lifespan, using our proposed descriptor.
Methods
We studied a measure of PSD95, a key postsynaptic protein [4–6] in parasagittal sections of mouse brains [7]. PSD95 correlates with EPSP amplitudes [8–10], spine volumes and synaptic face areas [11]. In contrast to previous work [7,12] we chose a scalar measure per synapse and considered synaptic populations.
We analyzed multiple anatomical levels, at ages ranging from one week to 18 months. Per 100 μm tile, we computed a profile of the synaptic size distribution comprising the arithmetic mean, normalized width, robust skewness, robust kurtosis, and synaptic density. We then applied clustering methods and built a bi-linear model to compactly capture variability.
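A minimal sketch of such a per-tile profile is given below. The quantile-based Bowley skewness and Moors kurtosis are common robust choices; whether they match the authors' exact estimators is an assumption, and the data are synthetic.

```python
# Sketch of a five-component tile profile: mean, normalized width,
# robust skewness, robust kurtosis, and density.
import numpy as np

def tile_profile(intensities, tile_area):
    e1, q1, e3, med, e5, q3, e7 = np.percentile(
        intensities, [12.5, 25, 37.5, 50, 62.5, 75, 87.5])
    mean = intensities.mean()
    norm_width = (q3 - q1) / med                # IQR normalized by median
    skew = (q3 + q1 - 2 * med) / (q3 - q1)      # Bowley (quartile) skewness
    kurt = ((e7 - e5) + (e3 - e1)) / (q3 - q1)  # Moors (octile) kurtosis
    density = intensities.size / tile_area      # synapses per unit area
    return mean, norm_width, skew, kurt, density

rng = np.random.default_rng(1)
puncta = rng.lognormal(mean=0.0, sigma=0.8, size=500)  # synthetic PSD95 puncta
print(tile_profile(puncta, tile_area=100.0 * 100.0))
```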
Results
Fig. 1 shows three parts of the profile descriptor. All five components differ between brain areas, and also by age. The upper tails of the distributions vary from relatively heavy-tailed (HT) regions, which are also more skewed, to less heavy-tailed (LT) ones. This is amplified in older animals. Regions in the hindbrain and midbrain tend to the HT type; forebrain regions, in particular the cortex and the hippocampus, to the LT type. Mean intensity and spatial density follow the opposite trend. We thus found that the profiles seem to principally trace the anterior-posterior neuraxis; our bi-linear and clustering models concur. The structure also parallels gene expression data [13].
Discussion
We propose to analyze local brain regions using a fingerprint, a “distronomical signature”, based solely on the collective properties of synaptic distributions. This correlates with known anatomy and gene expression, but exhibits striking differences in local heterogeneity (Fig. 1) and a rather dramatic evolution over the lifespan. We argue that this reflects underlying processes central to brain function, and that it may serve as a novel tool to characterize regular, and perhaps anomalous, structure in the brain.
Figure 1. Fig. 1: Global distributional structure. False color representation of three statistical moments, in a three month old individual. Tile size 25 µm x 25 µm. The tiles are color coded by arithmetic mean (red), normalized width (green) and robust kurtosis (blue), clipped at the 5th and 95th percentiles. Anatomical regions can be readily identified.
Acknowledgements
The Swedish Research Council grant no. 2022-01079.
References
1. doi:10.1371/journal.pbio.0030068
2. doi:10.1038/nrn3687
3. doi:10.1016/j.celrep.2022.111383
4. doi:10.1038/24790
5. doi:10.1523/JNEUROSCI.4457-06.2007
6. doi:10.1113/jphysiol.2008.163469
7. doi:10.1126/science.aba3163
8. doi:10.1016/S0092-8674(02)00683-9
9. doi:10.1073/pnas.0608492103
10. doi:10.1016/j.celrep.2021.109972
11. doi:10.1038/s41598-020-70859-5
12. doi:10.1016/j.neuron.2018.07.007
13. doi:10.1126/sciadv.abb3446
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P255: Quantifying Negative Feedback Inhibition in Epilepsy to Assess Excitability
Tuesday July 8, 2025 17:00 - 19:00 CEST
P255 Quantifying Negative Feedback Inhibition in Epilepsy to Assess Excitability

Thomas J Richner1, Nicholas Gregg1, Raunak Singh1, Keith Starnes1, Dora Hermes2, Jamie J Van Gompel3, Gregory A Worrell1, Brian N Lundstrom*1

1. Department of Neurology, Mayo Clinic, Rochester, MN, USA
2. Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
3. Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, USA

*Email: lundstrom.brian@mayo.edu

Introduction
Normal cortical function depends on precisely regulated excitability, which is controlled by a balance of excitation and negative feedback inhibition. Negative feedback inhibition mechanisms, such as spike frequency adaptation (SFA) and short-term synaptic depression (STD), act over multiple timescales to reduce excitability. However, negative feedback inhibition is often difficult to quantify and is often neglected in neuroscience experiments. We have developed a framework for quantifying multiple timescale negative feedback inhibition and are applying it to epilepsy patients undergoing invasive EEG epilepsy monitoring. We also modeled negative feedback inhibition to understand how SFA and STD affect EEG signals.

Methods
Novel electrical stimulation waveforms were delivered to epilepsy patients undergoing stereotactic EEG monitoring. Sinusoidally modulated pulse trains were delivered to cortical sites, varying the envelope period between 2 and 10 seconds (5 Hz carrier frequency). Cortico-cortical evoked potentials (CCEPs) were recorded from nearby electrodes. Negative feedback inhibition was assessed by analyzing the phase difference between the stimulus and the CCEP responses, analogous to our previous research with single units (1). We created a network model with SFA and STD by extending previous modeling (2,3). We investigated the interaction between SFA and STD using spectral analysis and their stabilizing properties by computing the largest Lyapunov exponent over a range of connectivities.
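One way to estimate such a stimulus-response phase difference at the envelope frequency is a single-bin projection onto the complex exponential, sketched below; the sampling rate, durations, and the synthetic "response" are assumptions, and a real CCEP amplitude series would take its place.

```python
# Sketch: phase difference between a sinusoidal stimulus envelope and a
# response, evaluated at the envelope frequency.
import numpy as np

fs = 500.0                         # sampling rate (Hz), assumed
f_env = 0.2                        # 5-s envelope period
t = np.arange(0, 60, 1 / fs)       # 12 full envelope cycles
stim_env = np.sin(2 * np.pi * f_env * t)
response = np.sin(2 * np.pi * f_env * t + np.deg2rad(15))  # 15-deg lead

def phase_at(f, x, t):
    # Project onto exp(-i*2*pi*f*t): a single-frequency DFT bin.
    return np.angle(np.sum(x * np.exp(-2j * np.pi * f * t)))

lead = np.rad2deg(phase_at(f_env, response, t) - phase_at(f_env, stim_env, t))
print(f"phase lead: {lead:.1f} degrees")
```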

Results
Across participants, the cortical evoked response showed phase advances of approximately 5–30 degrees across modulation frequencies, consistent with adaptation on multiple timescales. These phase leads appear to be more pronounced in the clinically identified seizure onset zone, suggesting that compensatory negative feedback inhibition is upregulated. A phase lead at a particular frequency is consistent with adaptation (or dampening) at that timescale (2). Our network models showed a nonlinear interaction between SFA and STD, similar to other models (3), which may help maintain a homeostatic level of activity. Further, we found SFA and STD stabilized a wide range of networks onto the edge of chaos.

Discussion

Results suggest that neural mechanisms of feedback inhibition may be assessed at the level of EEG using stimulation-based methods, like sine-modulated CCEPs, or passive methods, such as by comparing changes in spectrograms. We find evidence of multiple timescale adaptation at the level of CCEPs, which may be one way the brain maintains stability. Our computational model suggests that SFA and STD can dynamically rebalance a wide range of networks and that these kinds of mechanisms may result in telltale signs on spectrograms.



Acknowledgements
Work was supported by NINDS R01NS129622.
References
1. Lundstrom, B. N., Higgs, M. H., Spain, W. J., & Fairhall, A. L. (2008). Fractional differentiation by neocortical pyramidal neurons. Nat Neurosci. https://doi.org/10.1038/nn.2212
2. Lundstrom, B. N. (2015). Modeling multiple time scale firing rate adaptation in a neural network of local field potentials. Journal of Comp Neurosci. https://doi.org/10.1007/s10827-014-0536-2
3. Lundstrom, B. N., & Richner, T. J. (2023). Neural adaptation and fractional dynamics as a window to underlying neural excitability. PLOS Comp Bio. https://doi.org/10.1371/journal.pcbi.1010527

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P256: Structural and evolutionary insights of the neuropeptide connectome of Caenorhabditis species
Tuesday July 8, 2025 17:00 - 19:00 CEST
P256 Structural and evolutionary insights of the neuropeptide connectome of Caenorhabditis species

Lidia Ripoll-Sánchez*1,2, Itai A. Toker4, Oliver Hobert4, Isabel Beets3, Petra E. Vértes1,2,5, William R. Schafer1,3,5

1MRC Laboratory of Molecular Biology, Cambridge, UK
2Department of Psychiatry, Cambridge University, Cambridge, UK
3Department of Biology, KU Leuven, Leuven, Belgium
4Department of Biological Sciences, Howard Hughes Medical Institute, Columbia University, New York, NY, USA
5co-senior authors

*Email: lsanchez@mrc-lmb.cam.ac.uk


Introduction

Neuropeptides modulate synaptically wired neuronal circuits. This modulation is critical to nervous system function, yet little is known about the structure and function of extrasynaptic signalling networks at a whole-organism level and how that is maintained over evolution.


Methods

To this end, we used single neuron gene expression [1] and deorphanisation data for neuropeptide-activated G-protein coupled receptors [2] to generate a connectome of 92 neuropeptide signalling networks in C. elegans [3]. This network defined a connection when the sending neuron expressed a neuropeptide, the receiving neuron expressed the cognate receptor, and both neurons extended overlapping processes. We then used graph theory and machine learning methods to characterise its structural features.
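The three-condition connection rule described above maps naturally onto boolean matrix operations, as in this sketch; the expression vectors and overlap matrix here are synthetic placeholders for the published datasets.

```python
# Sketch of the connection rule: neuron i -> neuron j for a given
# peptide-receptor pair iff i expresses the peptide, j expresses the
# cognate receptor, and their processes overlap anatomically.
import numpy as np

rng = np.random.default_rng(2)
n = 302                                     # C. elegans neuron count
expresses_np = rng.random(n) < 0.2          # sender expresses the peptide
expresses_rec = rng.random(n) < 0.2         # receiver expresses the receptor
overlap = rng.random((n, n)) < 0.1          # processes overlap (placeholder)

adj = np.outer(expresses_np, expresses_rec) & overlap
print("connections for this peptide-receptor pair:", adj.sum())
```

Summing such adjacency matrices over all deorphanised pairs would give the aggregate neuropeptide connectome analysed with graph-theoretic methods.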




Results

Our analysis of the connectivity pattern revealed a mesoscale structure in the core of the network, splitting it into three groups of neurons that act as highly controlled functional hubs. Notably, inside these hubs we identified a group of neurons that seem to be morphologically and biochemically adapted for neuropeptidergic communication. Furthermore, the co-expression pattern identified autocrine neuropeptidergic connections that may modulate locomotion control, and evolutionarily conserved intracellular neuropeptide signalling networks that could act as homeostatic regulators of the neuropeptidergic network. This network has a higher connection density than the synaptic and gap junction networks, connecting neurons that are not synaptically connected [3].


Discussion

These findings challenge the idea that neuronal communication is primarily synaptic, revealing a dense, decentralised neuropeptide network with functional and structural roles. Additionally, conserved signalling patterns across Caenorhabditis species highlight the evolutionary significance of neuropeptide connectivity [4]. We expect that these newly mapped neuropeptide connectomes, their analysis, and the interactive website we developed to explore them (nemamod.org) will serve as a prototype for other animals and provide new insight into the structure of neuromodulatory networks in larger brains.





Acknowledgements
This work was funded by the Howard Hughes Medical Institute and the NIH grants R01 NS039996 & R01 NS100547 (to OH); the Medical Research Council grant MC-A023-5PB91 (to WRS); a Medical Research Council PhD fellowship (to LRS); the MQ Transforming Mental Health grant MGF17_24 (to PEV); and a postdoctoral fellowship from the Evelyn Gruss Lipper charitable foundation (to IAT).
References
1.https://doi.org/10.1016/j.cell.2021.06.023
2.https://doi.org/10.1016/j.celrep.2023.113058
3.https://doi.org/10.1016/j.neuron.2023.09.043

4.https://doi.org/10.1101/2024.11.23.624988
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P257: Basket cell computational modeling predicts signal filtering of Purkinje cell responses
Tuesday July 8, 2025 17:00 - 19:00 CEST
P257 Basket cell computational modeling predicts signal filtering of Purkinje cell responses

Martina F. Rizza*1, Stefano Masoli1, Teresa Soda1, Francesca Prestori1, Egidio D’Angelo1,2

1Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
2Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: martinafrancesca.rizza@unipv.it


Introduction
Cerebellar basket cells (BC), located in the bottom 1/3 of the molecular layer (ML), play an important role in controlling the activity of Purkinje cells (PC) via inhibitory synaptic transmission. BCs receive excitatory synaptic inputs from parallel fibers (pf) and transmit inhibitory synaptic inputs to BCs and PCs. We reconstructed a multi-compartmental, biophysically realistic BC model in Python-NEURON to investigate BC intrinsic and synaptic electrophysiological properties and their impact on PC model responses [1,2,3]. A stellate cell (SC) model [4] was included to reconstruct a ML microcircuit. Simulations predicted that BCs and SCs operate in tandem, setting the frequency band of PC transmission through the regulation of the PC frequency/response curve.


Methods
Starting from morphological reconstructions taken from cerebellar tissue and patch-clamp recordings, we implemented conductance-based multi-compartmental models of BCs with Python 3 and NEURON 8.2 [5]. The model's maximum ionic conductances were tuned to match the firing pattern revealed by whole-cell patch-clamp recordings from mouse cerebellar slices. Mouse SC [4] and BC models were connected with a multi-compartmental mouse PC model [1,2,3] to test their impact when stimulated by excitatory synaptic inputs. Simulations were performed on a 64-core AMD Threadripper 7980X using a fixed time step (0.025 ms) and a temperature of 32°C.
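For readers unfamiliar with these settings, the sketch below shows how the reported time step and temperature are set in NEURON's Python interface; the single-compartment cell with built-in HH channels is a stand-in for the full multi-compartmental BC model, and all values are illustrative.

```python
# Sketch: fixed 0.025 ms time step and 32 degC in NEURON from Python.
from neuron import h
h.load_file('stdrun.hoc')        # standard run system (h.run, h.tstop, ...)

soma = h.Section(name='soma')
soma.insert('hh')                # built-in Hodgkin-Huxley channels as stand-in

h.dt = 0.025                     # fixed integration step (ms)
h.celsius = 32.0                 # simulation temperature (degC)
h.tstop = 100.0                  # run duration (ms)

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 10.0, 80.0, 0.1   # nA

h.finitialize(-65)
h.run()
print("final Vm (mV):", soma(0.5).v)
```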

Results
Simulations reproduced whole-cell patch-clamp experimental results, showing autorhythmic activity, an almost linear I/O relationship for positive current injections, pauses generated after positive current injections, sag after negative current injections, and AMPA and NMDA receptor-mediated excitatory postsynaptic responses following pf inputs. SC and BC simulations showed their filtering properties on PC activity, highlighting that BCs modulate low-frequency PC discharges through somatic GABAergic synapses, while SCs act on high-frequency responses through dendritic GABAergic synapses.

Discussion
BC modeling reproduced the cellular intrinsic excitability and synaptic activity, allowing investigation of the frequency-dependent short-term dynamics at pf-BC synapses and the frequency dependence of BC input-output gain functions. Simulations predicted BC and SC filtering of PC responses, showing that the intensity and bandwidth of ML filtering are modulated by the number of active synapses between pfs-SCs-PCs and pfs-BCs-PCs. SCs and BCs emerge as critical elements controlling cerebellar processing in the time and frequency domains. Tuning of transmission bandwidth and delay through specific membrane and synaptic mechanisms contributes to explaining the role of SCs and BCs in motor learning and control.






Acknowledgements
This project/research received funding from the European Union’s Horizon Europe Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project) and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Framework Partnership Agreement No. 650003 (HBP FPA).
References
1.https://doi.org/10.3389/fncel.2015.00047
2.https://doi.org/10.1038/s42003-023-05689-y
3.https://doi.org/10.3389/fncel.2017.00278
4.https://doi.org/10.1038/s41598-021-83209-w
5.https://doi.org/10.3389/neuro.11.001.2009




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P258: Reconstruction and simulation of the mouse cerebellar declive: an atlas-based approach
Tuesday July 8, 2025 17:00 - 19:00 CEST
P258 Reconstruction and simulation of the mouse cerebellar declive: an atlas-based approach

Dimitri Rodarie1*, Dianela Osorio1, Egidio D’Angelo1,2, Claudia Casellato1



1Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
2Brain Connectivity Center IRCCS Mondino Foundation, Pavia, Italy
* Email: dimitri.rodarie@unipv.it


Introduction

We aim to reconstruct and simulate atlas-mapped mouse cerebellar regions, capturing the relationship between structure, dynamics, and function. Numerous experiments on rodents and humans show that the declive region (lobule VI) plays a relevant role in many functions, including motor, cognitive, emotional, and social tasks [1].
We present here a pipeline to reconstruct the mouse declive, based on the Blue Brain Cell Atlas (BBCA) model [2] and the Brain Scaffold Builder (BSB) tool [3]. With this pipeline, we could estimate the specific densities of each cell type. With the BSB we placed, oriented, and connected the neurons. The output of this pipeline is a circuit that can be simulated and validated against experimental findings.

Methods
We built a 3D model of the mouse declive (Fig. 1), based on the BBCA pipeline (Fig. 1DE), which we extended with the Purkinje layer at the boundary between granular and molecular layers (Fig. 1A). We placed cells based on the atlas and regional densities [4,5] and proposed a new strategy to place Purkinje layer cells based on linear density [6] (Fig. 1F). To connect the cells, we computed the orientations and depth [2] of each morphology (Fig. 1BC). These fields are used to bend the cells’ neurites following the declive curvature (Figure 1G). We applied voxel intersection on these bent cells with synaptic in- and out-degree ratios [3]. Finally, we assigned point-neuron electrical parameters to each cell and connection [7].
Results
We combined the workflows of the BBCA and the BSB into a single pipeline. This includes tools to align experimental data into an atlas, to reconstruct and to simulate cerebellar circuits. This allowed us to produce the most detailed model of the mouse declive.
We obtained new densities for each cell type of the cerebellum. Our model shows cell composition differences between cerebellar regions. We also estimated the impact of the declive shape on its local connectivity by comparing different sub-parts of the region with respect to a cubic canonical circuit.
Finally, we simulated our circuit using the BSB interfacing with the NEST simulator [8] in resting state and created a paradigm to reproduce fear conditioning experiments on mice.
Discussion
By combining the two pipelines to reconstruct our circuit, we are now able to leverage atlas data to estimate the spatial cellular composition in the cerebellum. The atlas registration will also facilitate the embedding of our model into larger brain circuits [9].
We also found that the cerebellum's finely parcellated layers, its curved shape, and its position within the mouse atlas make our model very sensitive to artifacts in the data (Figure 1DE). The model will be refined as more data become available.
We plan to reconstruct different subregions of the cerebellar cortex to compare their structure and function. Our future work will also involve mapping the different types of Purkinje neurons based on the “zebrin stripes” [10].



Figure 1. Fig 1: Reconstruction pipeline. A. Annotations shown in colors over the Nissl volume. B. Orientation field showing the local axons’ main axis. Colors represent the vectors’ norm. C. Distance to the outside border, following the orientation field. D. E. Neuron and inhibitory neuron density. F. Neuron positions displayed over annotations. G. Scaled and bent Purkinje morphologies over annotations.
Acknowledgements
Funding:

European Union's Horizon 2020 research and innovation program - Marie Sklodowska-Curie - grant 956414 Cerebellum and Emotional Networks

Virtual Brain Twin Project - European Union's Research and Innovation Program Horizon Europe - grant 101137289

National Centre for HPC, Big Data and Quantum Computing - CN00000013 PNRR MUR – M4C2 – Fund 1.4 - decree n. 3138 16 december 2021




References
1.https://doi.org/10.3389/fnsys.2023.1185752
2.https://doi.org/10.1371/journal.pcbi.1010739
3.https://doi.org/10.1038/s42003-022-04213-y
4.https://doi.org/10.1007/s00429-013-0531-9
5.https://doi.org/10.1523/JNEUROSCI.20-05-01837.2000
6.https://doi.org/10.1038/s41593-022-01057-x
7.https://doi.org/10.3389/fncom.2019.00068
8.https://doi.org/10.4249/scholarpedia.1430
9.https://doi.org/10.1523/ENEURO.0111-17.2017
10.https://doi.org/10.1038/nrn269


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P259: Local multi-gridding for detailed morphology, spines and synapses
Tuesday July 8, 2025 17:00 - 19:00 CEST
P259 Local multi-gridding for detailed morphology, spines and synapses

Cecilia Romaro*1, William W. Lytton2,3,4,5, Robert A. McDougal1,6,7,8

1Department of Biostatistics, Yale School of Public Health, New Haven, CT, United States
2Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, New York, United States
3Department of Neurology, SUNY Downstate Health Sciences University, Brooklyn, New York, United States
4Department of Neurology, Kings County Hospital Center, Brooklyn, New York, United States
5The Robert F. Furchgott Center for Neural and Behavioral Science, Brooklyn, New York, United States
6Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, CT, United States
7Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT, United States
8Wu Tsai Institute, Yale University, New Haven, CT, United States

*Email: cecilia.romaro@yale.edu

Introduction

The NEURON simulator (https://nrn.readthedocs.io) is one of the most widely used tools for simulating biophysically detailed neurons and networks [1]. In addition to electrophysiology simulations, NEURON has long supported multi-scale models incorporating intra- and extracellular chemical reaction-diffusion [2], in both 1D and 3D. Accurately simulating whole cells in 3D requires capturing both large regions like somas and small regions like spines. We demonstrate an algorithm in NEURON for achieving high-quality results with reasonable computational cost through local multi-gridding.


Methods
We extended NEURON's reaction-diffusion Region specification to support per-Section grid size specification. Sections with different grid sizes are independently discretized using NEURON's standard voxelization algorithm [3]. Small voxels are removed and/or added to produce a join with minimal voxel overlap. Neighboring voxels of different sizes are connected to allow molecules to diffuse between the grids. For ease of use, the model specification is in Python; for performance, coupling between grids and all simulation is done in C++.
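The key property of the grid coupling described above, that mass is conserved across the grid-size boundary, can be illustrated with a 1-D toy model; this is an illustration of the coupling principle under assumed parameters, not NEURON's actual implementation.

```python
# 1-D toy: two diffusion grids with different voxel sizes exchange mass
# through an interface flux, so total mass is conserved exactly.
import numpy as np

D, dt = 1.0, 1e-4
dx_a, dx_b = 0.1, 0.05                 # coarse and fine voxel sizes
a = np.zeros(50); b = np.zeros(100)
a[25] = 1.0 / dx_a                     # unit mass injected on the coarse grid

def diffuse(c, dx):
    # Conservative flux form with reflecting ends: what leaves one voxel
    # enters its neighbor, so each grid's interior exchange conserves mass.
    flux = D * (c[1:] - c[:-1]) / dx
    c[:-1] += dt * flux / dx
    c[1:] -= dt * flux / dx

for _ in range(20000):
    diffuse(a, dx_a)
    diffuse(b, dx_b)
    # interface: flux between the last coarse and first fine voxel
    j = D * (b[0] - a[-1]) / (0.5 * (dx_a + dx_b))
    a[-1] += dt * j / dx_a             # concentration change = flux / voxel size
    b[0] -= dt * j / dx_b

print("total mass:", a.sum() * dx_a + b.sum() * dx_b)   # stays ~1.0
```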


Results
Multigrid-voxelization overhead due to the editing and alignment of the grids is small but measurable. Mass is conserved when diffusing across the grid-size boundary; however, subtle numerical differences may arise because estimates of voxel volume and surface area depend on voxel size; implications for assessing convergence are discussed. Accuracy and performance are assessed for both simplified morphologies and detailed cell morphologies from NeuroMorpho.Org. Initialization and simulation are necessarily slower than for the coarse grid alone (though not slower than for the finest grid), but the time cost and accuracy improvements are highly dependent on the problem.

Discussion
Using multiple grid sizes for 3D reaction-diffusion simulation allows increased accuracy in small parts of the morphology or in regions of interest with moderate compute overhead. This approach preserves the regular sampling and easy convergence testing of NEURON's finite-volume integration. This numerical simulation method pairs naturally with ongoing work to import high-resolution neuron spine morphologies into NEURON models, with the spine and the dendrites simulated using different grids. Carefully chosen grid sizes have the potential to enable high-fidelity simulations combining chemical, electrical, and network activity with modest compute resources.




Acknowledgements
This research was funded by the National Institute of Mental Health, National Institutes of Health, grant number R01 MH086638. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
1.https://doi.org/10.3389/fninf.2022.884046
2.https://doi.org/10.3389/fninf.2022.847108
3.https://doi.org/10.1016/j.jneumeth.2013.09.011


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P260: Oscillatory activity patterns in a detailed model of the prefrontal cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P260 Oscillatory activity patterns in a detailed model of the prefrontal cortex

Antonio C. Roque*1, Marcelo R. S. Rempel1

1Department of Physics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP, Brazil

*Email: antonior@usp.br

Introduction

The prefrontal cortex (PFC) is a crucial brain region involved in executive functions, behavioral control, and affective modulation. PFC neurons exhibit distinct activity states, including asynchronous irregular firing during wakefulness and slow oscillations with UP/DOWN state transitions during deep sleep and anesthesia [1]. Previous computational models have investigated the mechanisms underlying these states, but many focus on general cortical networks or sensory cortices. This study aims to replicate and extend a detailed PFC network model to explore the conditions leading to oscillatory activity and UP/DOWN transitions.

Methods
A previously published PFC model [2] was reimplemented using Brian2, preserving its original parameters to ensure replication accuracy. Simulations were conducted to compare the original model with three parameter-modified variants. Variant A increased recurrent excitation, inducing hyperactive network fluctuations. Variant B intensified synaptic excitation, resulting in epileptiform-like bursting. Variant C introduced adaptation currents and stochastic external inputs, leading to oscillatory UP/DOWN transitions. Network activity was analyzed through spike raster plots, local field potential (LFP) estimation, and membrane potential dynamics.
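In the spirit of Variant C, the sketch below shows a minimal Brian2 network of leaky integrate-and-fire neurons with an adaptation current and stochastic drive; all parameters are illustrative and are not those of the published PFC model.

```python
# Minimal Brian2 sketch: adaptive LIF network with stochastic input.
from brian2 import *

defaultclock.dt = 0.1*ms
eqs = '''
dv/dt = (-(v - El) - w + I + sigma*sqrt(tau)*xi) / tau : volt
dw/dt = -w / tau_w : volt
'''
G = NeuronGroup(100, eqs, threshold='v > -50*mV', reset='v = El; w += b',
                method='euler',
                namespace=dict(El=-65*mV, I=16*mV, sigma=4*mV,
                               tau=20*ms, tau_w=200*ms, b=2*mV))
G.v = -65*mV
S = Synapses(G, G, on_pre='v_post += 0.3*mV')   # sparse recurrent excitation
S.connect(p=0.1)
M = SpikeMonitor(G)
run(2*second)
print("mean rate:", M.num_spikes / (100 * 2.0), "Hz")
```

Slow adaptation (tau_w) opposing the noisy drive is what produces alternation between active and quiescent epochs in this class of models.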
Results
The original model exhibited asynchronous irregular firing, consistent with physiological observations of cortical activity under moderate external drive. Variants A and B disrupted excitation-inhibition balance, promoting excessive synchrony. Variant C successfully generated low-frequency oscillations (~8 Hz) with UP/DOWN transitions, influenced by adaptive currents and external noise, mirroring previous findings in cortical dynamics.
Discussion
The results align with established models of cortical bistability and highlight the interplay between adaptation and external drive in shaping oscillatory states.




Acknowledgements
This work was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0). ACR is partially supported by a CNPq fellowship (grant 303359/2022-6).
References
[1] Wang, X. J. (2010). Neurophysiological and computational principles of cortical rhythms in cognition. Physiological Reviews, 90(3), 1195-1268. https://doi.org/10.1152/physrev.00035.2008
[2] Hass, J., Hertäg, L., & Durstewitz, D. (2016). A detailed data-driven network model of prefrontal cortex reproduces key features of in vivo activity. PLoS Computational Biology, 12, e1004930. https://doi.org/10.1371/journal.pcbi.1004930
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P261: Properties of intermittent synchrony of gamma rhythm oscillations
Tuesday July 8, 2025 17:00 - 19:00 CEST
P261 Properties of intermittent synchrony of gamma rhythm oscillations

Leonid L Rubchinsky*1,2, Quynh-Anh Nguyen3

1Department of Mathematical Sciences, Indiana University Indianapolis, Indianapolis, IN, USA
2Stark Neurosciences Research Institute, Indiana University School of Medicine, Indianapolis, IN, US
3Department of Mathematical Sciences, University of Indianapolis, Indianapolis, IN, USA


*Email: lrubchin@iu.edu
Introduction

Synchronization of oscillations of neural activity is thought to be important for a variety of neural phenomena. Most studies consider time-averaged measures of synchrony such as phase-locking strength. However, if two signals have some degree of phase locking, it is possible to explore synchrony properties beyond the average phase-locking strength and to study whether the oscillations are close to the synchronous state at any given time (during any oscillatory cycle) [1]. Thus, it is possible to characterize the temporal patterning of neural synchrony (e.g., many short desynchronizations vs. a few long desynchronizations), which may vary independently of the average synchrony strength [2].


Methods
To study how the properties of the temporal variability of synchronized oscillations are affected by the network properties, we consider populations of model neurons exhibiting pyramidal-interneuron gamma rhythm and apply the same time-series analysis techniques for characterization of temporal synchrony patterning as the ones used in the earlier experimental studies [1,2].
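A minimal sketch of this kind of analysis, extracting a per-sample synchronized/desynchronized label from instantaneous phases and then the distribution of desynchronization durations, is given below; the signals are synthetic and the phase-difference threshold is an assumed stand-in for the cycle-based criterion of [1].

```python
# Sketch: desynchronization-duration analysis of two oscillations.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 40 * t)
y = np.sin(2 * np.pi * 40 * t + 0.05 * np.cumsum(rng.standard_normal(t.size)))

# wrapped instantaneous phase difference
dphi = np.angle(np.exp(1j * (np.angle(hilbert(x)) - np.angle(hilbert(y)))))
synced = np.abs(dphi) < np.pi / 4              # "close to synchronous"

# durations of consecutive desynchronized stretches
edges = np.diff(np.concatenate(([0], (~synced).astype(int), [0])))
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
durations = (ends - starts) / fs * 40          # in 40-Hz oscillation cycles
mean_len = durations.mean() if len(durations) else 0.0
print(f"{len(durations)} desync episodes, mean length {mean_len:.1f} cycles")
```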


Results
Variation of synaptic strength affects the strength of time-averaged phase locking between the networks. However, this variation of synaptic strength also affects the temporal patterning of the synchrony, altering the distribution of the durations of the desynchronizations (similar to earlier studies of minimal models [3,4]). While synaptic strength affects both the synchrony level and its temporal patterning, these effects can be independent of each other: the former can be practically fixed, while the latter may vary. Furthermore, the impacts of the long-range and local synapses tend to be opposite. Shorter desynchronization durations tend to be achieved by weakening long-range synapses and strengthening local ones.


Discussion
Changes in the temporal patterning of the synchronization of oscillations may affect how networks process external signals [4,5]. Frequent vs. rare switching between synchronized and desynchronized dynamics may lead to functionally different outcomes even though the average synchrony level between the networks is the same. Synaptic strength changes thus have the potential to affect the responses of neural circuits not only via the average synchrony strength, but also via more subtle changes, such as altering the temporal patterning of synchronized dynamics, pointing to the potential importance of studying these phenomena.




Acknowledgements
References
1. Ahn, S., & Rubchinsky, L. L. (2013). Chaos, 23, 013138. https://doi.org/10.1063/1.4794793

2. Ahn, S., Zauber, S. E., Witt, T., et al. (2018). Clinical Neurophysiology, 129, 842-844. https://doi.org/10.1016/j.clinph.2018.01.063

3. Ahn, S., & Rubchinsky, L. L. (2017). Frontiers in Computational Neuroscience, 11, 44. https://doi.org/10.3389/fncom.2017.00044

4. Nguyen, Q. A., & Rubchinsky, L. L. (2021). Chaos, 31, 043133. https://doi.org/10.1063/5.0042451

5. Nguyen, Q. A., & Rubchinsky, L. L. (2024). Cognitive Neurodynamics, 18, 3821-3837. https://doi.org/10.1007/s11571-024-10150-9
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P262: Mechanisms of bistability in spinal motoneurons and its regulation
Tuesday July 8, 2025 17:00 - 19:00 CEST
P262 Mechanisms of bistability in spinal motoneurons and its regulation

Ilya A. Rybak*1, Yaroslav I. Molkov2, Thomas Stell2, Florent Krust3, Frédéric Brocard3
1 Department of Neurobiology and Anatomy, Drexel University College of Medicine,
Philadelphia, PA, USA
2 Department of Mathematics and Statistics and Neuroscience Institute, Georgia State
University, Atlanta, GA, USA
3 Institut de Neurosciences de la Timone, Aix Marseille University, CNRS, Marseille, France


*Email: rybak@drexel.edu


Introduction

Spinal motoneurons represent output elements of spinal circuitry that activate skeletal muscles to produce motor behaviors. The firing behavior of many motoneurons is characterized by bistability, allowing them to maintain self-sustained spiking activity initiated by a brief excitation and stopped by a brief inhibition. Serotonin can induce or amplify bistability, influencing motor behaviors. Biophysical mechanisms of bistability involve nonlinear interactions of specific ionic currents. Experimental studies identified ionic currents linked to bistability [1,2]. Using computational modeling, we simulate motoneuronal bistability and analyze the roles of key ionic currents in its generation and regulation.

Methods
We have developed a conductance-based mathematical model of a spinal motoneuron to explore and analyze the role of different ionic currents and their interactions in the generation and control of motoneuronal bistability under different conditions. The one-compartmental model includes the main spike-generating currents, fast sodium (INaF) and potassium rectifier (IKdr), as well as persistent sodium (INaP), slowly inactivating potassium (IKv1.2, aka potassium A, IKA), high-voltage activated calcium (ICaL), Ca2+-activated cation non-specific (ICAN), and Ca2+-dependent potassium (IKCa, associated with SK channels) currents. Additionally, the model incorporates intracellular Ca2+ dynamics, including the calcium-induced calcium release (CICR) mechanism.
Results
Our simulations show that bistability in motoneurons relies on ICAN, activated by intracellular Ca2+ accumulated by ICaL and the CICR mechanism. Two other currents play modulatory roles, with INaP augmenting bistability and IKCa attenuating or abolishing it. The interplay between ICAN and IKCa shapes the membrane potential dynamics, producing post-activation afterdepolarization (ADP) or afterhyperpolarization (AHP), with IKv1.2 modulating the membrane potential dynamics. Under certain conditions (such as an elevated extracellular K+ concentration), INaP can sustain bistability independently of ICAN.
Discussion
Our findings clarify the ionic basis of motoneuron bistability, underscoring its reliance on current interactions and external conditions, and offer insights into motor function and potential therapeutic strategies for motor disorders. Our results suggest that serotonin can induce or increase motoneuron bistability by amplifying ICAN (e.g., via increased intracellular Ca2+ concentration due to an increased ICaL or via 5-HT3 receptors), activation of INaP, or suppression of IKCa (both through 5-HT2 receptors).




Acknowledgements
References
● Harris-Warrick, R.M., Pecchi, E., Drouillas, B., Brocard, F., & Bos, R. (2024). Effect of size on expression of bistability in mouse spinal motoneurons. Journal of Neurophysiology, 131(4), 577-588. https://doi.org/10.1152/jn.00320.2023
● Bos, R., Drouillas, B., Bouhadfane, M., Pecchi, E., Trouplin, V., Korogod, S.M., & Brocard, F. (2021). Trpm5 channels encode bistability of spinal motoneurons and ensure motor control of hindlimbs in mice. Nature Communications, 12(1), 6815. https://doi.org/10.1038/s41467-021-27113-x



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P263: Enhancing Neuronal Modeling with a Modified Hodgkin-Huxley Approach for Ion Channel Dynamics
Tuesday July 8, 2025 17:00 - 19:00 CEST
P263 Enhancing Neuronal Modeling with a Modified Hodgkin-Huxley Approach for Ion Channel Dynamics

Batoul M. Saab*1, Jihad Fahs2, Arij Daou1

1Biomedical Engineering Program, American University of Beirut, Lebanon
2Department of Electrical and Computer Engineering, American University of Beirut


*Email: bms28@mail.aub.edu


Introduction
The development of precise physical models is imperative for comprehending and manipulating system behavior. Neuronal firing models serve as a pivotal exemplar of intricate biological modeling, crucial for unraveling neural functionality across both normal cognitive processes and pathological disease states. Achieving accurate dynamical modeling of neuronal firing necessitates the meticulous fitting of model parameters through data assimilation, utilizing experimentally gathered recordings. This endeavor poses significant theoretical challenges due to two primary factors: (a) neuronal action potentials are the aggregate result of active nonlinear dynamics interconnecting various neuronal compartments, parameterized by a multitude of unknown variables, and (b) the stochastic nature of the noisy environmental stimuli influencing neuronal activity.

Methods
In practice, the fitting of a substantial number of parameters is constrained by the scarcity of observable outputs (recording sites), the complexity of the underlying models, and the time-intensive and expensive nature of conducting experiments under controlled conditions [1]. While neurophysiologists are restricted to a limited range of feasible injection current waveforms, we propose herein to investigate the parameter estimation conundrum of model neurons using diverse quality metrics and processing techniques. Our approach involves optimizing a biophysically realistic model for these neurons [2] using intracellular data obtained via the whole-cell patch-clamp technique from basal-ganglia projecting cortical neurons in brain slices of zebra finches.
Results
Rather than following the approach of Hodgkin and Huxley [3], who in their seminal work fitted the opening and closing rate constants with exponential functions, we model the activation functions directly using Hill functions. Our approach provides additional flexibility and is biologically interpretable. Furthermore, using this modified model, we conduct exhaustive searches over a large subset of the model parameters and test different functional metrics to check which one(s) generate reliable and realistic fits to the biological data.
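As a toy illustration of the kind of fit this involves, the sketch below fits a standard Hill function to a synthetic steady-state activation curve; the variable shift and all data are assumptions for illustration, since the abstract does not specify the exact parameterization.

```python
# Sketch: fitting a steady-state activation curve with a Hill function.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, x_half, n):
    # Standard Hill function: 0 at x=0, 0.5 at x=x_half, saturating at 1.
    return x**n / (x**n + x_half**n)

# Synthetic "measured" activation vs. a shifted voltage variable (mV).
v = np.linspace(-80, 20, 60)
x = v - v.min() + 1e-9                   # shift so the argument is positive
rng = np.random.default_rng(4)
m_obs = hill(x, 45.0, 4.0) + 0.02 * rng.standard_normal(v.size)

(x_half, n), _ = curve_fit(hill, x, m_obs, p0=[30.0, 2.0])
print(f"fitted half-activation: {x_half:.1f}, Hill coefficient: {n:.2f}")
```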
Discussion
The long-term benefits of this approach include the capability to examine large-scale dynamic phenomena in insightful ways, enhancing model accuracy and streamlining experimentation time. By refining parameter estimation methods and employing biologically interpretable mathematical representations, we aim to improve our understanding of neuronal firing dynamics and provide a robust framework for future computational neuroscience research.






Acknowledgements
This work was supported by the University Research Board (URB) and the Medical Practice Plan (MPP) grants at the American University of Beirut.
References
1. https://doi.org/10.48550/arXiv.1609.00832
2. https://doi.org/10.1152/jn.00162.2013
3. https://doi.org/10.1113/jphysiol.1952.sp004764
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P264: Paths to depolarization block: modeling neuron dynamics during spreading depolarization events
Tuesday July 8, 2025 17:00 - 19:00 CEST
P264 Paths to depolarization block: modeling neuron dynamics during spreading depolarization events

Maria Luisa Saggio*1, Damien Depannemaecker1, Roustem Khazipov2,3, Daria Vinokurova2, Azat Nasretdinov2, Viktor Jirsa1, Christophe Bernard1


1 Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
2Laboratory of Neurobiology, Institute of Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia
3Aix-Marseille University, INMED, INSERM, Marseille, 13273, France

*Email: maria-luisa.saggio@univ-amu.fr


Introduction
Spreading Depolarization (SD) is a pathological state of the brain involved in several brain diseases, including epilepsy and migraine. It consists of a slowly propagating wave of nearly complete depolarization of neurons, classically associated with a depression of cortical activity. Recent findings challenge this classical association [1]: during SD events, which only partially propagate from the cortical surface to depth, neuronal activity may be suppressed, unchanged, or elevated. In layers invaded by SD, neurons lose their ability to fire, entering Depolarization Block (DB), while far from the SD neurons maintain their membrane potential. However, neurons in between unexpectedly displayed patterns of prolonged sustained firing.

Methods
In the present work [2], we build a phenomenological model, incorporating some key features observed during DB in this dataset (current-clamp patch-clamp recordings from L5 pyramidal neurons in the rat somatosensory cortex during evoked SDs), that can predict the new patterns observed. We model the L5 neuron as an excitable system close to a SNIC bifurcation [3], using the normal form of the unfolding of the degenerate Takens-Bogdanov singularity for the fast dynamics [4], a minimal yet dynamically rich dynamical system. The fast subsystem is modulated by the dynamics of two slow variables, implementing homeostatic and non-homeostatic reactions to inputs.
Results
The model’s bifurcation diagram provides a map for neural activity that includes baseline behavior, sustained oscillations, and DB. We identify five qualitatively different scenarios for the transition from healthy activity to DB, through specific sequences of bifurcations. These scenarios encompass and expand on the mechanisms for DB present in the modeling literature, account for the novel patterns observed in our dataset, and allow us to understand them from a unified perspective. Time series in our dataset are consistent with the scenarios; however, the presence of bistability, which distinguishes some of the scenarios, cannot be inferred from our analysis. We further use the model to investigate mechanisms for the return to baseline.
Discussion
Understanding how brain circuits enter and exit SD is important to designing strategies aimed at preventing or stopping it. In this work, we use modeling to gain mechanistic insights into the ways a neuron can transition to DB or different patterns of sustained oscillatory activity during SD events, as observed in our dataset. While our work provides a unified perspective to understanding the modeling of DB, ambiguities remain in the data analysis. These ambiguities could be solved by scenario-dependent theoretical predictions, for example for the effect of stimulation, for further experimental testing.




Acknowledgements
Funded by the Russian Science Foundation grant № 24-75-10054 to AN (https://rscf.ru/en/project/24-75-10054/) and the European Union grant № 101147319 to MS, DD and VJ.
References
[1] Nasretdinov, A., Vinokurova, D., Lemale, C. L., Burkhanova-Zakirova, G., Chernova, K., Makarova, J., ... & Khazipov, R. (2023). Diversity of cortical activity changes beyond depression during spreading depolarizations. Nature Communications, 14(1), 7729.
[2] Saggio et al (In preparation)
[3] Izhikevich, E. M. (2007). Dynamical systems in neuroscience. MIT press.
[4] Dumortier, F., Roussarie, R., & Sotomayor, J. (1991). Generic 3-parameter families of planar vector-fields, unfoldings of saddle, focus and elliptic-singularities with nilpotent linear parts.

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P265: Bifurcations and bursting in the Epileptor
Tuesday July 8, 2025 17:00 - 19:00 CEST
P265 Bifurcations and bursting in the Epileptor

Maria Luisa Saggio*1, Viktor Jirsa1

1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France

*Email: maria-luisa.saggio@univ-amu.fr

Introduction

Large-scale patient-specific models could provide clinicians with an additional tool for the evaluation of the best surgery strategy for drug-resistant epileptic patients, leveraging the possibility of revealing otherwise hidden complex network and dynamical effects, testing clinical hypotheses, or finding unbiased optimal surgery strategies. One of these frameworks, the Virtual Epileptic Patient (VEP), is currently undergoing validation in a prospective clinical trial involving over 300 patients. It employs the Epileptor [1], a phenomenological mesoscopic model for the most common seizure class, in which the onset/offset transitions between interictal and ictal states are modeled as bifurcations.


Methods
The Epileptor onset/offset bursting class is known in the dynamical systems literature as square-wave bursting. In this study [2], we utilize insights from a more generic model for square-wave bursting, based on the unfolding of a high-codimension singularity, to guide the bifurcation analysis of the Epileptor and gain a deeper understanding of the model and the role played by its parameters. We use analytical methods, numerical continuation of bifurcation curves, and model simulations.
Results
We identify a key region in parameter space of topological equivalence between the two models and demonstrate how the Epileptor's parameters can be modified to produce activities for other seizure classes, as predicted by the generic model approach. Finally, we reveal how the interaction with an additional mechanism for spike-and-wave discharges present in the Epileptor alters the bifurcation structure of the main burster, pushing it across a sequence of supercritical Hopf bifurcations that modulate the oscillatory activity typical of the ictal state.
Discussion
Exploring the full potential of the Epileptor model in terms of bursting dynamics and understanding how to set the parameters to obtain different classes is an important step to (i) enhance our understanding of the model at the core of the VEP framework and (ii) explore the possibility of further personalizing the VEP model. In fact, patients may experience seizures compatible with classes other than square-wave [3]. While the impact of the class on the VEP outcome has not yet been investigated, we know that different classes may exhibit variations in synchronization and propagation properties, warranting further exploration.





Acknowledgements
This research has received funding from EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project).
References
[1] Jirsa, V. K., Stacey, W. C., Quilichini, P. P., Ivanov, A. I., & Bernard, C. (2014). On the nature of seizure dynamics. Brain, 137(8), 2210-2230.
[2] Saggio, M. L., & Jirsa, V. (2024). Bifurcations and bursting in the Epileptor. PLOS Computational Biology, 20(3), e1011903.
[3] Saggio, M. L., Crisp, D., Scott, J. M., Karoly, P., Kuhlmann, L., Nakatani, M., ... & Stacey, W. C. (2020). A taxonomy of seizure dynamotypes. eLife, 9, e55632.


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P266: Two distinct bursting mechanisms in cold-sensitive neurons of Drosophila Larva
Tuesday July 8, 2025 17:00 - 19:00 CEST
P266 Two distinct bursting mechanisms in cold-sensitive neurons of Drosophila Larva

Akira Sakurai, Natalia V. Maksymchuk, Sergiy M. Korogod, Daniel N. Cox, Gennady S. Cymbalyuk*
Neuroscience Institute, Georgia State University, Atlanta, GA 30302-5030, USA

*Email: gcymbalyuk@gmail.com


Introduction
In Drosophila larvae, noxious low temperatures are detected by CIII primary sensory neurons lining the inside of the body wall [1,2]. About half of these neurons respond to rapid temperature drops with transient bursts, producing a clear spike-rate peak that likely signals the rapid change. Previously, we developed a biophysical model, which captured various extracellularly recorded cold-evoked CIII responses [2]. Here, having overcome the challenge posed by the small size of CIII neurons and obtained intracellular recordings, we used the waveforms of bursting to identify two distinct types of bursting generated by these neurons.
Methods
We used electrophysiological intracellular and extracellular recordings and modeling to investigate the mechanisms underlying pattern generation by CIII neurons. We upgraded the model [2] by including dynamics of the concentrations of Cl-, Na+, and K+, since the Ca2+-activated Cl- current (ICaCl) was implicated in CIII dynamics [3]. We investigated the patterns caused by injected current, a drop in extracellular Cl-, and a drop of temperature. We also considered a simplified model with an effective (e) leak current in which Cl- currents are lumped together with Na+ and K+ leak currents. We map oscillatory and silent regimes under variation of EeLeak and geLeak and compare the model activity to the experimental data.
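The regime-mapping step can be sketched as a sweep over the two leak parameters with a simple activity classifier; the `simulate` function below is a hypothetical stand-in for the CIII model, and the burst criterion (an ISI coefficient-of-variation threshold) is an assumed heuristic.

```python
# Sketch: classify (E_eLeak, g_eLeak) grid points as silent/spiking/bursting.
import numpy as np

def classify(spike_times):
    if len(spike_times) < 3:
        return "silent"
    isi = np.diff(spike_times)
    cv = isi.std() / isi.mean()          # high ISI variability suggests bursts
    return "bursting" if cv > 1.0 else "spiking"

def simulate(E_leak, g_leak):
    # Placeholder: the real model would integrate the CIII conductances
    # and return spike times for these leak parameters.
    rng = np.random.default_rng(int(1000 * g_leak + E_leak))
    return np.cumsum(rng.exponential(0.1, size=rng.integers(0, 50)))

for E in np.linspace(-80, -40, 5):
    row = [classify(simulate(E, g)) for g in np.linspace(0.1, 1.0, 5)]
    print(f"E_eLeak={E:5.1f}: {row}")
```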
Results
At ambient temperatures, CIII neurons exhibited a stationary state around -40 mV and sporadic spikes at 1.0 ± 1.3 Hz (N = 20). In the activity of 90% of sporadically spiking neurons, elliptic bursts with an intra-burst spike frequency of 6.0 ± 1.7 Hz were detected. With a temperature drop from 24°C to 10°C, CIII neurons depolarized and spiked at 2.9 ± 1.5 Hz. In 45% of neurons, square-wave bursts with an intra-burst spike frequency of 38.2 ± 19.5 Hz were observed. Similar square-wave bursting and high-frequency spiking were induced by direct depolarizing injected currents. Low-Cl⁻ conditions induced transitions between patterns of activity dominated by spiking, fast bursting, or slow bursting.
The model reproduces waveform properties of the experimentally recorded bursting under variation of injected current, extracellular Cl-, and temperature. We found large parameter domains of silent and spiking regimes at low and high EeLeak, respectively, and a domain of square-wave bursting in an intermediate range of geLeak and EeLeak. In a certain range of geLeak, as EeLeak grows, the model transitions from silence to elliptic bursting and then to spiking. These transitions qualitatively map onto the transitions observed in the experimental data.
Conclusion

We identified two distinct types of bursting patterns, elliptic bursting and square-wave bursting, in the responses of CIII neurons. These findings enhance our understanding of temperature sensing in insect peripheral sensory neurons, providing insights into how sensory systems respond to environmental stimuli.



Acknowledgements
NIH grant R01NS115209 to DNC and GSC.
References
1. Turner, H. N., et al. (2016). Current Biology, 26(23), 3116-3128. https://doi.org/10.1016/j.cub.2016.09.038
2. Maksymchuk, N., et al. (2022). Frontiers in Cellular Neuroscience, 16, 831803. https://doi.org/10.3389/fncel.2022.831803
3. Himmel, N. J., et al. (2023). eLife, 12, e76863. https://doi.org/10.7554/eLife.76863
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P267: Real-time closed-loop perturbation of electrically coupled neurons to characterize sequential dynamics in CPG circuits
Tuesday July 8, 2025 17:00 - 19:00 CEST
P267 Real-time closed-loop perturbation of electrically coupled neurons to characterize sequential dynamics in CPG circuits

Pablo Sanchez-Martin*¹, Alicia Garrido-Peña¹, Manuel Reyes-Sanchez, Irene Elices¹, Rafael Levi¹, Francisco B Rodriguez¹, Pablo Varona¹
1. Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politecnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
*Email: pablo.sanchezm@uam.es

Introduction
Dynamical invariants, in the form of robust cycle-by-cycle relationships between the intervals that build neural sequences, have been observed recently in central pattern generator circuits (CPGs) [1]. In this study, we analyze the effect of different closed-loop perturbations on electrically coupled neurons that are part of a CPG to determine the associated modulation of sequence interval variability, synchronization, and dynamical invariants.


Methods
This research was performed in the pyloric CPG, involving both voltage recordings and current injection in the PD neurons, which are electrically coupled cells in this circuit. Additionally, we recorded extracellularly from the LP neuron to quantify the LP-PD delay, an interval that builds a dynamical invariant with the cycle-by-cycle period. We implemented an active electrical compensation procedure [2] in the RTXi real-time software, which prevents the recording artifact when using a single electrode. Three closed-loop perturbations were delivered to the PD neurons: 1. A Hindmarsh-Rose (HR) model neuron electrically coupled to a PD neuron, thus building a biohybrid circuit. 2. A square pulse current injection during the PD burst. 3. An additional artificial electrical synapse between the two PD neurons.



Results
The electrical coupling with a negative artificial bidirectional synapse did not change the existing invariant relation between the LP-PD delay and the period, but it increased the rhythm variability and the Victor-Purpura distance, i.e., it reduced the PD synchronization level. The square pulse perturbation decreased the variability, and thus the LP-PD delay linear relationship was reduced. The level of synchronization between both PDs was also reduced with the pulse perturbation with respect to the control. The biohybrid circuit built by adding an additional electrical coupling to an artificial HR neuron also reduced the variability but changed the intercept of the linear relationship, i.e., for the same LP-PD delays the PD period was shorter.
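The Victor-Purpura distance used above as a synchronization measure is the minimal cost of transforming one spike train into the other; a sketch of the standard dynamic-programming computation follows, with spike times and the cost parameter q chosen purely for illustration.

```python
# Sketch: Victor-Purpura spike-train distance. Moving a spike costs
# q*|dt|; deleting or inserting a spike costs 1. Lower = more similar.
import numpy as np

def victor_purpura(s1, s2, q):
    n, m = len(s1), len(s2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)      # delete all spikes of s1
    G[0, :] = np.arange(m + 1)      # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                                 # delete
                          G[i, j - 1] + 1,                                 # insert
                          G[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))  # shift
    return G[n, m]

a = np.array([0.10, 0.35, 0.62, 0.90])   # spike times (s), illustrative
b = np.array([0.12, 0.40, 0.95])
print(victor_purpura(a, b, q=10.0))
```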


Discussion
In this study, we effectively disrupted the dynamics of two electrically coupled neurons with three different perturbations, injecting currents that modulated the synchronization level. This not only modified the dynamics of these neurons but also the variability of the whole circuit and the associated dynamical invariants. All protocols proved effective for studying the relationship between electrical coupling and sequential dynamics with the help of real-time closed-loop neurotechnologies.




Acknowledgements
Work funded by PID2024-155923NB-I00, CPP2023-010818, PID2023-149669NB-I00 and PID2021-122347NB-I00.
References
[1] I. Elices, R. Levi, D. Arroyo, F. B. Rodriguez, and P. Varona. Robust dynamical invariants in sequential neural activity. Scientific Reports, 9(1):9048, 2019.
[2] R. Brette, Z. Piwkowska, C. Monier, M. Rudolph-Lilith, J. Fournier, M. Levy, and A. Destexhe. High-resolution intracellular recordings using a real-time computational model of the electrode. Neuron, 59(3):379–391, 2008.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P268: Modelling Nitric Oxide Diffusion and Plasticity Modulation in Cerebellar Learning
Tuesday July 8, 2025 17:00 - 19:00 CEST
P268 Modelling Nitric Oxide Diffusion and Plasticity Modulation in Cerebellar Learning

Carlo Andrea Sartori1*, Alessandra Maria Trapani1, Benedetta Gambosi1, Alessandra Pedrocchi1, Alberto Antonietti1

1 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy

* Email: carloandrea.sartori@polimi.it
Introduction

Nitric Oxide (NO) is an important molecule in processes such as synaptic plasticity and memory formation [1]. In the cerebellum, NO is produced by neuronal NO Synthase expressed in Granule Cells and Molecular Layer Interneurons [2]. NO diffuses freely in tissue beyond synaptic connections, functioning as a volume neurotransmitter. At parallel fiber-Purkinje Cell (pf-PC) synapses [4,5], NO is necessary but not sufficient for both Long Term Potentiation and Depression [6,7]. This study investigates the role of NO in cerebellar learning mechanisms using a biologically realistic Spiking Neural Network, implementing a NO-dependent plasticity model and testing it with an Eye-Blink Classical Conditioning (EBCC) protocol [8,9].


Methods
We developed the NO Diffusion Simulator (NODS), a Python module modeling NO production and diffusion within a Spiking Neural Network. The model represents the chemical cascade triggered by calcium influx during spikes, leading to NO production [10]. NO diffusion is modeled using the heat diffusion equation with an inactivation term, solved with Green's function [11]. We implemented a NO-dependent supervised Spike-Timing Dependent Plasticity [12] in which a term weights synaptic updates based on NO concentration. The model was tested using the EBCC protocol, where the cerebellum learns to associate a Conditioned Stimulus (CS) with an Unconditioned Stimulus (US), generating anticipatory Conditioned Responses (CR) (Fig. 1).
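The following is a minimal sketch of the Green's-function solution described above: NO released at each spike diffuses according to the heat equation and decays through a first-order inactivation term, and the contributions of successive spikes superpose. The parameter values (diffusion coefficient, inactivation rate, per-spike NO amount) are illustrative assumptions, not NODS's settings.

import numpy as np

D = 3.3       # NO diffusion coefficient (um^2/ms), illustrative
lam = 1e-3    # inactivation rate (1/ms), illustrative
Q = 1.0       # NO produced per spike (arbitrary units)

def no_concentration(r_um, t_ms, spike_times):
    """Superposed Green's functions: diffusion from a point source at each
    spike time, multiplied by first-order inactivation."""
    c = 0.0
    for ts in spike_times:
        dt = t_ms - ts
        if dt <= 0:
            continue
        g = Q / (4.0 * np.pi * D * dt) ** 1.5 * np.exp(-r_um**2 / (4.0 * D * dt))
        c += g * np.exp(-lam * dt)
    return c

# NO "seen" by a pf-PC synapse 5 um from a source firing at 4 Hz
spikes = np.arange(0.0, 1000.0, 250.0)
print(no_concentration(5.0, 600.0, spikes))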
Results
We first validated the equation in NODS against single-source NO production simulated with the NEURON simulator [13]. Then we investigated the effect of NO on cerebellar learning through the addition of different background noises. In principle, the incoming CS and US stimuli should exert a depression only at the pf-PC synapses active right before the US stimulus. By adding increasing noise, these learning processes become directly impaired. When including NO-dependent plasticity, we can highlight a different behavior during a CS and 4 Hz stimulation: only the pf-PC synapses receiving the CS stimuli have sufficient NO for plasticity, while the ones randomly activated by noise remain under threshold.
Discussion
The results demonstrate that NO interaction significantly affects synaptic plasticity, dynamically adjusting learning rates based on synaptic activity patterns. This mechanism enhances the cerebellum's capacity to prioritize relevant inputs and mitigate learning interference by selectively modulating synaptic efficacy. Our results show that NO could act as a noise filter, focusing learning in the cerebellum only on the inputs relevant to the ongoing task. The NODS implementation connects molecular processes with learning at the level of large spiking neural networks. This work underscores the critical role of NO in cerebellar function and offers a robust framework for exploring NO-dependent plasticity in computational neuroscience.





Figure 1. Spiking neural network with NODS mechanism. (A) SNN of the cerebellum microcircuit, with the different populations and detail of CS, US and Background Noise stimuli. (B) One trial of the EBCC protocol with timing of the stimuli. (C) The NO production mechanism at a single synapse. (D) NO as volume transmitter at different pf-PC synapses.
Acknowledgements
This research is supported by Horizon Europe Program for Research and Innovation under Grant Agreement No. 101147319 (EBRAINS 2.0). The simulations in NEURON were implemented by Stefano Masoli, Department of Brain and Behavioral Sciences, Università di Pavia, Pavia, Italy.
References
1. https://doi.org/10.1126/science.1470903
2. https://doi.org/10.1016/s0896-6273(00)80340-2
3. https://doi.org/10.1523/JNEUROSCI.4064-13.2014
4. https://doi.org/10.1074/jbc.M111.289777
5. https://doi.org/10.1016/0006-2952(89)90403-6
6. https://doi.org/10.1073/pnas.122206399
7. https://doi.org/10.1016/j.celrep.2016.03.004
8. https://doi.org/10.3389/fnsys.2022.919761
9. https://doi.org/10.3389/fninf.2018.00088
10. https://doi.org/10.1016/j.niox.2009.07.002
11. https://doi.org/10.3389/fninf.2019.00063
12. https://doi.org/10.1109/TBME.2015.2485301
13. https://doi.org/10.1007/978-3-319-65130-9_9
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P269: Modeling Calcium-Mediated Spike-Timing Dependent Plasticity in Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P269 Modeling Calcium-Mediated Spike-Timing Dependent Plasticity in Spiking Neural Networks

Francesco De Santis1*, Carlo Andrea Sartori1*, Leo Cottini1, Riccardo Mainetti1, Matteo Maresca1, Alessandra Pedrocchi1, Alberto Antonietti1

1 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy

* Email: francesco.desantis@polimi.it, carloandrea.sartori@polimi.it
Introduction

Calcium dynamics serve as a bridge between neuronal activity and synaptic plasticity, orchestrating the biochemical cascades that determine synaptic strengthening (LTP) or weakening (LTD) [1]. Extending the work of Graupner and Brunel [2], Chindemi and colleagues recently introduced a data-constrained model of plasticity based on postsynaptic calcium dynamics in the neocortex [3]. The model was developed for NEURON simulations, capturing diverse plasticity dynamics with a single parameter set across pyramidal cell types. In this work, we translated Chindemi's model to a spiking neural network by implementing a point neuron model and a unified synapse, testing it across various calcium-concentration scenarios.

Methods
We developed our model using NESTML [4], an open-source language integrated with the NEST simulator [5], enabling the application of our models to diverse neural networks. The implemented neuron was built upon the existing Hill-Tononi (HT) model, which already incorporates detailed NMDA and AMPA conductance dynamics [6]. As in Chindemi, the synapse was instead based on the Tsodyks-Markram (TM) stochastic synapse model [7], allowing manipulation of vesicle release probability. Following paired pre- and postsynaptic activity, calcium-dependent processes influence synaptic efficacy on both sides. Our implementation extends these established components to create a comprehensive framework that captures the relationship between calcium dynamics and synaptic plasticity while maintaining computational efficiency for network-scale simulations.
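For orientation, the sketch below implements a Graupner-Brunel-style calcium-based plasticity rule of the kind being translated here: calcium jumps on delayed presynaptic spikes (a stand-in for NMDA) and on postsynaptic spikes (a stand-in for VDCCs), and the synaptic efficacy drifts toward potentiation or depression whenever calcium exceeds the corresponding threshold. This is a generic illustration with made-up parameters, not the authors' NESTML code.

import numpy as np

dt, T = 0.1, 500.0                  # ms
tau_ca, C_pre, C_post, delay = 20.0, 1.0, 2.0, 10.0
theta_d, theta_p = 1.0, 1.3         # LTD / LTP calcium thresholds
gamma_d, gamma_p, tau_rho = 200.0, 320.0, 150e3

pre_spikes, post_spikes = [100.0], [110.0]   # a +10 ms pre-before-post pairing

ca, rho = 0.0, 0.5
for i in range(int(T / dt)):
    t = i * dt
    ca -= ca / tau_ca * dt
    if any(abs(t - (sp + delay)) < dt / 2 for sp in pre_spikes):
        ca += C_pre                  # NMDA-like calcium influx (delayed pre spike)
    if any(abs(t - sp) < dt / 2 for sp in post_spikes):
        ca += C_post                 # VDCC-like calcium influx (post spike)
    drho = (-rho * (1 - rho) * (0.5 - rho)          # bistable drift
            + gamma_p * (1 - rho) * (ca > theta_p)  # potentiation above theta_p
            - gamma_d * rho * (ca > theta_d)) / tau_rho
    rho += drho * dt
print("final efficacy:", rho)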
Results
We first validated our model of the TM stochastic synapse paired with HT modifications to account for calcium currents in the postsynaptic neuron. Then, we connected two neurons and stimulated either the pre- or postsynaptic neuron directly, creating NMDA and VDCC calcium currents, respectively. Next, we tested the paired activation of pre- and postsynaptic neurons at varying time intervals. The results of these simulations are comparable with those of Chindemi et al. Finally, we adjusted LTD and LTP thresholds to match calcium signal properties of pyramidal neurons across different cortical layers. Our simpler point neuron model successfully replicated findings obtained with multicompartmental models while maintaining computational efficiency.
Discussion
Our work implements calcium-dependent plasticity in an efficient model for spiking neurons. We validated that our point neuron approach reproduces the complex calcium dynamics and plasticity outcomes across different stimulation patterns. By maintaining the ability to capture layer-specific plasticity with adjusted LTP/LTD thresholds, we preserve biological accuracy while reducing computational demands. Our efficient implementation of calcium-dependent plasticity can enable large-scale spiking neural network simulations to study how synaptic mechanisms affect network function.



Acknowledgements
The work of AA, AP, CAS, and FDS in this research is supported by the Horizon Europe Programme for Research and Innovation under Grant Agreement No. 101147319 (EBRAINS 2.0) and EBRAINS-Italy (European Brain ReseArch INfrastructureS-Italy), granted by the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union – NextGenerationEU (Project IR0000011, CUP B51E22000150006).

References
1. https://doi.org/10.1016/S0959-4388(99)80045-2
2. https://doi.org/10.1073/pnas.1109359109
3. https://doi.org/10.1038/s41467-022-30214-w
4. https://doi.org/10.5281/zenodo.12191059
5. https://doi.org/10.4249/scholarpedia.1430
6. https://doi.org/10.1152/jn.00915.2004
7. https://doi.org/10.1073/pnas.94.2.719
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P270: Subthreshold extracellular electric fields alter how neurons respond to naturally occurring synaptic inputs in temporal interference stimulation
Tuesday July 8, 2025 17:00 - 19:00 CEST
P270 Subthreshold extracellular electric fields alter how neurons respond to naturally occurring synaptic inputs in temporal interference stimulation

Ieva Kerseviciute1, Michele Migliore2, Rosanna Migliore2, Ausra Saudargiene*3, Adam Williamson4

1The Life Sciences Center, Vilnius University, Vilnius, Lithuania
2Institute of Biophysics, National Research Council, Palermo, Italy
3Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
4St. Anne’s University Hospital, Brno, Czech Republic

*Email: ausra.saudargiene@lsmu.lt

Introduction



Temporal interference (TI) stimulation enables noninvasive and spatially selective neuromodulation of deep brain structures [1,2]. This approach exploits the nonlinear response of neurons to electric fields by delivering multiple kHz-range oscillations, which interfere and generate an effective low-frequency envelope only at the target site [1,2]. This mechanism allows for selective activation of deep neuronal populations without affecting the overlying tissue. Recent studies have successfully applied this stimulation to the human hippocampus, showing significant effects on memory function [3, 4]. Despite its potential for clinical applications, the neural mechanisms underlying TI-induced effects remain poorly understood.

Methods

We used a biophysically accurate computational neuron model to investigate how subthreshold electric fields influence neural activity in CA1 hippocampal pyramidal neurons. These neurons receive inputs from Schaffer collaterals, known to play an integral role in memory formation. To replicate this connectivity, we implemented AMPA and NMDA synapses at the proximal apical dendrites, with synaptic activity driven by hippocampal CA3 activity recorded in vivo. The model neuron was placed in a uniform electric field, simulating the effects of an externally applied field between two conducting plates.
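For reference, the sketch below computes the temporal-interference field introduced above: two kHz-range carriers sum to a waveform whose envelope beats at the difference frequency, which is the low-frequency signal delivered at the target site. Frequencies and amplitudes are illustrative, not the study's stimulation parameters.

import numpy as np

f1, f2 = 2000.0, 2010.0        # carrier frequencies (Hz) -> 10 Hz envelope
A1 = A2 = 1.0                  # field amplitudes (V/m)
t = np.arange(0.0, 0.5, 1e-5)  # time (s)

field = A1 * np.sin(2 * np.pi * f1 * t) + A2 * np.sin(2 * np.pi * f2 * t)
# analytic envelope of the summed field (for equal amplitudes):
envelope = 2 * A1 * np.abs(np.cos(np.pi * (f1 - f2) * t))
print("envelope beats at", abs(f1 - f2), "Hz; max field:", field.max(), "V/m")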

Results

Consistent with previously published modelling results [4], we observed that the electric field strength required to elicit action potentials grew with increasing carrier frequency. The subthreshold field strength also depended on the orientation of the model neuron in the electric field, requiring a higher amplitude when the neuron was perpendicular rather than parallel to the field direction. Following a long-term potentiation (LTP) induction protocol, the subthreshold stimulation affected the synaptic weight distribution by altering spike timing, firing frequency, and inter-spike interval patterns. A similar effect was observed with naturally occurring synaptic inputs.

Discussion

In summary, our model shows that subthreshold electric fields alter how neurons respond to naturally occurring synaptic inputs by affecting underlying long-term synaptic plasticity processes. The impact of TI on synaptic plasticity may underlie its effects on memory enhancement, observed in human experiments. The stimulation efficacy is partly determined by the neuron orientation in the electric field, as not all neurons are affected equally. Since our study focuses on single-neuron processes, further research is needed to explore network-level effects.





Acknowledgements




We acknowledge a contribution from the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union – NextGenerationEU (Project IR0000011, CUP B51E22000150006, "EBRAINS-Italy"), and support from EU HORIZON-INFRA-2022-SERV-B-01, project 101147319 — EBRAINS 2.0.





References


[1] https://doi.org/10.1016/j.cell.2017.05.024
[2] https://doi.org/10.1126/science.aau4915
[3] https://doi.org/10.1038/s41593-023-01456-8
[4] https://doi.org/10.1101/2024.12.05.24303799
Speakers

Rosanna Migliore

Researcher, Istituto di Biofisica - CNR
Computational Neuroscience, EBRAINS-Italy Research Infrastructure for Neuroscience, https://ebrains-italy.eu/
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P271: Accelerated cortical microcircuit simulations on massively distributed memory
Tuesday July 8, 2025 17:00 - 19:00 CEST
P271 Accelerated cortical microcircuit simulations on massively distributed memory

Catherine M. Schoefmann*1,2, Jan Finkbeiner1,2, Susanne Kunkel1


1Neuromorphic Software Ecosystems (PGI-15), Juelich Research Centre, Juelich, Germany
2RWTH Aachen University, Aachen, Germany

*Email: c.schoefmann@fz-juelich.de
Introduction
Comprehensive simulation studies of dynamical regimes of cortical networks with realistic synaptic densities depend on compute systems capable of running such models significantly faster than biological real time. Since CPUs are still the primary target for established simulators, an inherent bottleneck caused by the von Neumann design is frequent memory access with minimal compute. Distributed memory architectures, popularized by the need for massively parallel and scalable processing for AI workloads, offer an alternative.

Methods
We introduce extensible simulation technology for spiking networks on massively distributed memory using Graphcore's IPUs (https://www.graphcore.ai). We demonstrate the efficiency of the new technology based on simulations of the microcircuit model of [1], commonly used as a reference benchmark. The model represents 1 mm² of cortical tissue, spanning around 300 million synapses, and is considered a building block of cortical function. Spike dynamics are statistically verified by comparison with the same simulations run on CPU with NEST [2].
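A minimal sketch of the kind of statistical verification described, comparing per-neuron firing-rate distributions from the two backends with a two-sample Kolmogorov-Smirnov test; the synthetic sender arrays and the helper function are stand-ins, not the authors' pipeline.

import numpy as np
from scipy.stats import ks_2samp

def rates(spike_senders, n_neurons, t_sim_s):
    """Per-neuron firing rates from an array of spike sender ids."""
    return np.bincount(spike_senders, minlength=n_neurons) / t_sim_s

rng = np.random.default_rng(0)
n = 77_169  # neuron count of the full-scale microcircuit model
# stand-ins for recorded sender ids from the IPU and NEST runs
senders_ipu = rng.integers(0, n, size=500_000)
senders_nest = rng.integers(0, n, size=500_000)

stat, p = ks_2samp(rates(senders_ipu, n, 10.0), rates(senders_nest, n, 10.0))
print(f"KS statistic {stat:.3f}, p = {p:.3f}")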

Results
We present a custom semi-directed communication algorithm especially suited for distributed and constrained memory environments, which allows a controlled trade-off between performance and memory usage. Our simulation code achieves an acceleration factor of 15x compared to real time for the full-scale cortical microcircuit model on the smallest device configuration capable of fitting the model in memory. This is competitive with the current record performance on a static FPGA cluster[3], and further speedup can be achieved at the cost of lower precision weights.

Discussion
With negligible compilation times, the simulation code can be extended seamlessly to a wide range of synapse and neuron models, as well as structural plasticity, unlocking a new class of models for extensive parameter-space explorations in computational neuroscience. Furthermore, we believe that our algorithm for scalable and parallelisable communication can be efficiently applied to different platforms.
Acknowledgements
The presented conceptual and algorithmic work is part of our long-term collaborative project to provide the technology for neural systems simulations (https://www.nest-initiative.org).
Compute time on a Graphcore Bow Pod64 has been granted by Argonne Leadership Computing Facility (ALCF).
This work is partly funded by Volkswagen Foundation.
References
[1] https://doi.org/10.1093/cercor/bhs358
[2] https://doi.org/10.5281/ZENODO.12624784
[3] https://doi.org/10.3389/fncom.2023.1144143


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P272: Modeling unsigned temporal difference errors in apical dendrites of L5 neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P272 Modeling unsigned temporal difference errors in apical dendrites of L5 neurons

Gwendolin Schoenfeld1,2,3, Matthias C. Tsai*1,4, Walter Senn4, Fritjof Helmchen1,2,3

1Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
2Neuroscience Center Zürich, University of Zurich and ETH Zurich, Zurich, Switzerland
3University Research Priority Program (URPP), Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
4Computational Neuroscience Group, Department of Physiology, University of Bern, Bern, Switzerland

*Email: tsai@hifo.uzh.ch

Introduction

Learning goal-directed behavior requires the association of salient sensory stimuli with behaviorally relevant outcomes. In the mammalian neocortex, dendrites of pyramidal neurons are suitable association sites, but how their activities adapt during learning remains elusive. Computation-driven theories of cortical function have conjectured that apical dendrites should encode error signals [1,2]. However, little biological evidence has been found to support these proposals. Therefore, we propose a biology-driven approach instead and attempt to explain the function of bottom-up and top-down integration in a model of pyramidal neurons based on experimentally observed apical tuft responses in the sensory cortex during learning.

Methods
We track calcium transients in apical dendrites of layer 5 pyramidal neurons in mouse barrel cortex during texture discrimination learning [3]. Based on this experimental data, we implement a computational model (Fig 1a) incorporating: top-down signals encoding the unsigned temporal difference (TD) error [4], bottom-up signals encoding sensory information, multiplicative gain modulation of firing rates by apical tuft activity, and a local associative plasticity rule comparing top-down signals and somatic firing to dictate apical synapse plasticity. Finally, we test the relevance of apical tuft activity by inhibiting apical tufts during reward and punishment both in our model and experimentally using optogenetics (Fig 1b).
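For illustration, the sketch below assembles toy versions of the named ingredients: a TD(0)-learned state-value estimate, the two unsigned components of its TD error, and multiplicative gain modulation of a somatic rate by apical (top-down) input. All quantities are illustrative stand-ins for the published model.

import numpy as np

gamma, alpha = 0.95, 0.1
V = np.zeros(100)                 # state-value estimate over trial time steps
r = np.zeros(100); r[80] = 1.0    # reward delivered late in the trial

for _ in range(200):              # TD(0) learning of the value estimate
    for t in range(99):
        V[t] += alpha * (r[t] + gamma * V[t + 1] - V[t])

delta = r[:-1] + gamma * V[1:] - V[:-1]                       # TD error
pos, neg = np.clip(delta, 0, None), np.clip(-delta, 0, None)  # two unsigned components

bottom_up = np.random.rand(99)            # sensory drive to one neuron
apical = pos + neg                        # top-down salience onto the apical tuft
rate = bottom_up * (1.0 + 2.0 * apical)   # multiplicative gain modulation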
Results
We identify two apical dendrite response types: 1) responses to unexpected outcomes in naïve mice that decrease with growing task proficiency, 2) responses associated with salient sensory stimuli, especially the outcome-predicting texture touch, that strengthen upon learning (Fig 1c). These response types match two distinct unsigned components of the temporal difference error. Our computational model demonstrates how these apical responses can support learning by selectively amplifying the responses of neurons conveying task-relevant sensory signals. This model is contingent upon top-down signals encoding unsigned TD error components, bottom-up signals encoding sensory stimuli, and apical synapses following an associative plasticity rule.
Discussion
Our findings indicate that L5 tuft activities might transmit a salience signal responsible for selectively amplifying neuronal activity during relevant time windows. This picture is in line with theories claiming that the top-down feedback onto apical dendrites is involved in credit assignment. However, instead of transmitting neuron-specific signed errors, our work suggests that the brain could employ a two-step strategy to assign credit to individual neurons. By first solving the temporal credit assignment problem, a temporally precise top-down salience signal can be broadcast to sensory regions, which in a second step — involving local associative plasticity — can be leveraged to recognize and amplify task-relevant responses.




Figure 1. Fig 1. a, Left: Two unsigned TD error components. Middle: Model schematic. Right: State-value estimate and its temporal derivative (signed and unsigned). b, Optogenetic inhibition time during trials (top) and across training (middle). Bottom: Number of trials to reach expert performance in mice and model. c, Calcium imaging (left) and its model (right) across learning for sensory or outcome types.
Acknowledgements
This work was supported by the Swiss National Science Foundation, the European Research Council, the Horizon 2020 European Framework Programme, and the University Research Priority Program (URPP) ‘Adaptive Brain Circuits in Development and Learning’ (AdaBD) of the University of Zurich.
References
1. https://doi.org/10.1016/j.tins.2022.09.007
2. https://doi.org/10.1016/j.tics.2018.12.005
3. https://doi.org/10.1101/2021.12.28.474360
4. https://doi.org/10.1007/BF00115009
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P273: A three-state model for the temporal statistics of mouse behavior
Tuesday July 8, 2025 17:00 - 19:00 CEST
P273 A three-state model for the temporal statistics of mouse behavior

Friedrich Schuessler*1, Paul Mieske2, Anne Jaap3, Henning Sprekeler1

1 Department of Computer Science, Technical University Berlin, Germany
2German Center for the Protection of Laboratory Animals (Bf3R), German Federal Institute for Risk Assessment, Berlin, Germany
3Department of Veterinary Medicine, Free University of Berlin, Germany

*Email: f.schuessler@tu-berlin.de


Introduction
Neuroscience is undergoing a transition to ever larger and more complex recordings and an accompanying surge of computational models. A quantitative, computational description of behavior, in contrast, is still sorely lacking [1]. One important aspect of behavior is its temporal structure, which contains rhythmic components (circadian), exponential components with specific time scales (duration of feeding), and components with scale-free temporal dynamics (active motion). Understanding better how these aspects arise and interact, both in the individual and within a group of animals, is an important stepping stone towards computational models of behavior.
Methods
Here we analyze the temporal statistics of behavior of mice housed in different environments and group sizes. The main analyses are based on RFID detections of antennae placed throughout the housing modules. We make particular use of the statistics of inter-detection intervals (IDIs).
Results
We find that behavior spanning seconds to hours can be separated into three distinct temporal ranges: short (0-2 min), intermediate (2-20 min), and long (>20 min). IDIs in the intermediate and long ranges follow two distinct exponential distributions. Short IDIs are more consistent with a power law or a mix of multiple time scales. Blocks of successive short IDIs also follow an exponential distribution. We introduce a simple Markov model that reproduces the temporal statistics. Using additional video recordings, we link the temporal regimes to behavior: short IDIs to explorative or interactive behaviors, intermediate IDIs to feeding and grooming, and long IDIs to sleeping.
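A minimal sketch of a three-state Markov model of this kind: each state (active, feeding/grooming, sleeping) emits inter-detection intervals from its own exponential distribution and transitions stochastically to the others. The transition probabilities and mean IDIs are illustrative, not the fitted values.

import numpy as np

rng = np.random.default_rng(1)
# states: 0 = short/active, 1 = intermediate (feeding/grooming), 2 = long (sleeping)
P = np.array([[0.90, 0.08, 0.02],
              [0.50, 0.45, 0.05],
              [0.30, 0.10, 0.60]])   # row-stochastic transition matrix
mean_idi_min = [0.3, 8.0, 60.0]      # mean IDI per state (minutes)

state, idis = 0, []
for _ in range(10_000):
    idis.append(rng.exponential(mean_idi_min[state]))
    state = rng.choice(3, p=P[state])
idis = np.array(idis)
print("fraction of IDIs > 20 min:", (idis > 20).mean())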
Discussion
Our results show a surprisingly simple structure: behavior on a fast time scale is interrupted by internal demands on slower time scales. Bouts of fast activity are cut off by the need to feed, and longer sequences of activity and feeding are interrupted by the need to sleep. The short-time aspects of behavior match observations of scale-free statistics in previous studies [2,3], but also show interesting deviations, potentially due to interactions within the group. Taken together, our results open up the possibility of understanding behavior through the lens of simple models, and raise questions about the neural mechanisms underlying the observed structure.

Acknowledgements
We are grateful for funding by the German Research Foundation (DFG) through the Excellence Strategy program (EXC-2002/1 - Project number 390523135).
References
[1] Datta, S. R., Anderson, D. J., Branson, K., Perona, P., & Leifer, A. (2019). Computational neuroethology: a call to action. Neuron, 104(1), 11-24.
[2] Nakamura, T., Takumi, T., Takano, A., Aoyagi, N., Yoshiuchi, K., Struzik, Z. R., & Yamamoto, Y. (2008). Of mice and men—universality and breakdown of behavioral organization. PLoS ONE, 3(4), e2050.
[3] Bialek, W., & Shaevitz, J. W. (2024). Long timescales, individual differences, and scale invariance in animal behavior. Physical Review Letters, 132(4), 048401.


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P274: Dynamics of sensory stimulus representations in recurrent neural networks and in mice
Tuesday July 8, 2025 17:00 - 19:00 CEST
P274 Dynamics of sensory stimulus representations in recurrent neural networks and in mice

Lars Schutzeichel*1,2,3, Jan Bauer1,4,5, Peter Bouss1,2, Simon Musall3, David Dahmen1 and Moritz Helias1,2


1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Germany
2Department of Physics, Faculty 1, RWTH Aachen University, Germany
3Institute of Biological Information Processing (IBI-3), Jülich Research Centre, Germany
4Gatsby Unit for Computational Neuroscience, University College London, United Kingdom
5Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Israel


*Email: lars.schutzeichel@rwth-aachen.de
Introduction

The information about external stimuli is encoded in the responses of neuronal populations in the brain [1,2], forming neural representations of the stimuli. The diversity of responses is reflected in the extent of these neural representations in neural state space (Fig. 1a). In recent years, understanding the manifold structure underlying neuronal responses [3] has led to insights into representations in both artificial [4] and biological networks [5]. Here, we extend this theory by examining the role of recurrent network dynamics in deforming stimulus representations over time and their influence on stimulus separability (Fig. 1b). Furthermore, we assess the information conveyed for multiple stimuli (Fig. 1c).
Methods
We simulate recurrent networks of binary neurons and study their dynamics analytically using a two-replica mean-field theory, reducing the dynamics of complex networks to only three relevant dynamical quantities: the population rate and the representation overlaps within and between stimulus classes. These networks are fit to Neuropixels recordings from the superior colliculus of awake behaving mice. To assess the information conveyed by multiple stimuli, we analyze the mutual information between an optimally trained readout and the stimulus class. To calculate the overlap of representations within and across stimulus classes, we utilize spin glass methods [6].
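For orientation, the sketch below simulates a random recurrent network of binary neurons and measures the three quantities the mean-field theory tracks: the population rate and the overlaps of responses within and across stimulus classes. Network size, gain, and stimulus statistics are illustrative assumptions, not the fitted values.

import numpy as np

rng = np.random.default_rng(2)
N, g, steps = 500, 1.5, 50
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # random recurrent coupling

def run(stim):
    x = (stim > 0).astype(float)              # binary state initialized by the stimulus
    for _ in range(steps):
        x = (J @ x + stim > 0).astype(float)  # deterministic binary update
    return x

stim_A, stim_B = rng.normal(size=N), rng.normal(size=N)
# two noisy presentations per stimulus class
resp = [run(s + 0.3 * rng.normal(size=N)) for s in (stim_A, stim_A, stim_B, stim_B)]

E = np.mean([x.mean() for x in resp])      # population rate
theta_same = resp[0] @ resp[1] / N         # overlap within a class
theta_diff = resp[0] @ resp[2] / N         # overlap across classes
print(E, theta_same, theta_diff)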
Results
Stimulus separability and its temporal dynamics are shaped by the interplay of three dynamical quantities: the mean population activity E and the overlaps θ= and θ≠, which represent response variability within and across stimulus classes, respectively (Fig. 1b). For multiple stimuli, there is a trade-off: as the number of stimuli increases, more information is conveyed, but stimuli become less separable due to their growing overlap in the finite-dimensional neuronal space (Fig. 1c). We find that the experimentally observed small population activity R lies in a regime where information grows extensively with the number of stimuli, sharply separated from a second regime in which information converges to zero.
Discussion
Separability is a minimal requirement for meaningful information processing: The signal propagates to downstream areas, where, along the processing hierarchy, representations of different perceptual objects must become increasingly separable to enable high-level cognition. Our theory reveals that sparse coding not only provides a crucial advantage for information representation but is also a necessary condition for non-vanishing asymptotic information transfer. Our work thus provides a novel understanding of how collective network dynamics shape stimulus separability.



Figure 1. Overview. a: Stimulus representations characterized by their distance from the origin R and their extent θ=. b: Temporal evolution of representations of stimuli from two classes. A linear readout quantifies the separability between the classes of stimuli for every point in time. c: The separability measure also determines the information content in the population signal for P≥2 stimuli.
Acknowledgements
This work has been supported by DFG project 533396241/SPP2205
References
[1] https://doi.org/10.1126/science.3749885
[2] https://doi.org/10.1016/j.tics.2013.06.007
[3] https://doi.org/10.1103/PhysRevX.8.031003
[4] https://doi.org/10.1038/s41467-020-14578-5
[5] https://doi.org/10.1016/j.cell.2020.09.031
[6] https://doi.org/10.1088/1751-8121/aad52e
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P275: A Computational Framework for Investigating the Impact of Neurostimulation on Different Configurations of Neuronal Assemblies
Tuesday July 8, 2025 17:00 - 19:00 CEST
P275 A Computational Framework for Investigating the Impact of Neurostimulation on Different Configurations of Neuronal Assemblies


Spandan Sengupta*1, 2, Milad Lankarany1, 2, 3, 4, 5

1Krembil Brain Institute, University Health Network, Toronto, ON, Canada
2Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
3Department of Physiology, University of Toronto, Toronto, ON, Canada
4KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
5Center for Advancing Neurotechnological Innovation to Application (CRANIA), Toronto, ON, Canada

*Email: spandan.sengupta@mail.utoronto.ca
Introduction
Pathological oscillations in brain circuits are a known biomarker of neurological disorders, exemplified by increased power in the beta (12-30 Hz) frequency band in Parkinson’s disease [1]. Neurostimulation techniques like Deep Brain Stimulation (DBS) can disrupt pathological oscillations and improve symptoms [2]. However, how and why different stimulation patterns have different impacts on circuits is not fully understood. Recent studies show stimulation-induced biomarkers such as Evoked Resonant Neural Activity (ERNA) [3] associated with stimulation patterns (e.g., frequency) and circuit motifs (e.g., strength of excitatory-inhibitory connectivity) [4]. To study how stimulation patterns impact different circuit motifs, we developed a computational framework to model the effect of electrical stimulation on pre- and postsynaptic activity of neurons embedded in neuronal networks.
Methods
We aimed to study the effect of electrical stimulation, in particular of different frequencies during and after stimulation, on different circuit motifs. We employed spiking neural networks composed of leaky integrate-and-fire (LIF) neurons combined in a variety of excitatory-inhibitory configurations. To model DBS, we implemented perturbations analogous to electrical stimulation, as sketched below. To further explore how electrical stimulation affects pathological oscillations, we used Brunel’s network [5] tuned to show oscillatory activity at specific frequencies. We aim to study how different patterns of stimulation can suppress these oscillations.
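A minimal sketch of the perturbation scheme, assuming illustrative parameters and a far simpler network than the study's motifs: a population of LIF neurons receives noisy background drive plus a periodic pulse train at the stimulation frequency.

import numpy as np

rng = np.random.default_rng(3)
dt, T, N = 0.1, 1000.0, 100          # step (ms), duration (ms), neurons
tau, v_th, v_reset = 20.0, 1.0, 0.0
f_stim, amp = 100.0, 0.8             # stimulation frequency (Hz), pulse amplitude
period = 1000.0 / f_stim             # ms between pulses

v = np.zeros(N)
n_spikes = 0
for i in range(int(T / dt)):
    t = i * dt
    I_bg = 0.9 + 0.3 * rng.normal(size=N)       # noisy background drive
    I_dbs = amp if (t % period) < 1.0 else 0.0  # 1 ms pulse each stimulation cycle
    v += dt / tau * (-v + I_bg + I_dbs)
    fired = v >= v_th
    n_spikes += fired.sum()
    v[fired] = v_reset
print("mean rate (Hz):", n_spikes / N / (T / 1000.0))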

Results
Aligned with experimental findings, our simulations demonstrated that continuous high-frequency electrical stimulation induced more suppression of neuronal activity than low-frequency stimulation [6]. In some circuit motifs, we also observed sustained low-frequency oscillatory activity after the high-frequency stimulation had ended (Fig 1B). We aim to characterise the impact of different frequencies on Brunel’s network and their ability to suppress pathological oscillations. We expect to find stimulation patterns that can disrupt these oscillations and qualitatively shift the circuit activity toward a healthier physiological state. We expect these frequencies to depend on the excitatory-inhibitory characteristics of the network.


Discussion
Our study utilizes a simplified model of LIF neurons configured in different motifs, offering a foundational understanding of oscillation modulation with different patterns of electrical stimulation. Future research can expand this model to incorporate more biophysically realistic circuits, such as those found in the hippocampus, critical for memory processing[7], or the basal ganglia, implicated in movement disorders[8]. Investigating these complex circuits will further bridge the gap between computational models and the intricate dynamics of brain networks in health and disease, potentially leading to refined therapeutic strategies.




Figure 1. A: Schematic of a circuit comprised of populations A (exc) and B (inh) that project to O, along with recurrent connections. Electrical stimulation is applied to neurons in A. B: Population firing rate during and after 100 Hz electrical stimulation. Dashed red lines indicate the start and end of stimulation. C: Schematic for DBS implementation
Acknowledgements
I would like to thank Dr Frances Skinner (University of Toronto) for her supervision and her help in conceptualising this research idea. I would also like to thank Dr Shervin Safavi (Max Planck Institute for Biological Cybernetics) and Dr Thomas Knoesche (Max Planck Institute for Human Cognitive and Brain Sciences) for their help with the modelling and theoretical aspects of this work.

References

1. https://doi.org/10.1152/jn.00697.2006
2. https://doi.org/10.1002/mds.22419
3. https://doi.org/10.1002/ana.25234
4. https://doi.org/10.1016/j.nbd.2023.106019
5. https://doi.org/10.1023/A:1008925309027
6. https://doi.org/10.1016/j.brs.2021.04.022
7. https://doi.org/10.1038/nature15694
8. https://doi.org/10.1016/j.baga.2011.05.001




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P276: Simulation tools for reaction-diffusion modeling
Tuesday July 8, 2025 17:00 - 19:00 CEST
P276 Simulation tools for reaction-diffusion modeling

Saana Seppälä*1, Laura Keto1, Derek Ndubuaku1, Annika Mäki1, Tuomo Mäki-Marttunen1, Marja-Leena Linne1, Tiina Manninen1

1Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland

*Email: saana.seppala@tuni.fi


Introduction
With advancements in computing power and cellular biology, the simulation of reaction-diffusion systems has gained increasing attention, leading to the development of various simulation algorithms. While we have experience testing separate tools for cell signaling and neuronal networks [1–4], we have not extensively evaluated cell-level tools that integrate both reaction and diffusion algorithms or support co-simulation. This study aims to provide a comprehensive assessment of reaction-diffusion and co-simulation tools, including NEURON [5], ASTRO [6], and NeuroRD [7], to determine their suitability for our research needs.


Methods
Most available reaction-diffusion algorithms and tools are based on partial differential equations or the reaction-diffusion master equation simulated using an extended Gillespie stochastic simulation algorithm [8]. In this study, we implement identical diffusion, reaction, and reaction-diffusion models across selected tools, testing both simple and complex cell morphologies. We conduct simulations, compare results across different tools, and evaluate their consistency. Additionally, we assess the usability and suitability of each tool in various simulation settings, including ease of implementing cell morphologies and equations, computational efficiency, and support for co-simulation.
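For reference, the sketch below shows the core of the Gillespie stochastic simulation algorithm on which many of the compared tools build, here for a single bimolecular reaction A + B -> C in one well-mixed voxel (the reaction-diffusion master equation extends this with diffusion jumps between voxels). Rates and molecule counts are arbitrary illustrations.

import numpy as np

rng = np.random.default_rng(4)
k = 0.001                 # stochastic rate constant for A + B -> C
A, B, C, t = 500, 300, 0, 0.0
while A > 0 and B > 0:
    propensity = k * A * B
    t += rng.exponential(1.0 / propensity)  # waiting time to the next reaction
    A, B, C = A - 1, B - 1, C + 1           # fire the reaction
print(f"B exhausted at t = {t:.2f} s, C = {C}")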


Results
The simulation algorithms and tools vary significantly in usability and functionality. For instance, some tools support realistic cell morphologies, while others are limited to simplified geometries such as cylinders. Additionally, not all tools allow implementation of reactions involving three reactants, restricting their applicability for certain biological simulations. Despite these differences, a comparison of simulation results across the tools reveals a high degree of similarity, indicating that the underlying models produce consistent outcomes. Furthermore, variations in computational efficiency and ease of implementation are observed, highlighting trade-offs between flexibility, accuracy, and usability across the tools.


Discussion
A thorough understanding of the properties and capabilities of different reaction-diffusion simulation tools is essential for developing more advanced and biologically accurate models. Evaluating these tools provides valuable insights into their strengths and limitations, facilitating the integration of multiple simulation approaches. In particular, this knowledge enables the development of co-simulations that combine reaction-diffusion models with spiking network simulations, enhancing the accuracy and scope of computational neuroscience research.




Acknowledgements
This work was supported by the Research Council of Finland (decision numbers 330776, 355256 and 358049), the European Union's Horizon Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project), and the Doctoral School at Tampere University.


References
1. https://doi.org/10.1093/bioinformatics/bti018
2. https://doi.org/10.1155/2011/797250
3. https://doi.org/10.3389/fninf.2018.00020
4. https://doi.org/10.1007/978-3-030-89439-9_4
5. https://doi.org/10.1017/CBO9780511541612
6. https://doi.org/10.1038/s41467-018-05896-w
7. https://doi.org/10.1371/journal.pone.0011725
8. https://doi.org/10.1021/j100540a008
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P277: Relation of metaplasticity with Hebbian, structural and homeostatic plasticities in recurrent neural networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P277 Relation of metaplasticity with Hebbian, structural and homeostatic plasticities in recurrent neural networks

Muhammad Abdul-Amir Shamkhi Al-Shalah1,3, Neda Khalili Sabet2,3, Delaram Eslimi Esfahani3*
1 Department of Secondary Education, Ministry of Education, Babylon Governorate, Iraq
2 Institute of Biology, University of Freiburg, Freiburg im Breisgau, Germany
3 Department of Animal Biology, Faculty of Biological Sciences, Kharazmi University, Tehran, Iran

*Email: eslimi@khu.ac.ir
Introduction

Brain plasticity, which rewires brain tissue and coordinates its action, takes different forms, including Hebbian, structural, homeostatic, and metaplasticity. Each type of plasticity affects rewiring and the flow of influence in the brain at the neural and circuit levels. Furthermore, these different plasticities interact with one another, and previous studies have not fully characterized the relations between all of them.

The objective of this study is to examine and analyze the relations between these plasticities, focusing on the interaction of metaplasticity with Hebbian, structural, and homeostatic plasticity.
Methods
This study uses computer simulations of neural networks to explore the relations between structural plasticity, Hebbian plasticity, homeostatic plasticity, and metaplasticity.
In our network model, neurons are nodes, synapses are edges, and the different types of plasticity are network features. We chose Python as the programming language to implement our model, and we used the NEST library, one of the most specialised and advanced tools for computational neuroscience research.
Our model contains 500 neurons, each forming its own layer, which prevents any single neuron from dominating while connections are built or deleted. We used the LIF (leaky integrate-and-fire) neuron model, or more specifically gif_cond_exp (a generalized integrate-and-fire neuron with multiple synaptic time constants).
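A minimal sketch of the kind of NEST setup described, assuming illustrative connectivity and parameters (not the study's): a gif_cond_exp population with plastic (STDP) recurrent synapses driven by Poisson input.

import nest

nest.ResetKernel()
pop = nest.Create("gif_cond_exp", 500)                 # generalized integrate-and-fire neurons
nest.Connect(pop, pop,
             {"rule": "fixed_indegree", "indegree": 50},
             {"synapse_model": "stdp_synapse", "weight": 1.0})  # Hebbian (STDP) recurrent edges
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
nest.Connect(noise, pop, syn_spec={"weight": 2.0})     # external stimulation
nest.Simulate(1000.0)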
Results
When we examined the different types of plasticity and the interactions between them, metaplasticity caused the growth of synaptic surpluses, depending on the amount of stimulation received from inside and outside the network. Structural plasticity, in turn, used these surpluses to rewire the network and change its connections. Hebbian plasticity, on the other hand, increased or decreased connection strengths as stimulation was applied and withdrawn.
Discussion
Finally, homeostatic plasticity controlled the network in all phases, returning it to its original firing frequency when the stimulation ended.





Acknowledgements
We must express our appreciation to the Vice Chancellor for Research at Kharazmi University for supporting our research.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P278: Speeding-up Distinct Bursting Regimes is Mediated by Separate Coregulation Pathways
Tuesday July 8, 2025 17:00 - 19:00 CEST
P278 Speeding-up Distinct Bursting Regimes is Mediated by Separate Coregulation Pathways

Yousif O. Shams1, Ronald L. Calabrese2, Gennady S. Cymbalyuk1,*


1Neuroscience Institute, Georgia State University, Atlanta, Georgia, 30302, USA.
2Department of Biology, Emory University, Atlanta, Georgia, 30322, USA


*E-mail: gcymbalyuk@gmail.com

Introduction
Central pattern generators (CPGs) control rhythmic behaviors and adapt to behavioral demands via neuromodulation [1]. The leech heartbeat CPG includes mutually inhibitory heart interneurons (HNs) forming half-center oscillators (HCOs) [1]. Myomodulin speeds up HCO bursting by increasing the h-current (Ih) and decreasing the Na+/K+ pump current (IPump) [2]. These changes create a coregulation path between dysfunctional regimes [3]. Along this path, a new functional regime, high-spike frequency bursting (HFB), emerges alongside low-spike frequency bursting (LFB) [4]. Separately, based on the interaction of IPump and the persistent Na+ current, creating relaxation-oscillator dynamics, dynamic clamp experiments also show a transition into high-frequency bursting (HFBROd) [5].


Methods

We use experimentally validated Hodgkin-Huxley-style models with Na+ dynamics incorporated, which have proven effective in predicting HCO behaviors under various experimental and neuromodulatory conditions [3-6]. We conduct a two-parameter sweep over the maximal IPump (IPumpMax) and the conductance of Ih (gh) to map the activity regimes. We investigate how neuromodulation affects the HCO cycle period in the LFB and HFB regimes, and map experimental data onto the map of regimes.


Results
Under variation of IPumpMax and gh, the HCO and single HNs show a phase transition between HFB and LFB. In LFB, decreasing IPumpMax speeds up bursting, consistent with myomodulin neuromodulation [2,3]. In HFBROd, increasing IPumpMax also speeds up bursting by shortening burst duration and interburst interval, in accordance with relaxation-oscillator dynamics [5]. Mapping experimental cycle periods suggests that myomodulin operates along a coregulation path within the LFB regime. This mapping also reveals a quasi-orthogonal path along which increasing IPumpMax speeds up bursting within the HFB regime. The transition between the bursting regimes elucidates monensin effects. Monensin, a Na+/H+ antiporter, speeds up bursting by raising the intracellular Na+ concentration ([Na+]i), thereby increasing IPump [6].


Conclusions
Modeling suggests the emergence of the HFB regime alongside LFB, each with distinct responses to neuromodulation. This captures a paradox: HCO bursting can be sped up by either increasing or decreasing IPump. The LFB and HFB regimes operate with distinct mechanisms for controlling the bursting cycle period. This distinction arises from intracellular Na+ dynamics. LFB is responsive to coregulation of Ih and IPump. In contrast, HFB operates with relaxation-oscillator dynamics based on [Na+]i. Our results emphasize that transitioning between LFB and HFB enhances the CPG’s robustness and flexibility, allowing for adaptive control of bursting.





Acknowledgements
We acknowledge Georgia State University’s Brains and Behavior program grant to GSC.
References
1. https://doi.org/10.1152/physrev.00003.2024
2. https://doi.org/10.1152/jn.00340.2005
3. https://doi.org/10.1523/JNEUROSCI.0158-21.2021
4. https://doi.org/10.3389/fncel.2024.1395026
5. https://doi.org/10.1523/ENEURO.0331-22.2023
6. https://doi.org/10.7554/eLife.19322
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P279: Bridging In Vitro and In Vivo Neural Data for Ethical and Efficient Neuroscience Research
Tuesday July 8, 2025 17:00 - 19:00 CEST
P279 Bridging In Vitro and In Vivo Neural Data for Ethical and Efficient Neuroscience Research

Masanori Shimono1
[1] Graduate School of Information Science and Technology, Osaka University, Osaka, Japan
Email: m-shimono@ist.osaka-u.ac.jp
Introduction
Neural activity transmits information via binary spike signals, enabling complex brain computations. While this principle is well established, accurately predicting large-scale neural activity patterns remains challenging. Integrating findings from in vitro and in vivo experiments remains unresolved, yet is crucial for advancing neuroscience and establishing ethical, efficient research methodologies.
Methods
We propose a machine learning-based mutual generation framework to enhance neural activity prediction across experimental paradigms by refining previous methodologies [1]. Specifically, we trained a model using in vitro neural data to predict in vivo activity and vice versa (Fig. 1). The model, built with multi-region neural recordings, employs deep learning architectures optimized for spatiotemporal pattern recognition (Fig. 1c). The method details are related to a patent and will be explained at the venue.
Results
Our results demonstrate accurate prediction of in vivo neural activity from in vitro data and vice versa (Fig. 1e). We also found that data from specific brain regions reliably predict neural activity across multiple areas, suggesting universal principles in brain information processing. These findings have implications for neural modeling, experimental design, and translational neuroscience. Furthermore, high-precision in vivo prediction from in vitro data could reduce animal experimentation, supporting the 3R principles (Replacement, Reduction, Refinement).
Discussion
This study sets a new standard for ethical, reproducible neuroscience research, bridging fundamental neuroscience and clinical applications.
Figure 1. Fig. 1) This figure illustrates the time duration of extracted data and data partitioning for training and testing. (a,b) In vitro (top) and in vivo (bottom) setups. (c,d) 5-minute training and 2.5-minute test segments are used for prediction. Four conditions are tested: in vitro→in vitro, in vivo→in vivo, in vitro→in vivo, and in vivo→in vitro. (e) ROC AUC scores evaluate prediction performance.
Acknowledgements

MS is supported by several MEXT fundings (21H01352, 23K18493).

References

[1] Nakajima, R., Shirakami, A., Tsumura, H., Matsuda, K., Nakamura, E., & Shimono, M. (2023). Mutual generation in neuronal activity across the brain via deep neural approach, and its network interpretation. Communications Biology, 6(1), 1105.
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P280: Cerebellar neural manifold encoding complex eye movements in 2D
Tuesday July 8, 2025 17:00 - 19:00 CEST
P280 Cerebellar neural manifold encoding complex eye movements in 2D

Juliana Silva de Deus1, Akshay Markanday2, Erik De Schutter1, Peter Thier2, Sungho Hong*1,3



1Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
2Department of Cognitive Neurology, Hertie Institute, University of Tübingen, Tübingen, Germany
3Center for Cognition and Sociality, Institute for Basic Science, Daejeon, South Korea

*Email: sunghohong@ibs.re.kr


Introduction
Kinematic parameters of our movements, such as velocity and duration, can undergo random and systematic changes, but movement endpoints can be precisely maintained. The cerebellum is well known for its role in this function [1], but how its neurons concurrently encode the several kinematic parameters necessary for movement precision has been unknown. We recently identified low-dimensional patterns, called the neural manifold, in the activity of cerebellar neurons and showed that those multi-dimensional patterns encoded the peak velocity and duration of 1D eye movements, contributing to flexible control of those parameters [2]. In this study, we investigated how those findings extend to 2D eye movements made in different directions.



Methods
We analyzed the activity of 54 cerebellar Purkinje cells (PCs) from the oculomotor vermis in three adult male rhesus monkeys performing two different saccadic eye movement tasks. In the first, the animals made 15° saccades from a fixation point to a visual target randomly presented at one of ten angles (0°-315°, 45° intervals). In the second, they performed a cross-axis adaptation task [3] in which initial horizontal jumps of a target from a fixation point were followed by 5° vertical leaps before the primary saccades finished. We analyzed the PC simple spike (SS) activity by identifying its low-dimensional manifold and examining how the manifold varies with the saccade angle and complex spike (CS) firing.
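For orientation, a minimal sketch of the manifold identification step using PCA, with random stand-in data in place of the trial-averaged simple-spike rates; shapes and values are illustrative only.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_cells, n_time, n_dirs = 54, 200, 10
# stand-in for trial-averaged SS rates: (directions x time) samples, cells as features
X = rng.poisson(5.0, size=(n_dirs * n_time, n_cells)).astype(float)

pca = PCA(n_components=4)            # d = 4 captured >88% of variance in the data
latents = pca.fit_transform(X)       # low-dimensional manifold coordinates
print("variance explained:", pca.explained_variance_ratio_.sum())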


Results

In many PCs (n=39), CSs fired between 100 ms and 200 ms after target onset with a well-defined preference for certain target directions (θCS-ON), confirming the directional nature of CS firing for retinal slips [4-6]. We also identified the PC-SS manifold (d=4, explaining >88% of variance) for saccadic eye movements, with a remarkably simple structure comprising direction-independent latent dynamics and a direction-dependent, multi-dimensional gain field, generalizing previous studies [2,7]. How CS and SS firing depends on movement direction (θ) in individual PCs was too heterogeneous to show a clear correlation (P=0.22). However, we found that the gain field looked highly organized as a function of θ-θCS-ON but much less so as a function of θ.

Discussion
Together with our previous study [2], these results show that PC population firing has a remarkably simple structure for representing several kinematic parameters of eye movements, such as velocity, duration, and direction, simultaneously and independently via a low-dimensional neural manifold. Our findings suggest that the cerebellar neural circuit generates neural dynamics optimal for flexible and precise control of complex movements with many degrees of freedom.





Acknowledgements
A.M. and P.T. were supported by DFG Research Unit 1847 “The Physiology of distributed computing underlying higher brain functions in non-human primates.” J.S.D., S.H., and E.D.S. were supported by the Okinawa Institute of Science and Technology Graduate University. S.H. was also supported by the Center for Cognition and Sociality (IBS‐R001‐D2), Institute for Basic Science, South Korea.
References
1. https://doi.org/10.1146/annurev-vision-091718-015000
2. https://doi.org/10.1038/s41467-023-37981-0
3. https://doi.org/10.1007/BF00228022
4. https://doi.org/10.1038/33141
5. https://doi.org/10.1523/JNEUROSCI.4658-05.2006
6. https://doi.org/10.1152/jn.90526.2008
7. https://doi.org/10.1038/nature15693




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P281: Data-driven biophysically detailed computational modeling of neuronal circuits with the NeuroML standard and software ecosystem
Tuesday July 8, 2025 17:00 - 19:00 CEST
P281 Data-driven biophysically detailed computational modeling of neuronal circuits with the NeuroML standard and software ecosystem

Ankur Sinha*1, Padraig Gleeson1, Adam Ponzi1, Subhasis Ray2, Sotirios Panagiotou3, Boris Marin4, Robin Angus Silver1

1Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
2TCG CREST, Kolkata, India
3Erasmus University Rotterdam, Rotterdam, Netherlands
4Universidade Federal do ABC, São Bernardo do Campo, Brazil

*Email: ankur.sinha@ucl.ac.uk

Introduction

Computational models are essential for integrating multiscale experimental data into unified theories and generating new testable hypotheses. Realistic models that include biological intricacies of neurons (morphologies, ionic conductances, subcellular processes) are critical tools for gaining a mechanistic understanding of neuronal processes. Their complexity and the disjointed landscape of software for computational neuroscience, however, makes model construction, fitting to experimental data, simulation, and re-use and dissemination a considerable challenge. Here, we present NeuroML and show that it accelerates modelling workflows and promotes FAIR (Findable, Accessible, Interoperable, Reusable) and Open computational neuroscience[1].

Methods
NeuroML provides two components: a standard and a software ecosystem. The standard is specified by a two-part schema. The first part constrains the structure of NeuroML models and is used to validate model descriptions and generate libraries for programming languages. The second part consists of corresponding definitions of the dynamics of model entities in the Low Entropy Model Specification (LEMS) language [2], which allows translation of NeuroML models into simulator-specific formats. The software ecosystem includes libraries and tools for building and working with NeuroML models, in addition to a number of simulation engines and other NeuroML-compliant tools that support different stages of the model life cycle.
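A minimal sketch of this workflow using two libraries from the ecosystem, libNeuroML and pyNeuroML; the component and its parameter values are arbitrary examples, and the exact calls should be checked against the current library documentation.

from neuroml import NeuroMLDocument, IzhikevichCell
from neuroml.writers import NeuroMLWriter
from pyneuroml import pynml

doc = NeuroMLDocument(id="ExampleDoc")
cell = IzhikevichCell(id="izh", v0="-70mV", thresh="30mV",
                      a="0.02", b="0.2", c="-65.0", d="6.0")
doc.izhikevich_cells.append(cell)

NeuroMLWriter.write(doc, "example.cell.nml")  # serialize to NeuroML XML
pynml.validate_neuroml2("example.cell.nml")   # validate against the schema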
Results
NeuroML is an established standardised language that provides a simulator-independent model representation and an accompanying ecosystem of compliant tools supporting all stages of the model life cycle: creating, validating, visualising, analysing, simulating, optimising, sharing, and re-using models. It provides a curated set of model building blocks for constructing new models and thus also serves as a didactic resource. We demonstrate how NeuroML supports the model life cycle by presenting a number of published NeuroML models of different species (C. elegans, rodents, humans) and different brain regions (cortex, cerebellum), highlighting their scientific contributions. We also list resources on using NeuroML and existing models.
Discussion
NeuroML is a mature standard that has evolved over years of interactions with the computational neuroscience community. The NeuroML community has strong links with simulator development communities to ensure that NeuroML remains up to date with the latest modelling requirements, and that tools remain NeuroML compliant. NeuroML also ensures that it remains extensible to cater to modelling entities that are not yet part of the standard. NeuroML also links to other neuroscience initiatives (PyNN, SONATA[3]), systems biology standards (SBML, SED-ML) and machine learning/AI formats (Model Description Format[4]) to promote interoperability. Finally, a large archive of published standardised models supports re-use of existing models.




Acknowledgements
We thank all members of the NeuroML community who have contributed to the development of the standard and the software ecosystem over the years.
References
1. https://doi.org/10.7554/eLife.95135
2. https://doi.org/10.3389/fninf.2014.00079
3. https://doi.org/10.1371/journal.pcbi.1007696
4. https://doi.org/10.1007/s10827-024-00871-5



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P282: Population-level mechanisms of model arbitration in the prefrontal cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P282 Population-level mechanisms of model arbitration in the prefrontal cortex

Jae Hyung Woo1*, Michael C Wang1*, Ramon Bartolo2, Bruno B. Averbeck3, Alireza Soltani1+

1Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
2The Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
3Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD 20892, USA
+Email: alireza.soltani@gmail.com
Introduction

One of the biggest challenges of learning in naturalistic settings is that every choice option involves multiple attributes or features, each of which could potentially predict the outcome. To make decisions accurately and efficiently, the brain must attribute the outcome to the relevant features of a choice or action, while disregarding the irrelevant ones. To manage this uncertainty, it is proposed that the brain maintains several internal models of the environment––each predicting outcomes based on different attributes of choice options––and utilizes the reliability of these models to select the appropriate one to guide decision making [1-3].


Methods
To uncover computational and neural mechanisms underlying model arbitration, we reanalyzed data from high-density recordings of the lateral prefrontal cortex (PFC) activity in monkeys performing a probabilistic reversal learning task with uncertainty about the correct model of the environment. We constructed multiple computational models based on reinforcement learning (RL) to fit choice behavior on a trial-by-trial basis, which allowed us to infer animals’ learning and arbitration strategies. We then used estimates based on the best-fitting model to identify single-cell and population-level neural signals related to learning and arbitration in the lateral PFC.
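To make the arbitration idea concrete, here is a hypothetical sketch (ours, not the authors' fitted model) of two RL learners whose value estimates are mixed by an arbitration weight driven by each system's recent reliability; all names and constants are illustrative.

```python
import numpy as np

# Hypothetical reliability-based arbitration between a stimulus-based and an
# action-based RL learner. The arbitration weight omega favors whichever
# system has had smaller recent prediction errors.
rng = np.random.default_rng(0)
alpha, beta, eta = 0.3, 5.0, 0.1
q_stim, q_act = np.zeros(2), np.zeros(2)
rel_stim = rel_act = 0.5
omega = 0.5

def softmax(q, beta):
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

for trial in range(1000):
    q_mix = omega * q_stim + (1.0 - omega) * q_act     # arbitrated values
    choice = rng.choice(2, p=softmax(q_mix, beta))
    reward = float(rng.random() < (0.8 if choice == 0 else 0.2))
    pe_s, pe_a = reward - q_stim[choice], reward - q_act[choice]
    q_stim[choice] += alpha * pe_s
    q_act[choice] += alpha * pe_a
    rel_stim += eta * ((1.0 - abs(pe_s)) - rel_stim)   # running reliabilities
    rel_act += eta * ((1.0 - abs(pe_a)) - rel_act)
    omega = rel_stim / (rel_stim + rel_act)            # model arbitration
```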

Results
We found evidence of dynamic, competitive interactions between stimulus-based and action-based learning, alongside single-cell and population-level representations of the arbitration weight. Arbitration enhanced task-relevant variables, suppressed irrelevant ones, and modulated the geometry of PFC representations by aligning differential value axes with the choice axis when relevant and making them orthogonal when irrelevant. Reward feedback emerged as a potential mechanism for these changes, as reward enhanced the representation of relevant differential values and choice while adjusting the alignment between differential value and choice subspaces according to the adopted learning strategy.

Discussion
Overall, our results shed light on two major mechanisms for the dynamic interaction between model arbitration and value representation in the lateral PFC. Moreover, they provide evidence for a set of unified computational and neural mechanisms for behavioral flexibility in naturalistic environments, where there is no cue that explicitly signals the correct model of the environment.




Acknowledgements
None
References

1. https://doi.org/10.1038/s41386-021-01123-1
2. https://doi.org/10.1016/j.neubiorev.2020.10.022
3. https://doi.org/10.1038/s41386-021-01108-0
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P283: A biophysical model of CA1 pyramidal cell heterogeneity in memory stability and flexible decision-making
Tuesday July 8, 2025 17:00 - 19:00 CEST
P283 A biophysical model of CA1 pyramidal cell heterogeneity in memory stability and flexible decision-making

Fei Song*1,2, Bailu Si3,4


1State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, Liaoning, China
2University of Chinese Academy of Sciences, Beijing, Beijing, China
3School of Systems Sciences, Beijing Normal University, Beijing, Beijing, China
4Chinese Institute for Brain Research, Beijing, Beijing, China


*Email: songfeo20160903@gmail.com
Introduction

The entorhinal-hippocampal system is essential for spatial memory and navigation. CA1 integrates spatial and non-spatial inputs via two pathways: the perforant path (PP) and the temporoammonic path (TA), processed by pyramidal cells (PCs) [1]. We propose a biophysical model with simple PCs (sPCs) and complex PCs (cPCs) [2]. Simulations in novel environments (Fig. 1a) show that sPCs maintain stable spatial coding, while cPCs integrate spatial and attentional inputs, supporting decision-making. In familiar settings (Fig. 1b), cPCs adapt to changes while sPCs preserve stable encoding, enabling memory retention and comparison of past and new experiences. This model unifies CA1’s roles in memory and decision-making.

Methods
We model CA1 as a two-layer network: deep-layer sPCs receive MEC input, while superficial-layer cPCs integrate MEC and LEC signals. Synaptic plasticity follows Hebbian learning, with SC weights adapting via dendritic-somatic co-activation and TA weights via rate-dependent learning, constrained by proximal-distal gradients. Simulations include a 10 m track and a 5 m open field, where MEC provides grid-cell input, LEC encodes egocentric cues, and CA3 supplies place-cell activity [3,4]. Memory recovery is evaluated via place field stability (Jensen-Shannon (JS) distance), while stimulus-specific information quantifies spatial and attentional encoding variability [5]. A population decoder (MLP) predicts location and attention from CA1 activity.
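As an illustration of the stability metric, a place-field comparison via the Jensen-Shannon distance can be computed with SciPy; the rate maps below are synthetic stand-ins for model CA1 activity.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Place-field stability as the Jensen-Shannon distance between normalized
# rate maps from two sessions (synthetic stand-ins for model CA1 activity).
rng = np.random.default_rng(0)
rate_map_a = rng.random(100)                       # linearized-track rate map
rate_map_b = rate_map_a + 0.1 * rng.random(100)    # slightly remapped field

p = rate_map_a / rate_map_a.sum()                  # normalize to distributions
q = rate_map_b / rate_map_b.sum()
stability = jensenshannon(p, q, base=2)            # 0 = identical, 1 = disjoint
print(f"JS distance: {stability:.3f}")
```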
Results
CA1 supports flexible decision-making by integrating spatial and perceptual information. In novel environments, sPCs ensure spatial stability, while cPCs encode stimulus-specific cues. A proximal-distal gradient in cPCs appears with fixed cues but disappears with moving cues, confirming their adaptive role. Population decoding shows cPCs excel in attention tracking, while sPCs maintain spatial coding. CA1 also aids memory updating. When CA3 recall is incomplete, CA1 preserves past memories longer than expected, slowing decay. When TA introduces novelty, cPCs encode new inputs while sPCs retain old ones, enabling stable yet adaptive memory processing. This mirrors real-world experiences, such as recognizing familiar but altered locations.
Discussion
Our model captures CA1 neuron heterogeneity and projection preferences in decision-making and memory updating. However, it simplifies CA3’s proximodistal heterogeneity, where pattern separation (proximal) and completion (distal) may influence CA1 dynamics [6]. Future work should refine CA3 input representation. CA1’s dual-pathway structure aligns with cognitive map theory, where novel environments require integration, while familiar ones involve consolidation. This parallels the Tolman-Eichenbaum Machine (TEM) model of hippocampal function [7]. The dual-pathway structure may reflect a generalized neuronal computation mechanism, extending beyond navigation and memory to broader cognitive functions.




Figure 1. Fig. 1 Functional Framework of the Hippocampus. (a) CA1 supports flexible decision-making in novel environments by integrating sensory inputs and generating context-specific representations. (b) CA1 facilitates memory updating in familiar environments by comparing stored memories with current experiences.
Acknowledgements
Not applicable.
References
1. https://doi.org/10.1038/nn.2894
2. https://doi.org/10.1038/nn.4517
3. https://doi.org/10.1016/j.neucom.2020.10.013
4. https://doi.org/10.1007/BF00237147
5. https://api.semanticscholar.org/CorpusID:10081513
6. https://doi.org/10.1371/journal.pbio.2006100
7. https://doi.org/10.1016/j.cell.2020.10.024



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P284: Subthalamic LFP Spectral Decay Captures Movement-Related Differences Between Parkinson’s Disease Phenotypes
Tuesday July 8, 2025 17:00 - 19:00 CEST
P284 Subthalamic LFP Spectral Decay Captures Movement-Related Differences Between Parkinson’s Disease Phenotypes

Luiz Ricardo Trajano da Silva1, Maria Sheila Guimarães Rocha2, Slawomir Nasuto3, Bradley Voytek4, Fabio Godinho5, Diogo Coutinho Soriano*1

1Center of Engineering, Modeling and Applied Social Sciences, Federal University of ABC (UFABC), São Bernardo do Campo, Brazil
2Department of Neurology, Santa Marcelina Hospital, São Paulo, Brazil
3University of Reading, Berkshire, United Kingdom
4Department of Cognitive Science, Halıcıoğlu Data Science Institute, University of California, San Diego, La Jolla, CA, USA
5Division of Neurosurgery, Department of Neurology, Hospital das Clínicas, University of São Paulo Medical School, São Paulo, Brazil

*Email: diogo.soriano@ufabc.edu.br
Introduction

Parkinson’s Disease (PD) is a heterogeneous neurodegenerative disorder characterized by a wide range of motor and non-motor symptoms [1]. Movement disorder specialists classify PD into subtypes, including tremor dominant (TD) and postural instability and gait disorder (PIGD) [2, 3, 4]. A promising, robust biomarker for deep brain stimulation (DBS) therapy is the 1/f^χ spectral decay observed in local field potentials (LFPs). This decay has been linked to the excitatory/inhibitory synaptic balance, providing valuable insights into neuronal circuit dynamics [5, 6, 7, 8]. Therefore, this study explores changes in the spectral decay across rest and movement conditions in different PD phenotypes, aiming to advance personalized DBS strategies.
Methods
STN-LFP recordings from 35 hemispheres (15 TD, 20 PIGD) during rest and movement (elbow extension and flexion) conditions (1 minute each) were acquired during the intraoperative procedure for implanting DBS electrodes. The Welch periodogram and spectral parametrization, as proposed in [5], were used to estimate the adjusted low-beta (13–22 Hz) and high-beta (22–35 Hz) band powers (i.e., corrected for the 1/f^χ background) and the spectral decay parameter χ. Mixed ANOVA was used to evaluate differences between subtypes and rest/movement conditions. The procedure was approved by the ethical committee for research in human beings (CAAE: 62418316.9.2004.0066).
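For concreteness, a sketch of this pipeline on surrogate data is shown below, using scipy.signal.welch and the FOOOF package of ref. [5]; the sampling rate, segment length, and fitting range are assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import welch
from fooof import FOOOF   # spectral parameterization of ref. [5]

# Sketch on a surrogate LFP (the real data are intraoperative STN
# recordings); all settings here are illustrative assumptions.
fs = 1000.0
lfp = np.random.randn(60 * int(fs))          # 1 minute of surrogate signal

freqs, psd = welch(lfp, fs=fs, nperseg=int(2 * fs))

fm = FOOOF(aperiodic_mode='fixed')
fm.fit(freqs, psd, freq_range=[3, 45])
chi = fm.get_params('aperiodic_params', 'exponent')   # spectral decay chi
print(f"aperiodic exponent chi = {chi:.2f}")
```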
Results
Fig. 1 shows the parametrized spectral decay for the TD (A) and PIGD (B) phenotypes, the respective PSDs adjusted by 1/f^χ (panels C and D), and box plots for the band powers and spectral decay (E, F, G). Low-beta power showed an interaction between phenotype and motor condition (F(1,33) = 6.67, p = 0.014), with a significant decrease during movement (p = 0.003) for the TD group. High-beta band power showed a marginal effect of phenotype during rest (F(1,33) = 3.39, p = 0.07). The spectral decay exponent also showed an interaction between phenotype and motor condition (F(1,33) = 5.67, p = 0.02), with post-hoc analysis unveiling a marginal phenotype difference during movement (p = 0.088).
Discussion
Spectral parameterization revealed significant differences between the TD and PIGD subtypes, highlighting distinct neuronal dynamics in the subthalamic nucleus (STN) during movement (elbow flexion). Our findings indicate that beta-band suppression during movement, as documented in previous studies [9–12], is predominantly driven by TD patients. Conversely, the PIGD group showed increased high-beta activity, which has been linked to motor rigidity symptoms [13], along with a steeper aperiodic exponential decay, suggesting a more inhibited synaptic balance in the STN during movement. These results highlight the potential of spectral decay components as biomarkers for personalized DBS strategies for PD patients.





Figure 1. Aperiodic-adjusted and aperiodic-component PSDs and grouped boxplots for subtype and rest/movement conditions. A and B, aperiodic-component PSDs for the TD and PIGD groups, respectively. C and D, aperiodic-adjusted PSDs for the TD and PIGD groups, respectively. E, F, and G, boxplots for subtype and rest/movement conditions showing mixed-ANOVA results. (.) 0.05 < p < 0.1; *p < 0.05; **p < 0.01
Acknowledgements
Authors acknowledge the financial support of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES)- Finance Code 001 and CNPq (grant number 313970/2023-8).
References
1. https://doi.org/10.1038/s41582-021-00486-9
2. https://doi.org/10.1016/S0140-6736(21)00218-X
3. https://doi.org/10.1002/acn3.312
4. https://doi.org/10.1016/j.parkreldis.2019.05.024
5. https://doi.org/10.1038/s41593-020-00744-x
6. https://doi.org/10.1038/s41531-018-0068-y
7. https://doi.org/10.1016/j.neuroimage.2017.06.078
8. https://doi.org/10.1523/JNEUROSCI.2041-09.2009
9. https://doi.org/10.1016/j.expneurol.2012.05.013
10. https://doi.org/10.1093/brain/awh106
11. https://doi.org/10.1002/mds.10358
12. https://doi.org/10.1093/brain/awf135
13. https://doi.org/10.1002/mds.26759

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P285: A two-region recurrent neural network reproduces the cortical dynamics underlying subjective visual perception
Tuesday July 8, 2025 17:00 - 19:00 CEST
P285 A two-region recurrent neural network reproduces the cortical dynamics underlying subjective visual perception

Artemio Soto-Breceda1, Nathan Faivre2, João Barbosa3,4, Michael Pereira1


1Univ. Grenoble Alpes, Inserm, Grenoble Institut Neurosciences, 38000 Grenoble, France
2Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
3Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
4Institut de Neuromodulation, GHU Paris Psychiatrie et Neurosciences, Centre Hospitalier Sainte-Anne, Université Paris Cité, Paris, France
Introduction

This study aims to model the cortical activity associated with the detection of visual stimuli, as well as the subjective duration of visual percepts and associated confidence. We propose a two-region neural network model: a sensory region integrating sensory inputs and a decision region with longer integration timescales. The model is constrained by biological parameters to simulate region-dependent temporal integration and includes top-down feedback and excitation-inhibition balance to test hypotheses on the neural basis of perception.


Methods
The model consists of a recurrent rate-based neural network of excitatory (80%) and inhibitory (20%) neurons with GABA, AMPA, and NMDA synapses. The sensory region receives and integrates sensory inputs and projects to a decision region with longer integration timescales. This decision region defines whether and when a near-threshold stimulus is detected. The dynamics of simulated activity in the sensory region were compared to local field potentials recorded with stereotactic EEG in humans undergoing epilepsy monitoring, together with behavioral measures of detection, response times, subjective confidence, and subjective duration collected with a time-reproduction task [1].
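The sketch below shows the basic idea of region-dependent integration timescales in a toy two-unit rate model with top-down feedback and threshold detection; it is a deliberately reduced caricature of the network described above, with all constants chosen for illustration.

```python
import numpy as np

# Toy two-region rate model (our caricature, not the authors' network): a
# fast sensory region integrates the stimulus and receives top-down
# feedback; a slower decision region integrates sensory output. Detection
# is a threshold crossing in the decision region.
dt, T = 1e-3, 2.0
tau_s, tau_d = 0.05, 0.5      # region-dependent integration timescales (s)
w_fb, thresh = 0.3, 0.25      # feedback gain and detection threshold
t = np.arange(0.0, T, dt)
stim = ((t > 0.5) & (t < 1.0)) * 0.8   # near-threshold stimulus pulse

r_s = r_d = 0.0
detected_at = None
for i, s in enumerate(stim):
    r_s += dt / tau_s * (-r_s + s + w_fb * r_d)   # sensory region + feedback
    r_d += dt / tau_d * (-r_d + r_s)              # slow decision integrator
    if detected_at is None and r_d > thresh:
        detected_at = t[i]
print("detection time (s):", detected_at)
```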
Results
The model successfully replicated key behavioral metrics. Qualitatively, simulated activity in the decision region matched high-gamma activity recorded in the anterior insula, while sensory region activity aligned with activity in the inferior temporal cortex during a face detection task. We find that, for example, temporal integration in sensory regions explains the magnitude-duration illusion, where higher intensity stimuli are perceived as longer. We also examined model predictions when altering the E/I ratio by changing the synaptic strength of NMDA receptors in either the excitatory or inhibitory population [2], or modulating the top-down feedback. We intend to test alternative models corresponding to different hypotheses on how temporal integration explains subjective aspects of perception such as duration and confidence.
Discussion
Many studies have provided computational models of perceptual decision-making. However, the neuronal mechanisms underlying the subjective aspects of perception remain poorly understood. Here, starting from a model of decision-making [3], we harness temporal properties of these subjective aspects of perception to isolate the underlying neuronal mechanism. The model is able to predict behavior in perceptual decision-making tasks. This model allows us to investigate how biological parameters such as E/I balance or top-down feedback affect behavior and cortical activity during perceptual decision-making tasks. We will interpret our findings in the context of current theories of consciousness.





Acknowledgements
-
References
1. https://doi.org/10.1101/2024.03.20.585198
2. https://doi.org/10.1523/JNEUROSCI.1371-20.2021
3. https://doi.org/10.1016/S0896-6273(02)01092-9




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P286: Learning in Visual Cortex: Sparseness, Balance, Decorrelation, and the Parameters of the Leaky Integrate-and-Fire Model.
Tuesday July 8, 2025 17:00 - 19:00 CEST
P286 Learning in Visual Cortex: Sparseness, Balance, Decorrelation, and the Parameters of the Leaky Integrate-and-Fire Model.

Martin J. Spencer1*, Marko A. Ruslim1*, Hinze Hogendoorn2, Hamish Meffin1, Yanbo Lian1, Anthony N. Burkitt1,3

1Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria 3010, Australia
2School of Psychology and Counselling, Queensland University of Technology, Kelvin Grove, Queensland 4059, Australia
3Graeme Clark Institute for Biomedical Engineering, University of Melbourne, Melbourne, Victoria 3010, Australia

*Equal first authors. Email: martin.spencer@unimelb.edu.au
Introduction
Sparseness is a known property of information representation in the cortex [1]. A sparse neural code represents the underlying causes of sensory stimuli and is resource efficient [2]. Computational models of sparse coding in the visual cortex typically use an objective function with an information maximization term and a neural activity minimization term, a top-down approach [3]. In contrast, this study trained a spiking neural network using Spike-Timing-Dependent Plasticity (STDP) learning rules [4]. The resulting sparseness, decorrelation, and balance in the network were then quantified; a bottom-up approach [5]. To confirm the mechanisms of sparseness, results were replicated across three models of increasing complexity.
Methods
A biologically grounded V1 model was made up of separate populations of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons with all-to-all connectivity via delta-current synapses. Input was provided by Poisson neurons whose spike rates represented the output of separate ON and OFF neurons, calculated using a centre-surround whitening filter applied to natural images.
The V1 LIF neuron spike rates were maintained at a target rate using a homeostatic threshold adjustment. Synaptic weights were adjusted using a triplet STDP rule [4] for the excitatory-excitatory synapses and a symmetric STDP rule for other connections. Learning was normalised using subtractive and multiplicative normalisation.
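A minimal sketch of a triplet STDP update with a homeostatic threshold is given below, following the general form of the rule in ref. [4]; the traces, constants, and the Poisson stand-in for postsynaptic spiking are illustrative, not the tuned model.

```python
import numpy as np

# Minimal triplet-STDP sketch in the spirit of ref. [4]: depression at each
# presynaptic spike scales with the fast postsynaptic trace; potentiation at
# each postsynaptic spike scales with the presynaptic trace times the slow
# postsynaptic trace. Postsynaptic spiking is a Poisson stand-in for the
# LIF dynamics of the model above.
dt = 1e-3
tau_pre, tau_post1, tau_post2 = 17e-3, 34e-3, 114e-3
A2_minus, A3_plus = 7e-3, 6.5e-3

r_pre = o1 = o2 = 0.0            # synaptic traces
w = 0.5                          # synaptic weight
theta, target_rate = 1.0, 5.0    # homeostatic threshold, target rate (Hz)

for step in range(100_000):
    pre = np.random.rand() < 20 * dt      # 20 Hz presynaptic Poisson spikes
    post = np.random.rand() < 5 * dt      # stand-in postsynaptic spikes
    r_pre += -dt / tau_pre * r_pre + pre
    o1 += -dt / tau_post1 * o1 + post
    o2 += -dt / tau_post2 * o2 + post
    if pre:
        w -= A2_minus * o1                # pairwise depression
    if post:
        w += A3_plus * r_pre * o2         # triplet potentiation
    # homeostatic threshold adjustment toward the target rate
    theta += 1e-4 * (post - target_rate * dt)
```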
Results
Training was performed using 1200 batches of 100 natural image patches presented for 400 ms each (~11 hours). There were 512 LGN neurons (256 ON, 256 OFF) and 500 V1 neurons (400 excitatory, 100 inhibitory). The network achieved a sparse representation, and the level of sparseness was found to depend on the parameters of the LIF model. These mechanisms were additionally explored in a simple single-neuron model and a computationally efficient smaller model (Figure 1). Decorrelation was observed to result from the weights chosen by STDP. ‘Loose’ and ‘tight’ balance was confirmed by comparing the relative strength of excitatory and inhibitory input.
Discussion
In the biologically grounded V1 model, the balance was maintained across long (~1 s) and short (~10 ms) timescales. Pairs of neurons whose receptive fields were highly correlated showed correspondingly strong mutual inhibition, leading to diversity and information maximization in the network.
In all three models, higher sparseness (ς) was caused by lower output spike rates in the LIF neurons (Figure 1 A and C, efficient model). In the efficient and biologically grounded models this was associated with more Gabor-like receptive fields (Figure 1 B and D). Other parameters of the LIF model were also examined, including the membrane time constant, input spike rate, and number of inputs.



Figure 1. (A) Sparseness (ς) measured in the computationally efficient V1 neuron model of 64 neurons with a 5 Hz target mean spike rate. (B) Associated normalised synaptic weights to 9 V1 neurons from the ON (red) and OFF (blue) input neurons. (C-D) 30 Hz target mean spike rate.
Acknowledgements

This work was supported by an Australian Research Council Discovery Grant (DP220101166).
References
[1] https://doi.org/10.1038/s41467-020-14645-x
[2] https://doi.org/10.1038/srep17531
[3] https://doi.org/10.1038/381607a0
[4] https://doi.org/10.1523/JNEUROSCI.1425-06.2006
[5] https://doi.org/10.1101/2024.12.05.627100
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P287: Pulsatile and direct current stimulation in functional network model of decision making
Tuesday July 8, 2025 17:00 - 19:00 CEST
P287 Pulsatile and direct current stimulation in functional network model of decision making

Cynthia Steinhardt*1,2, Paul Adkisson3, Gene Fridman3
1 Simons Society of Fellows, Junior Fellow, New York, New York 10010
2 Center for Theoretical Neuroscience, Zuckerman Brain Science Institute, Columbia University, New York, New York 10027
3 Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland 21287
*Email: cs4248@columbia.edu

Introduction
Pulsatile stimulation has been the primary method for neural implants in sensory restoration and neuropathology treatment (e.g., Parkinson’s, epilepsy) since the first neural implant [1]. Recently, non-invasive transcranial direct/alternating current (DC/AC) stimulation has gained interest, offering broader accessibility without surgery. However, to be viable, effective non-invasive alternatives must match or exceed the efficacy of implants in modulating neural circuits. Pulsatile and DC stimulation effects in complex networks have not been directly compared due to the need for detailed biophysical models. We address this gap.
Methods
Our prior work showed that pulsatile stimulation alters firing patterns in single neurons in complex ways depending on pulse parameters and spontaneous activity [3]. Similarly, we modeled and characterized the effects of DC stimulation on single neurons [4]. Here, we extend these models, modifying leaky integrate-and-fire (LIF) models to include approximations of these effects so that we can accurately simulate local stimulation in a 1000-neuron network. We simulate pulsatile and DC stimulation at equivalent local dosing levels and at behaviorally equivalent levels and compare network effects in a winner-take-all decision-making circuit for motion detection.
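As a toy illustration of the comparison, the sketch below drives a plain LIF neuron with a DC current and with a pulse train delivering the same mean current; it omits the single-neuron approximations described above, and all parameters are arbitrary.

```python
import numpy as np

# Toy LIF comparison of pulsatile vs. DC input at equal mean current (a
# much-simplified stand-in for the modified-LIF approximations described
# above; all parameters are arbitrary).
def lif_rate(current, dt=1e-4, tau=0.02, v_th=1.0):
    v, spikes = 0.0, 0
    for i_in in current:
        v += dt / tau * (-v) + i_in * dt
        if v >= v_th:
            v, spikes = 0.0, spikes + 1   # reset on spike
    return spikes / (len(current) * dt)   # firing rate in Hz

T, dt = 1.0, 1e-4
n = int(T / dt)
dc = np.full(n, 60.0)                           # constant current
pulses = np.zeros(n)
pulses[::int(0.005 / dt)] = 60.0 * 0.005 / dt   # 200 Hz pulses, same mean

print("DC rate (Hz):   ", lif_rate(dc))
print("pulse rate (Hz):", lif_rate(pulses))
```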
Results
The network processes moving dots and determines whether the majority are moving left or right. We identified pulse rates for suprathreshold pulses that match DC stimulation’s effects on the firing rate in the left-motion detection part of the network. At this level, pulsatile stimulation induced a stronger, faster bias toward leftward decisions. When matched for behavioral bias, pulsatile stimulation resisted feedback inhibition and had conflicting effects with recurrent feedback. DC stimulation, in contrast, propagated through the network more strongly due to recurrent excitation but was more affected by feedback inhibition [5].
Discussion
This study provides the first direct comparison of how pulsatile and DC stimulation influence network activity up to the behavioral level, using accurate approximations of electrical stimulation. We show that these two forms of stimulation interact differently with network dynamics, suggesting different therapeutic applications. Additionally, we present open-access tools for modeling, which could enhance patient-specific disease models. These tools allow for mechanistic insights beyond the LIF and threshold models currently used.



Acknowledgements
We thank the Simons Society of Fellows (965377), Gatsby Charitable Trust (GAT3708), Kavli Foundation, and NIH (R01NS110893) for support.
References
1. Loeb, G. E. (2018). Neural prosthetics. Appl Bionics Biomech, 2018, 1435030.
2. Giordano, J., et al. (2017). Mechanisms of tDCS. Dose-Response, 15(1), 1559325816685467.
3. Steinhardt, C. R., et al. (2024). Pulsatile stimulation disrupts firing. Nat Commun, 15(1), 5861.
4. Steinhardt, C. R., & Fridman, G. Y. (2021). DC effects on afferents. iScience, 24(3).
5. Adkisson, P. W., et al. (2024). Galvanic vs. pulsatile effects. J Neural Eng, 21(2), 026021.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P288: Modelling the temperature profile of the retina in response to nanophotonic neuromodulation
Tuesday July 8, 2025 17:00 - 19:00 CEST
P288 Modelling the temperature profile of the retina in response to nanophotonic neuromodulation

Daniel B Richardson1, James Begeng1, Paul R Stoddart1, Tatiana Kameneva*1,2




1Department of Engineering Technologies, School of Engineering, Swinburne University of Technology, Australia




2Iverson Health Innovation Institute, Swinburne University of Technology, Australia


*Email: tkam@swin.edu.au



Electrical stimulation of neurons has been used as a reliable technique to elicit action potentials in implantable devices. Recently, novel optical stimulation techniques have been developed as alternatives to electrical stimulation. One approach involves applying near infrared wavelengths of light to stimulate neurons. Neurophotonic stimulation may increase the resultant visual acuity compared to electrical stimulation as it does not apply any current and thus has no current spread. As a result of applying nanophotonic stimulation, the retina experiences an increase in temperature. For this reason, modelling the temperature profile within the retina is vital in testing the feasibility of optical stimulation techniques.
Step 1: To model the temperature profile in a retina environment, a Monte Carlo simulation was implemented in MATLAB. The environment consisted of four layers: water, gold nanorods, retinal tissue, and a layer of glass. A 750 nm beam was used to simulate near-infrared stimulation at varying powers that matched the experimental values of Begeng et al. (2023). Each layer had specified coefficients obtained from the literature, including the absorption and scattering coefficients, scattering anisotropy, volumetric heat capacity, and thermal conductivity. The simulation models the temperature profile through finite element modelling of the defined geometry. It determines the temperature by tracking the photon paths of the stimulation beam, monitoring how the beam progresses through the tissues via their varying scattering coefficients and refractive indexes. It then models the fluorescence and absorption of the tissues through probabilistic determination. The number of photons absorbed, and the associated power, is then used in conjunction with the heat equation to determine the temperature.
Step 2: A single-compartment Hodgkin-Huxley model of a temperature-sensitive rat RGC was constructed in the NEURON simulation environment. The model uses the Gouy-Chapman-Stern theory of temperature-variant bilayer capacitance, and experimentally-derived temperature dependence for key sodium, potassium, calcium and leak ion channels, as well as cytosolic resistance. Thermal profiles for the pulse durations were approximated using the thermal model described in Step 1.
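For intuition about temperature-dependent gating, a generic Q10 scaling of a rate constant is sketched below; note this is a standard textbook device, whereas the model above uses channel-specific, experimentally derived temperature dependences and a temperature-variant bilayer capacitance.

```python
# Generic Q10 scaling of a gating rate (a standard device; the actual model
# above uses channel-specific, experimentally derived temperature
# dependences). All values are illustrative.
def q10_scale(rate_ref, temp_c, q10=3.0, t_ref=22.0):
    """Scale a rate constant measured at t_ref (deg C) to temp_c (deg C)."""
    return rate_ref * q10 ** ((temp_c - t_ref) / 10.0)

alpha_m_ref = 0.8  # hypothetical gating rate at 22 deg C (1/ms)
for temp in (22.0, 30.0, 37.0):
    print(f"{temp:.0f} C -> {q10_scale(alpha_m_ref, temp):.2f} /ms")
```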

The simulated temperature model demonstrated general agreement with the experimental results, showing comparable peak temperatures and maintaining a consistent trend across the varying pulse durations. Furthermore, the proposed temperature model allows for estimation of the temperature profile on the retinal surface, which is difficult to measure experimentally. The Hodgkin-Huxley model replicated the main features of nanophotonic stimulation, including an initial subthreshold depolarisation hump followed by an action potential, as well as inhibition and excitation phenomena that depended on the pulse duration.





Acknowledgements
-
References
Begeng JM, Tong W, Rosal B, Ibbotson M, Kameneva T, Stoddart PR (2023) Activity of retinal neurons can be modulated by tunable near-infrared nanoparticle sensors. ACS Nano 17(3), 2079–2088. https://doi.org/10.1021/acsnano.2c07663
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P289: When Can Activity-Dependent Homeostatic Plasticity Maintain Circuit-Level Dynamic Properties with Local Activity Information?
Tuesday July 8, 2025 17:00 - 19:00 CEST
P289 When Can Activity-Dependent Homeostatic Plasticity Maintain Circuit-Level Dynamic Properties with Local Activity Information?

Lindsay Stolting*1, Randall D. Beer1

1Cognitive Science Department, Indiana University, Bloomington, IN, USA

*Email: lstoltin@iu.edu

Introduction

Neural circuits are remarkably robust to perturbations that threaten their function. One mechanism behind this robustness is activity-dependent homeostatic plasticity (ADHP), which tunes neural membrane and synaptic properties to ensure moderate and sustainable average activity levels [1]. The dynamics of behaving neural circuits, however, must often satisfy stricter requirements than just reasonable activity levels. For instance, successful behavior may require a specific temporal structure or particular phase relationships between neurons, properties which cannot be specified by time-averaged activity information at the single-neuron level. How, then, does ADHP maintain such properties?

Methods
We explored this question in a computational model of the crustacean pyloric pattern generator, which exhibits a triphasic burst rhythm [2]. We stochastically optimize 100 continuous-time recurrent neural networks to match pyloric burst ordering, then add ADHP to these models by placing two network parameters under homeostatic control. These parameters are tuned according to the temporally averaged activity of the corresponding neuron, relative to some target range [3]. The averaging window and target range are stochastically optimized 10 times for each pyloric network, with the goal of parameterizing an ADHP mechanism that recovers pyloricness after perturbation of the controlled parameters.
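A minimal sketch of this mechanism, in the spirit of ref. [3] but with illustrative sizes and constants, is a continuous-time recurrent network whose biases drift whenever a neuron's running-average activity leaves a target range:

```python
import numpy as np

# Sketch of a CTRNN pool with ADHP in the style of ref. [3]: biases drift
# whenever a neuron's running-average activity leaves its target range.
# Sizes, gains, and windows are illustrative choices.
rng = np.random.default_rng(1)
N = 3
W = rng.normal(0, 2, (N, N))     # fixed synaptic weights
b = np.zeros(N)                  # biases under homeostatic control
y = np.zeros(N)                  # neuron states
avg = np.full(N, 0.5)            # running average of activity

dt, tau, tau_avg, eta = 0.01, 1.0, 50.0, 0.01
lo, hi = 0.2, 0.8                # target activity range

for step in range(100_000):
    act = 1 / (1 + np.exp(-(y + b)))                 # sigmoid outputs
    y += dt / tau * (-y + W @ act)                   # CTRNN dynamics
    avg += dt / tau_avg * (act - avg)                # averaging window
    b += eta * dt * ((avg < lo) * 1.0 - (avg > hi) * 1.0)  # ADHP drift
```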
Results
This results in a data set of ADHP mechanisms that maintain pyloricness in a degenerate set of pyloric network models with varying degrees of success. Though there are typically no true fixed points in these models, we find we can leverage timescale separation assumptions to predict asymptotic parameter configurations. We can then derive general conditions for ADHP’s success, according to whether homeostatic endpoints are also pyloric (Figure 1). More generally, we can predict for any individual pyloric network the range of homeostatic mechanisms that successfully maintain it, and validate these predictions with numerical simulation.
Discussion
Even though temporally defined properties like pyloricness cannot be directly specified by average activity levels, they can be maintained by activity-dependent homeostatic plasticity under specific conditions. To define these conditions, one must consider the set of perturbations with which the circuit may contend, in conjunction with the dynamic properties of the homeostatic mechanism itself. This work therefore suggests several avenues for experimental investigation, where responses to perturbation provide clues about homeostatic mechanisms, and knowledge of homeostatic mechanisms predicts responses to perturbation.




Figure 1. Differently parameterized ADHP mechanisms differentially recover pyloricness in a model circuit. ADHP endpoints are predicted by the overlap between target activity levels and average activity of regulated neurons. The intersection of these pseudo-nullclines may lie in or outside the pyloric region (black), resulting in successful (green), conditionally successful (yellow), or failing (red) ADHP.
Acknowledgements

This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
References
[1] Turrigiano, G. (1999). Homeostatic plasticity in neuronal networks: The more things change, the more they stay the same. Trends in Neurosciences, 22(5), 221–227. https://doi.org/10/frf24n
[2] Harris-Warrick, R. M. (Ed.). (1992). Dynamic biological networks: the stomatogastric nervous system. MIT Press.
[3] Williams, H. (2005). Homeostatic plasticity improves continuous-time recurrent neural networks as a behavioural substrate. Proceedings of the International Symposium on Adaptive Motion in Animals and Machines, AMAM2005. Ilmenau, Germany: Technische Universität
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P290: Rate-based versus spike-triggered contributions in spike-timing–dependent synaptic plasticity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P290 Rate-based versus spike-triggered contributions in spike-timing–dependent synaptic plasticity

Jakob Stubenrauch*1,2, Benjamin Lindner1,2

1Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany
2Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany

*Email: jakob.stubenrauch@rwth-aachen.de
Introduction
Spike-timing-dependent plasticity (STDP) has long been proposed as a phenomenological model class for synaptic learning [1], yet most theoretical frameworks of learning reduce plasticity to effectively rate-based descriptions. The short window of around 20 ms within which spike pairs contribute to STDP [1], however, points to the relevance of precise postsynaptic spike responses. We investigate this timing-sensitive aspect of plasticity by dissecting synaptic dynamics into two contributions: spike pairs that fall into the STDP window by rate-dependent coincidence versus those occurring through direct causation, a crucial distinction that reflects fundamentally different learning mechanisms.

Methods
We develop a theoretical framework for the drift and diffusion of synaptic weights under STDP. We leverage established results [2,3] on the response of leaky integrate-and-fire (LIF) neurons, mean-field theory of spiking networks [4], and recent advances in shot-noise theory [5]. Specifically, we derive a Langevin equation that describes the stochastic evolution of synaptic weights. This framework naturally subdivides the synaptic dynamics into rate-based and correlated contributions. The theory is applied to synapses that deliver Poissonian spikes into a recurrent network of LIF neurons, for which it captures per realization the population mean and variance of the weights. The theory is tested against simulations of spiking neurons.
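A generic form such a description can take is sketched below; the actual drift and diffusion terms in the paper are derived from LIF response theory, so the symbols here are placeholders.

```latex
% Generic weight Langevin equation (illustrative placeholder, not the
% paper's derived expressions):
\frac{dw}{dt} \;=\; \underbrace{A_{\mathrm{rate}}(w)}_{\text{rate-based drift}}
\;+\; \underbrace{A_{\mathrm{corr}}(w)}_{\text{spike-correlation drift}}
\;+\; \sqrt{2D(w)}\,\xi(t),
\qquad \langle \xi(t)\,\xi(t') \rangle = \delta(t - t').
```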
Results
Our analysis quantifies and dissects the dynamics of synaptic weights. The contribution from correlated responses, neglected in effectively rate-based descriptions, increases with the mean synaptic weight and becomes significant even at modest weights, where ~20 concurrent input spikes are needed to reliably elicit action potentials. We apply the theory to characterize a supervised training paradigm mimicking memory consolidation. In this paradigm, the drift and diffusion derived by the theory capture the encoding strength and decay of memory traces and, more importantly, manage to attribute these to rate-based and correlation-dependent contributions, respectively.

Discussion
The precise response of spiking neurons matters for plasticity if synaptic weights are large enough. As we demonstrate, this effect can have a large impact on the success or failure of associative learning. Based on our work, it is thus possible to judge under which circumstances STDP’s strong tuning to closely succeeding spikes is important. Correspondingly, a purely rate-based treatment of STDP may overlook crucial aspects of learning. Future research should extend this approach to different neuron models, network architectures, and training paradigms, and the results should be tested experimentally. Last, it would be of high interest to extend the framework to multiple populations and to recurrent plasticity.




Acknowledgements
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation),
SFB1315 B01, Project-ID No. 327654276 to B. L.
References
1. https://doi.org/10.1523/jneurosci.18-24-10464.1998
2. https://doi.org/10.1103/PhysRevLett.86.2186
3. https://doi.org/10.1103/PhysRevLett.86.2934
4. https://doi.org/10.1023/A:1008925309027
5. https://doi.org/10.1103/PhysRevX.14.041047
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P291: Weighted sparsity regularization for solving the inverse EEG problem: a case study
Tuesday July 8, 2025 17:00 - 19:00 CEST
P291 Weighted sparsity regularization for solving the inverse EEG problem: a case study

Ole Løseth Elvetun1, Niranjana Sudheer*2

1Faculty of Science and Technology, Norwegian University of Life Sciences, P. O Box 5003, NO - 1432, Ås, Norway


2Faculty of Science and Technology, Norwegian University of Life Sciences, P. O Box 5003, NO - 1432, Ås, Norway

*Email: niranjana.sudheer@outlook.com

Introduction
We present weighted sparsity regularization for solving the inverse EEG problem, which helps in the recovery of dipole sources while reducing depth bias. EEG is a non-invasive technique for monitoring cerebral activity. However, it suffers from an ill-posed inverse problem due to weak signals from deep sources. Standard regularization methods have been suggested to tackle this problem, but their solutions show significant spatial dispersion. This study proposes a redundant basis approach combined with a weighted sparsity term to improve recovery and lower spatial dispersion, while reducing the depth bias.
Methods
Our approach is based on theoretical results established in previous studies, but modifications are required to align with the classical EEG framework [1,2]. Generally, any dipole at a particular location can be expressed as a combination of three basis dipoles with independent orientations. We illustrate that employing more than three dipoles, specifically a redundant basis or frame, can enhance localization accuracy. We produce simulated event-related EEG data utilizing SEREEGA [3], an open-source MATLAB toolbox, with 64, 131, and 228 electrode channels. Simulations with three different dipole orientation setups (fixed, limited, and free) are conducted, and performance is analyzed using dipole localization error (DLE), spatial dispersion (SD), and Earth Mover’s Distance (EMD) [3].
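The general shape of such a formulation, with notation assumed rather than taken from refs. [1,2], is:

```latex
% Illustrative form of the weighted-sparsity problem (notation assumed,
% not taken from refs. [1,2]): with lead-field matrix A, data b, and
% coefficients x over a redundant dipole frame, solve
\min_{x}\ \tfrac{1}{2}\,\lVert Ax - b\rVert_2^2 \;+\; \alpha\,\lVert Wx\rVert_1,
\qquad W = \operatorname{diag}(w_1,\dots,w_n),\quad w_i = \lVert A_{:,i}\rVert_2,
% where column-norm weights penalize superficial sources more strongly,
% counteracting the depth bias discussed above.
```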
Results & Discussion
The proposed method performs better than sLORETA and Lp-norm approaches, with lower DLE values and reduced spatial dispersion. The frame-based methodology guarantees effective recovery of dipoles, especially in noise-free environments. We noticed that an increase in the number of frame dipoles resulted in reduced localization errors. The localization accuracy improves when the number of EEG channels is increased, particularly in the limited-orientation setup. A real-world test using EEG Motor Movement data [4,5] showed the practical applicability of this approach.
Conclusion
Weighted sparsity regularization provides an effective approach to EEG inverse problems, enhancing dipole localization and minimizing depth bias. The method is effective for various dipole orientations and adaptable for real-world applications.





Acknowledgements
I would like to thank my supervisor Ole Løseth Elvetun and co - supervisor Bjørn Fredrik Nielsen for providing guidance and support throughout the research. I am also grateful to my friends and family for their encouragement and support.
References
1. https://doi.org/10.1515/jiip-2021-0057
2. https://doi.org/10.1090/mcom/3941
3. https://doi.org/10.1016/j.jneumeth.2018.08.001
4. https://doi.org/10.1109/TBME.2004.827072
5. https://doi.org/10.1161/01.CIR.101.23.e215









Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P292: Single-trial detection of lambda responses in free-viewing EEG measurements
Tuesday July 8, 2025 17:00 - 19:00 CEST
P292 Single-trial detection of lambda responses in free-viewing EEG measurements

Iffah Syafiqah Suhaili*1, Zoltan Nagy1,2, Zoltan Juhasz1
1Department of Electrical Engineering and Information Systems, University of Pannonia, 8200 Veszprem, Hungary
2Heart and Vascular Centre, Semmelweis University, 1085 Budapest, Hungary

*Email: ssiffah@phd.uni-pannon.hu
Introduction

Visual lambda responses are occipital activations evoked by saccadic eye movements. Their study is important for understanding visual processing during natural viewing conditions. Traditionally, lambda waves are detected by averaging many short epochs in which lambda responses are phase-locked to the stimulus. In natural viewing conditions, especially in experiments where trials span many seconds, their detection is difficult, and averaging-based ERP methods are not applicable because saccades occur in an unpredictable, non-time-locked manner. This study presents a novel method that can detect individual lambda responses in single trials without averaging, allowing for more naturalistic experimental designs.


Methods
80 art paintings were presented to 29 healthy volunteers. Each painting was displayed for 8 seconds in a random order, each followed by a 4-second blank screen. 128-channel EEG data were recorded using a Biosemi ActiveTwo EEG device. Participants were instructed to explore the painting and then respond by pressing a LIKE or DISLIKE button. After high-pass (1 Hz) and low-pass (40 Hz) filtering, the signals were decomposed into independent components using the Infomax Independent Component Analysis (ICA) method [1]. Simultaneously, eye movements were recorded with a Tobii Pro Fusion eye-tracker at a 250 Hz sampling rate. As the final step of the pre-processing, the EEG and eye-tracking data were synchronized.
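A sketch of this preprocessing chain in MNE-Python is shown below; the choice of toolbox, file name, and component count are assumptions for illustration (the study states only the filters and the Infomax ICA).

```python
import mne

# Sketch of the preprocessing chain in MNE-Python (toolbox choice, file
# name, and component count are assumptions). Biosemi ActiveTwo data are
# stored as BDF files.
raw = mne.io.read_raw_bdf("session.bdf", preload=True)   # hypothetical path
raw.filter(l_freq=1.0, h_freq=40.0)                      # band-pass 1-40 Hz

ica = mne.preprocessing.ICA(n_components=40, method="infomax", random_state=0)
ica.fit(raw)
ica.plot_components()   # inspect scalp maps for the parieto-occipital source
```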

Results
Besides the usual eye-related artefact components (horizontal and vertical eye movements), ICA decomposition produced a characteristic component displaying a distinct, rhythmic pulse-train pattern during the 8-second viewing period that diminished in the 4-second blank interval. This brain source component was located over the parieto-occipital electrodes (Pz–Oz). Overlaying the eye-tracking events (saccade onset and offset) on the ICA activation plot clearly shows that the pulses are time-locked to the saccade offsets, with an average latency of 82 ms. Fig. 1 illustrates these findings in detail.

Discussion
ICA can reliably detect saccade-related lambda waves in free-viewing experiments lasting at least 15 minutes. This method helps determine the number and temporal distribution of saccades characterizing perceptual behaviour (e.g. engagement, attention) in natural viewing experiments. Lambda wave properties (peak amplitude, peak latency, inter-peak distance) allow further quantitative analysis and can act as synchronization markers in segmenting sessions into saccade-evoked epochs locked to lambda peaks. Identifying the lambda component improves eye-movement artefact removal by including parieto-occipital activations. We hope this method will lead to new experimental approaches that advance our understanding of the human visual system.






Figure 1. ICA results highlighting saccade-related lambda waves. a) ICA activation plot of two stimulus-locked paintings (epoch 22 and 23) highlighting the lambda response component (IC 3) occur only during the stimulus presentation. b) Scalp topography map of IC 3 over parieto-occipital region. c) A zoomed-in single trial segment circled in (a), displaying three lambda peaks aligned with saccade events.
Acknowledgements
This research was funded by the University Research Fellowship Programme (EKOP) (Code: 2024-2.1.1-EKOP-2024-00025/58) of Ministry of Culture and Innovation from the National Fund for Research, Development and Innovation.
References
1. Lee, T.-W., Girolami, M., & Sejnowski, T. J. (1999). Independent Component Analysis using an extended Infomax algorithm for mixed subgaussian and supergaussian sources. Neural Computation, 11(2), 417–441. https://doi.org/10.1162/089976699300016719
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P293: Numerical Analysis of AP Propagation Parameter Thresholds Under Varied Space and Time Discretization
Tuesday July 8, 2025 17:00 - 19:00 CEST
P293 Numerical Analysis of AP Propagation Parameter Thresholds Under Varied Space and Time Discretization

Lucas Swanson*1, Erin Munro Krull1, Laura Zittlow1

1Mathematical Sciences Department, Ripon College, Ripon, WI, US

*Email: swansonl@ripon.edu
Introduction

Much is known about how discretization affects the numerical solution of PDEs. However, little is known about how discretization affects finding a parameter threshold for a PDE. In particular, we consider the sodium conductance propagation threshold (gNaT), the threshold for AP propagation when varying g̅_Na. Preliminary results show that this threshold, if known for simple morphologies, may be used to predict the gNaT of other, more complex morphologies.
Methods
We modeled cells using a Hodgkin-Huxley type model with parameters for a rat neocortical L5 pyramidal cell axon [1], in the NEURON software. Using a binary search, we calculated the gNaT of any morphology from a given stimulus to a given AP propagation test site. We explored the effects of the discretization parameters for time and space, dt and dx, on the gNaT of 10 randomly generated morphologies. We varied dt from 2⁻⁸ ms to 2⁻⁵ ms, and dx from 2⁻⁸ λ to 2⁻⁴ λ, where λ is the electrotonic length.
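The binary search itself is generic; a sketch is shown below, where propagates() is a hypothetical predicate wrapping a NEURON run that applies the stimulus and checks for an AP at the chosen test site.

```python
# Generic binary search for gNaT (sketch): propagates() is a hypothetical
# predicate wrapping a NEURON simulation. Assumes propagation succeeds at
# hi and fails at lo, so the threshold is bracketed.
def find_gnat(propagates, lo=0.0, hi=1.0, tol=1e-6):
    """Smallest g_Na (within tol) for which the AP still propagates."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if propagates(mid):
            hi = mid    # propagated: threshold is at or below mid
        else:
            lo = mid    # failed: threshold is above mid
    return hi
```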
Results
Our results show that increased dt leads to increased gNaT values, regardless of morphology and dx, and that increasing dx can cause gNaT values to diverge sporadically, especially in morphologies with short or tightly spaced branches.
Discussion
Further investigation should be done to find the true nature of dx’s effects on gNaT, since the sporadic divergence of gNaT seen in our results could be attributed to the locations of branches being re-discretized, and/or to short branches having significantly different behaviors. That is, our results show that the accuracy of calculated parameter thresholds may be linked to morphology.




Acknowledgements
I would like to thank the faculty of the Ripon College math department, which includes my mentor for this project, Professor Erin Munro Krull, all of whom gave me advice and counsel. I would also like to thank the organizers of Ripon College's Summer Opportunities for Advanced Research (SOAR) program, as well as the many donors of the college who helped fund the program.
References
● Traub, R. D., Contreras, D., Cunningham, M. O., Murray, H., LeBeau, F. E., Roopun, A., Bibbig, A., Wilent, W. B., Higley, M. J., & Whittington, M. A. (2005). Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. Journal of Neurophysiology, 93(4), 2194. https://doi.org/10.1152/jn.00983.2004


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P294: Inhibitory-Targeted Plasticity in Developing Thalamocortical Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P294 Inhibitory-Targeted Plasticity in Developing Thalamocortical Networks

Matthew P. Szuromi*1,2, Gabriel K. Ocker2,3

1Graduate Program for Neuroscience, Boston University, Boston, USA
2Department of Mathematics and Statistics, Boston University, Boston, USA
3Center for Systems Neuroscience, Boston University, Boston, USA

*Email: mszuromi@bu.edu

Introduction

The maturation of thalamocortical (TC) afferents is a key feature of critical periods (CPs) for primary sensory cortices [1]. Bienenstock-Cooper-Munro (BCM) theory for synaptic plasticity has been effective in describing thalamic projections onto pyramidal (Pyr) neurons in layer 4 (L4) of primary visual cortex (V1) [2]. However, these models often consider only a homogeneous population of cortical neurons, neglecting the recurrent connectivity within cortex and the various cell types innervated by TC axons, such as parvalbumin+ (PV+) interneurons. To address this, we develop an excitatory-inhibitory thalamocortical network model equipped with triplet BCM spike-timing-dependent plasticity (STDP) and rigorously describe its dynamics.
Methods
Our model comprises three neuronal populations: cortical excitatory (E), cortical inhibitory (I), and thalamic (X), the last of which can have correlated spike trains. Neurons are modeled as a mutually exciting Hawkes process [3]. We examine systems where X-to-E, X-to-I, and E-to-I synapses can be plastic and update according to a triplet BCM STDP rule [4, 5]. Using a standard separation of timescales, we derive dynamics for the mean interpopulation synaptic weights in terms of moments of the neural activity, calculated via the path integral formalism [6, 7, 8, 9]. We then apply numerical methods to assess how parameters (static weights, correlations, and STDP parameters) affect the stability and strength of the interpopulation weights.
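For intuition, a discretized simulation of a two-unit mutually exciting (linear) Hawkes process with exponential kernels is sketched below; the rates, coupling, and kernel are arbitrary illustrative choices.

```python
import numpy as np

# Discretized two-unit mutually exciting (linear) Hawkes process with
# exponential kernels (illustrative rates and coupling, not the paper's).
rng = np.random.default_rng(2)
dt, T, tau = 1e-3, 60.0, 0.02
base = np.array([5.0, 5.0])            # baseline intensities (Hz)
J = np.array([[0.0, 0.8],              # J[i, j]: kernel integral of j -> i
              [0.8, 0.0]])

lam = base.copy()
counts = np.zeros(2)
for _ in range(int(T / dt)):
    spikes = rng.random(2) < lam * dt                   # thinned Bernoulli step
    counts += spikes
    lam += dt / tau * (base - lam) + J @ spikes / tau   # jump on spikes, decay
print("empirical rates (Hz):", counts / T)
```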
Results
When only TC synapses are plastic, TC weights strengthen in response to increased thalamic correlations. Further, we find that corticocortical inhibition must be sufficiently strong (i.e., the ratio of the mean I-to-E weight to the mean E-to-I weight must be sufficiently large) for both X-to-E and X-to-I weights to stabilize at nonzero values. Additionally, we analyze the network when E-to-I synapses are also plastic. We determine how parameters of the STDP rule and the network influence the trajectories and equilibria of the synaptic weight dynamics in response to varied thalamic correlations. In particular, we describe regimes where the trajectory of the mean E-to-I weight either mimics or opposes the trajectories of the TC weights.
Discussion
In this work, we extend models using triplet BCM STDP to excitatory-inhibitory networks. In L4 of V1, inhibitory synapses from PV+ interneurons onto Pyr neurons strengthen prior to the CP [10]. Our results suggest a possible explanation: strong inhibitory synapses are necessary for TC synapses to potentiate and stabilize. Experiments have also indicated that during the CP for V1, visual deprivation induces simultaneous TC depression and potentiation of Pyr to PV+ synapses [11]. Our results describe parameter regimes where this phenomenon can occur, suggesting potential plasticity rules for synapses onto PV+ cells during the CP.



Acknowledgements
M.P.S. acknowledges the Neurophotonics Center at Boston University for their support.
References
1. https://doi.org/10.1016/j.neuron.2020.01.031
2. https://doi.org/10.1038/381526a0
3. https://doi.org/10.1093/biomet/58.1.83
4. https://doi.org/10.1523/JNEUROSCI.1425-06.2006
5. https://doi.org/10.1073/pnas.1105933108
6. https://doi.org/10.1103/PhysRevE.59.4498
7. https://doi.org/10.1093/cercor/bhy001
8. https://doi.org/10.1371/journal.pcbi.1005583
9. https://doi.org/10.1103/PhysRevX.13.041047
10. https://doi.org/10.1523/JNEUROSCI.2979-10.2010
11. https://doi.org/10.7554/eLife.38846
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P295: How baseline activity determines neural entrainment by transcranial alternating current stimulation (tACS) in recurrent inhibitory-excitatory networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P295 How baseline activity determines neural entrainment by transcranial alternating current stimulation (tACS) in recurrent inhibitory-excitatory networks

Saeed Taghavi*1,2, Gianluca Susi1, Alireza Valizadeh1,2, Fernando Maestú1

1Zapata-Briceño Institute of Neuroscience, Madrid, Spain
2Physics Department, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran

*Email: saeed.taghavi.v@gmail.com


Introduction
Neuronal oscillations play a key role in cognition and can be modulated by transcranial alternating current stimulation (tACS). However, the mechanisms underlying network-level entrainment remain unclear. We investigate how a balanced excitatory-inhibitory network of adaptive exponential integrate-and-fire neurons responds to sinusoidal stimulation. We analyze phase-locking to determine how external rhythmic inputs influence neural synchronization at different baseline network states.
Methods
We simulate a recurrent EI network that receives Poisson-distributed background input. Three baseline synchronization levels are studied, reflecting the degree of natural synchronization in neuronal activity within the network before any external stimulation is applied. Additionally, tACS-like stimulation is applied at frequencies ranging from 5 to 60 Hz with five different amplitudes (3, 4, 6, 8, and 10). Each condition is repeated over nine trials to ensure reliability. To quantify network entrainment, we compute the phase-locking value (PLV) between the population activity and the stimulation. Furthermore, we calculate the spike-field coherence (SFC) of individual neurons and measure changes in SFC with and without stimulation to assess how neuronal firing aligns with the external signal.
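The PLV computation is standard; a sketch on surrogate signals using the Hilbert transform is shown below (the study applies it to simulated population activity and the stimulation waveform).

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two signals via the Hilbert transform."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Surrogate check (the study uses simulated population activity instead):
fs, f = 1000, 10.0
t = np.arange(0.0, 5.0, 1.0 / fs)
stim = np.sin(2 * np.pi * f * t)                            # tACS-like drive
pop = np.sin(2 * np.pi * f * t + 0.4) + 0.5 * np.random.randn(t.size)
print(f"PLV = {plv(pop, stim):.2f}")                        # near 1: strong locking
```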
Results
Our results show that baseline network synchrony strongly influences entrainment. Networks with higher intrinsic synchrony exhibit stronger phase locking with the stimulation. When the stimulation frequency is close to the endogenous frequency, PLV increases with stimulation amplitude, suggesting that stronger inputs enhance entrainment only when the stimulation frequency matches the endogenous frequency. Frequency-dependent effects emerge, with the most robust responses occurring near the network’s intrinsic oscillation frequency. Individual neurons display varying phase coherence, with some aligning strongly to the stimulation while others remain weakly affected.
Discussion

We discovered that tACS-induced neural entrainment behaves in a way that challenges conventional expectations. While one might assume that higher baseline synchrony leads to broader entrainment, we found the opposite. Networks with low baseline synchrony actually exhibit broader locking across a wider range of external frequencies. Conversely, highly synchronized networks show stronger locking, but it is tightly confined to the vicinity of the baseline frequency. This counterintuitive result underscores the delicate balance between baseline synchrony and tACS effectiveness, highlighting the need for nuanced approaches in cognitive and therapeutic applications.



Figure 1. (a) Entrainment of population activity to tACS varies with network synchrony and stimulation strength. Higher synchrony or amplitude increases peak PLV but narrows the high-PLV region. (b) Stimulation does not significantly alter firing rates but enhances phase coherence. (c) The change in spike phase coherence shows a peak when stimulation matches the network frequency.
Acknowledgements


References
[1]https://doi.org/10.1016/j.heliyon.2024.e41034
[2]https://doi.org/10.1101/2023.05.19.541493
[3]https://doi.org/10.1016/j.neuroimage.2022.118953
[4]https://doi.org/10.3390/biomedicines10102333
[5]https://doi.org/10.3389/fnsys.2022.827353



Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P296: The modelling of the action potentials in myelinated nerve fibres
Tuesday July 8, 2025 17:00 - 19:00 CEST
P296 The modelling of the action potentials in myelinated nerve fibres
K. Tamm1*, T. Peets1, J. Engelbrecht1,2

1Tallinn University of Technology, Department of Cybernetics, Tallinn, Estonia
2Estonian Academy of Sciences, Tallinn, Estonia
*kert.tamm@taltech.ee


Introduction. The classical Hodgkin-Huxley (HH) model [1] describes the propagation of an action potential (AP) in unmyelinated axons. In many cases, however, axons have a myelin sheath. A theoretical model is proposed describing AP propagation in myelinated axons, drawing inspiration from the studies of Lieberstein [2], who included the possible effect of inductance. The Lieberstein-inspired model (in the form of coupled partial differential equations (PDEs)) can describe all the essential effects characteristic of the formation and propagation of an AP in an unmyelinated axon. A phenomenological model for a myelinated axon is then described, including the influence of the structural properties of the myelin sheath and the radius of the axon.

Methods. The model equations are solved numerically using the pseudospectral method (PSM) [3]. Briefly, the main point of the PSM is that the discrete Fourier transform (DFT) can be used to approximate space derivatives, thereby reducing the PDE to a system of ordinary differential equations (ODEs), which is then integrated in time with standard ODE solvers. Here the solver is implemented in Python using NumPy. The parameters of the model are collected from experiments (most of them from the classical HH paper [1]) or estimated separately from experimental observations.
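To make the PSM step concrete, here is a minimal sketch on a toy diffusion equation (the abstract's coupled axon PDEs are not reproduced here): the DFT turns the spatial derivative into a multiplication by wavenumbers, leaving an ODE system for a standard time integrator.

    import numpy as np
    from scipy.integrate import solve_ivp

    N, L = 256, 2 * np.pi
    x = np.linspace(0, L, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

    def rhs(t, u):
        # u_xx computed spectrally: multiply by -k^2 in Fourier space
        return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

    u0 = np.exp(-10 * (x - np.pi) ** 2)          # localized initial pulse
    sol = solve_ivp(rhs, (0.0, 1.0), u0)         # standard ODE solver in time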

Results. Using the parameters from the experiments, we investigated the numerical solutions of the noted model for the unmyelinated axon and demonstrated that the solutions remain in the physiologically plausible range and fulfil the key characteristics of nerve signalling (annihilation of counter-propagating signals, threshold, refractory period). The model includes the structural properties of the myelin sheath: the μ-ratio (longitudinal geometry) and the g-ratio (perpendicular geometry). The key difference from the classical HH model is that in the Lieberstein-inspired model the signal propagates along the axon as a wave, a consequence of retaining the inductance.

Discussion. The goal of constructing yet another equation for AP propagation along the axon is a clearer physical interpretation: we start from an elementary form of the Maxwell equations, modified to include the influence of myelination on the propagating signal. It is important to stress that the proposed continuum-based model is philosophically similar to how the transmission-line equations are composed. The 'unit cell' of the myelinated axon in the model is composed of a node of Ranvier and the adjacent myelinated section. Having a pair of PDEs with a straightforward connection to the underlying physics could be useful for investigating causal connections in the context of nerve signalling.


Acknowledgements
This research was supported by the Estonian Research Council (PRG 1227). Jüri Engelbrecht acknowledges the support from the Estonian Academy of Sciences.
References
[1] https://doi.org/10.1113/jphysiol.1952.sp004764
[2] https://doi.org/10.1016/0025-5564(67)90026-0
[3] https://doi.org/10.1007/978-3-030-75039-8


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P297: Autonomous Generation of Neuronal Connection by Axon Guidance Factors
Tuesday July 8, 2025 17:00 - 19:00 CEST
P297 Autonomous Generation of Neuronal Connection by Axon Guidance Factors

Atsuya Tange*1, Shun Ogawa1, Minoru Owada1, Yuko Ishiwaka1


1SoftBank Corp., Tokyo, Japan

*Email: atsuya.tange01@g.softbank.co.jp

Introduction

Humans' robust cognitive and linguistic functions emerge from intricate connections among numerous neurons in the neocortex. To implement machine learning models that perform such highly cognitive tasks, theoretically clarifying the mechanisms that generate such network connections is an important challenge. To address it, we propose an autonomous neuron-connection model inspired by biological neuronal growth mechanisms [1]. This work focuses on axon elongation and the creation of synaptic connections between source and target neurons. We believe that this approach will improve our understanding of neural connectivity and provide the brain's initial state for life-long learning [2].

Methods
The implemented model has two types of axon guidance factor sources, attractant and repellent, in a 2D space. The model is based on self-propelled particles (SPPs) [3] and the XY model [4]. The growth direction follows the gradient of extracellular guidance factors and is subject to noise. The tip of the axon is regarded as an SPP that observes only local information, such as the concentration and its gradient in its surroundings. The process includes axon branching, which occurs stochastically and creates another SPP. The tips move along the gradient field while avoiding each other to prevent axon overlap, until they reach the dendrites of the target cells and create synapses.
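A minimal sketch of one growth step for an axon tip treated as an SPP, assuming an XY-like alignment of the heading with the local guidance gradient plus angular noise (the functional form and parameter values are illustrative, not taken from the abstract):

    import numpy as np

    rng = np.random.default_rng(0)

    def tip_step(pos, theta, grad_fn, dt=0.1, speed=1.0, kappa=0.5, noise=0.3):
        gx, gy = grad_fn(pos)                # local guidance-factor gradient
        target = np.arctan2(gy, gx)          # preferred growth direction
        # XY-like relaxation of the heading toward the gradient, plus rotational noise
        theta += kappa * np.sin(target - theta) * dt \
                 + noise * np.sqrt(dt) * rng.normal()
        pos = pos + speed * dt * np.array([np.cos(theta), np.sin(theta)])
        return pos, theta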
Results
Figure 1 shows snapshots of the simulation. The black, green, and red circles are source neurons, target neurons, and repulsion sources, respectively. The axon branches of each source neuron succeed in finding the dendrites of the target neurons in this environment.
The simulations are performed using Python (Fig. 1) and our original brain simulation framework, Bramuwork (P48 in [5]), with similar results. Bramuwork organizes graphs in a database: each node stores attributes and methods (programs) that define its dynamics, and edges connect nodes, arranging the network connectivity and supporting hierarchical structure. Nodes and edges are used to model somas, dendrites, axons, and repellent factor sources.
Discussion
We note that the SPPs in the model observe only local information, such as the density and gradient of the chemical substances, and do not use global information. At present, the gradient field induced by the chemical substances does not depend on time; however, diffusion and transport occur during the growth process and may affect the resulting neuronal network. These phenomena must be included without violating causality. The proposed 2D model could be generalized to 3D by replacing the XY spin interaction with a spherical one.

Bramuwork enables us to modify and examine models during running time. Neurons and connections can be created and deleted during simulation, and users can search and extract subsets of data for analysis.



Figure 1. Axon elongation under an axon guidance environment
Acknowledgements

References
[1] https://doi.org/10.1126/science.274.5290.1123
[2] https://doi.org/10.1038/s42256-022-00452-0
[3] https://doi.org/10.1103/PhysRevLett.75.1226
[4] https://doi.org/10.1093/acprof:oso/9780199577224.001.0001
[5] https://doi.org/10.1007/s10827-022-00841-9
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P298: The important role of astrocytic Na+ signaling in the astrocyte-neuron communication
Tuesday July 8, 2025 17:00 - 19:00 CEST
P298 The important role of astrocytic Na+ signaling in the astrocyte-neuron communication

Pawan K Thapaliya1, Alok Bhattarai1, and Ghanim Ullah1,*
1Department of Physics, University of South Florida, Tampa, FL 33620, USA.
*Email: gullah@usf.edu

Introduction

Emerging evidence indicates that neuronal activity-evoked changes in Na+ concentration in astrocytes ([Na]a) represent a special form of excitability, which is tightly linked to all other major ions in the astrocyte and extracellular space, as well as to bioenergetics, neurotransmitter uptake, and neurovascular coupling. Furthermore, [Na]a exhibits significant heterogeneity at the subcellular, cellular, and brain-region levels.

Methods
We develop biophysical models to determine how [Na]a can regulate astrocytic function. We further investigate what the spatial heterogeneity of [Na]a at different scales means for astrocyte-neuron communication. Our models are supported by extensive imaging data of Na+ signals in astrocytes under different conditions.
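As a flavour of what such a model involves, the sketch below integrates a single-compartment Na+ balance between a lumped influx and a Hill-type Na+/K+-ATPase efflux; the equations and parameter values are illustrative assumptions, not the published model.

    import numpy as np
    from scipy.integrate import solve_ivp

    def na_balance(t, y, j_in=0.4, v_max=1.2, km=10.0):
        na = y[0]                                  # astrocytic [Na+] in mM
        j_pump = v_max * na**3 / (na**3 + km**3)   # Hill-type NKA efflux
        return [j_in - j_pump]                     # net Na+ flux

    # 200 s of dynamics from a resting [Na+] of 8 mM
    sol = solve_ivp(na_balance, (0.0, 200.0), [8.0], max_step=0.5)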
Results
Our work highlights the importance of [Na]a in almost every aspect of astrocytic function. For example, we have shown that the observed brain-region-specific heterogeneity in [Na]a signaling leaves cortical astrocytes more susceptible to Na+ and Ca2+ overload under metabolic stress than hippocampal astrocytes. The model also predicts that activity-evoked [Na]a transients result in significantly higher ATP consumption in cortical astrocytes than in the hippocampus. The difference in ATP consumption is mainly due to the different expression levels of NMDA receptors in astrocytes in the two brain regions [1]. The model also closely reproduces the dynamics of extra- and intracellular pH under different conditions [2]. Furthermore, in conjunction with experimental data, our models reveal that Na+ concentration varies across cellular compartments, from one cell to another, and across brain regions.

Discussion
Overall, this study emphasizes the significance of incorporating Na+ homeostasis in computational models of neuro-astrocytic coupling, specifically when studying brain (dys)function under metabolic stress. Our study also highlights how, by maintaining different Na+ concentrations, astrocytes can differentially regulate the function of different neurons, or of different synapses emanating from the same neuron.



Acknowledgements
This work is supported by the National Institutes of Health through grant number R01NS130916.
References
[1] Thapaliya P, Pape N, Rose CR, Ullah G (2023). Modeling the heterogeneity of sodium and calcium homeostasis between cortical and hippocampal astrocytes and its impact on bioenergetics. Front Cell Neurosci, 17, 1035553.
[2] Everaerts K, Thapaliya P, Pape N, Durry S, Eitelmann S, Ullah G, Rose CR (2023). Inward Operation of Sodium-Bicarbonate Cotransporter 1 Promotes Astrocytic Na+ Loading and Loss of ATP in Mouse Neocortex during Brief Chemical Ischemia. Cells, 12, 2675.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P299: iCSD may produce spurious results in dense electrode arrays
Tuesday July 8, 2025 17:00 - 19:00 CEST
P299 iCSD may produce spurious results in dense electrode arrays

Joseph Tharayil*1,2,Esra Neufeld2, Michael Reimann1,3
1 Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL) Campus Biotech, Geneva, Switzerland
2Foundation for Research on Information Technologies in Society (IT'IS), Zurich, Switzerland
3Open Brain Institute, Lausanne, Switzerland

*Email: tharayiljoe@gmail.com
Introduction

Estimation of the current source density (CSD) is a commonly used method for processing and interpreting local field potential (LFP) signals, estimating the locations of the neural sinks and sources that give rise to the LFP. However, recent in vivo experiments using dense electrode arrays have found surprising CSD patterns, with high-spatial-frequency oscillations between current sinks and sources [1].

Methods
We analytically compute the contribution of a two-dimensional Gaussian current source centered on an electrode array to the CSD (using the standard CSD method [2]) as a function of array density, current-source width, and current-source location. We show that spurious results, mistaking true sources for sinks and vice versa, are obtained when the inter-electrode spacing is small relative to the width of the current distribution (Fig. 1a).
To study the practical relevance of this issue, we simulated LFP recording in a detailed model of rat cortex (200'000 morphologically detailed neurons, Fig. 1b) [3]. We estimate CSD from these recordings using the inverse CSD (iCSD) method [4] and, for a variety of electrode densities and CSD estimation parameters, compare the results to the ground-truth current distribution and to the "non-negative" CSD, a metric similar to the standard CSD method but which ignores regions where sources and sinks would be confounded.
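For reference, the standard CSD estimate is the negative second spatial difference of the LFP along the array; the sketch below shows how the inter-electrode spacing h enters squared in the denominator, which is where small spacings amplify deviations from the homogeneity assumption. The conductivity value is illustrative.

    import numpy as np

    def standard_csd(lfp, h, sigma=0.3):
        # lfp: (channels, time) array ordered by depth
        # h: inter-electrode spacing; sigma: extracellular conductivity
        d2 = lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]   # second difference along depth
        return -sigma * d2 / h**2                 # standard CSD estimate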
Results
With high-density arrays, our model of rat cortex produces the same high-spatial-frequency oscillation between sinks and sources observed in [1] (Fig. 1c.i-iv). As array density increases, the correlation between iCSD and ground-truth current density decreases (Fig. 1c). Modifying iCSD parameters improves the correlation, but the correlation between ground-truth CSD and non-negative CSD is consistently better than the correlation between ground-truth CSD and iCSD.

Discussion
Our results indicate that the high-spatial-frequency oscillations observed in in vivo CSD computed from high-density electrode arrays are likely due to confusion between sinks and sources. This confusion occurs because the assumption underlying CSD estimation, namely that current sources are homogeneous over some radius in the plane perpendicular to the electrode array, is not satisfied in vivo. While more accurately specifying this radius parameter does improve the CSD estimate, no value yields a better correlation between iCSD and ground truth than that between non-negative CSD and ground truth, suggesting that the true CSD is not homogeneous at any scale.




Figure 1. Fig. 1: a: A positive current source can produce a negative CSD contribution. b: Model of rat cortex (from [3]). c: Comparison of iCSD and objective CSD for various array spacings.
Acknowledgements
References
[1]http://dx.doi.org/10.7554/eLife.97290
[2]https://doi.org/10.1016/0165-0270(88)90056-8
[3]https://doi.org/10.1101/2023.05.17.541168

[4]https://doi.org/10.1016/j.jneumeth.2005.12.005
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P300: The neuron-synapse dynamics control correlational timescales in the developing visual system
Tuesday July 8, 2025 17:00 - 19:00 CEST
P300 The neuron-synapse dynamics control correlational timescales in the developing visual system

Ruben A. Tikidji-Hamburyan1*, Matthew T. Colonnese1
1School of Medicine. and Health. Sciences, the George Washington Univ., Washington, D.C., USA

*Email: rath@gwu.edu


Introduction: During early development, retinal spontaneous wave-like activity provides positional (spatial) information, encoded in coarse-grained (>100 ms) inter-neuron spike correlations [1], needed for the refinement of retinothalamic, thalamocortical, and intracortical connections. The formation of subcortical and cortical networks proceeds in parallel with the refinement of retinothalamic connections; therefore, spatial information must be transferred by an unrefined, imprecise thalamic network. Thalamocortical relay neurons (TCs) receive 10 to 20 inputs from neighboring ganglion cells at this age [2,3], which should cause fast (<100 ms) timescale correlations in TC firing. Here, we model how these correlational timescales are regulated.
Methods: TC neurons were simulated as a two-compartment conductance-based model (dendrosomatic: NaF, KDr, NaP, CaL, CaT, KA, SK, and H currents and Ca2+ dynamics; axonal: NaF, KDr) derived from an adult model [4]. The parameters were fitted to reproduce the dynamics of mouse TCs recorded at postnatal day 7 (P7) using genetic algorithms with nondominated sorting [5] and Krayzman's adaptive multiobjective optimization [6]. The network model consists of 120 TC neurons activated by spikes of retinal ganglion cells (rGCs) recorded ex vivo at P6-P9. Connection probabilities and synaptic weights are modeled as Gaussian functions of distance. Each synapse was modeled in two stages: presynaptic depression and postsynaptic NMDAR and AMPAR currents [2,7].
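A sketch of the distance-dependent connectivity rule; the abstract specifies a Gaussian dependence, but the parameter values below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def gaussian_connect(d, p_max=0.6, sigma_p=50.0, w_max=1.0, sigma_w=50.0):
        # d: rGC-to-TC distances; both connection probability and
        # synaptic weight fall off with distance as Gaussians
        p = p_max * np.exp(-d**2 / (2 * sigma_p**2))
        w = w_max * np.exp(-d**2 / (2 * sigma_w**2))
        connected = rng.random(d.shape) < p
        return connected, w * connected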
Results: We show that with the synaptic convergence observed at P7, either adult neuronal dynamics or adult synaptic current composition causes fast-timescale correlations and a dramatic decrease in the spatial information encoded in TC spikes; we therefore call these correlations "parasitic". However, parasitic correlations are suppressed independently of convergence if the model replicates P7 neuronal dynamics and the dominance of slow NMDAR currents, the landmark property at this age [3]. Moreover, the interplay between neuronal and synaptic dynamics suppresses only the parasitic correlations, keeping the informative slow-timescale correlations intact. In contrast, parasitic correlations are negligible in networks with adult convergence and do not need to be suppressed.
Discussion: Our results suggest that developing neurons regulate their membrane and synaptic dynamics to preserve information critical for proper circuit formation by suppressing non-informative parasitic correlations. As we showed, parasitic correlations can be invariantly suppressed while informative correlations pass through an unrefined and imprecise network. Our modeling opens critical general questions: how are correlations transferred, and how does a network regulate correlational timescales? The answers go beyond neuronal excitability alone, as for synchrony transfer [8], and require synergistic regulation of both neuronal and synaptic dynamics.



Acknowledgements
This work was supported by R01EY022730 and R01NS106244
References
[1] https://doi.org/10.1523/JNEUROSCI.19-09-03580.1999
[2] https://doi.org/10.1002/cne.22223
[3] https://doi.org/10.1016/S0896-6273(00)00166-5
[4] https://doi.org/10.1371/journal.pcbi.1006753
[5] https://doi.org/10.1109/4235.996017
[6] https://doi.org/10.7554/eLife.84333
[7] https://doi.org/10.1523/JNEUROSCI.4276-07.2008
[8] https://doi.org/10.1016/j.neuron.2013.05.030


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P301: A shared algorithmic bound for human and machine performance on a hard inference task
Tuesday July 8, 2025 17:00 - 19:00 CEST
P301 A shared algorithmic bound for human and machine performance on a hard inference task

Daniele Tirinnanzi*1, Rudy Skerk1, Jean Barbier1,2, Eugenio Piasini1

1International School for Advanced Studies (SISSA), Trieste, Italy
2International Centre for Theoretical Physics, Trieste, Italy

*Email: dtirinna@sissa.it

Introduction

Recently, a successful approach in neuroscience has been to train deep nets (DNs) on tasks that are behaviorally relevant for humans or animals, with the goal of identifying emerging patterns in the implementation of key computations [1, 2], or to formulate compact hypotheses for physiological and perceptual phenomena [3, 4]. However, less attention has been given to the comparison of the limitations on the space of algorithms that are accessible to human cognition and DNs, as a method to generate (rather than test) hypotheses on shared architectural or learning constraints. Here we compare the performance of humans and DNs on the planted clique problem (PCP), a well-studied abstract task with known theoretical performance bounds [5, 6].
Methods
The PCP consists of detecting a set of K interconnected nodes (a "clique") in a random graph of N nodes. We represent graphs as adjacency matrices and analyze performance across different N values. Four DNs are trained and tested on a binary classification task at 9 N values: a multilayer perceptron (MLP), a convolutional neural network (CNN), and two Vision Transformers [7], one pretrained (ViTpretrained) and one trained from scratch (ViTscratch). Fifteen human subjects perform a two-alternative forced-choice task at 2 N values, selecting which of two presented graphs contains the clique. For each N, we measure accuracy over varying K values and fit a sigmoid to extract the clique detection threshold (K₀), used to compare agent performance.
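A sketch of the threshold extraction, assuming a logistic psychometric function rising from chance (0.5 in the two-alternative task) to 1, so that K₀ marks the 75%-correct midpoint; the exact parameterization used by the authors may differ, and the data points here are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(k, k0, beta):
        # accuracy vs clique size K; at K = K0 accuracy is 0.75 (midpoint)
        return 0.5 + 0.5 / (1.0 + np.exp(-beta * (k - k0)))

    k_vals = np.array([5, 10, 15, 20, 25, 30])            # hypothetical K values
    acc = np.array([0.52, 0.55, 0.68, 0.85, 0.95, 0.99])  # hypothetical accuracies
    (k0, beta), _ = curve_fit(psychometric, k_vals, acc, p0=[15.0, 0.5])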
Results
As shown in Figure 1, the CNN exhibits the lowest K₀ (highest clique detection sensitivity) at all N values except N = 200, 300 and 400. At these N values, the CNN performs poorly, making it impossible to estimate K₀. At all N values, the ViTpretrained and ViTscratch perform similarly, while the MLP consistently shows the lowest sensitivity, except at N = 100. Human performance in the task is comparable to that of DNs, with sensitivity at N = 300 closely matching that of the ViTpretrained and ViTscratch. Performance of all agents, both biological and artificial, falls far from the theoretical bounds of the problem.
Discussion
Our results show that different DNs achieve comparable performance in the PCP. This performance level, far from the problem’s theoretical bounds, is also observed in humans, suggesting a shared algorithmic limit between artificial and biological agents. Large-scale human experiments will help further characterize this threshold across all N values.
With its well-defined bounds, the PCP provides a novel framework for investigating the space of algorithms accessible to humans and DNs in simple visual inference tasks. Such interdisciplinary efforts - combining theoretical, computational, and behavioral perspectives - are essential for deepening our understanding of intelligence in both artificial and biological systems [8, 9].



Figure 1. Clique detection thresholds (K₀, log-scaled, y axis) as a function of the number of nodes (N, x axis) for humans (pink triangles) and DNs (MLP: red dots; ViTpretrained: dark green dots; ViTscratch: purple dots; CNN: light green dots). The green and the yellow lines indicate the statistical [5] and computational [6] bounds, respectively.
Acknowledgements
The HPC Collaboration Agreement between SISSA and CINECA granted access to the Leonardo cluster. DT is a PhD student enrolled in the National PhD program in Artificial Intelligence, XXXIX cycle, course on Health and life sciences, organized by Università Campus Bio-Medico di Roma.
References
[1] https://doi.org/10.1038/nn.4244
[2] https://doi.org/10.48550/arXiv.1803.07770
[3] https://doi.org/10.1038/s41593-019-0520-2
[4] https://doi.org/10.1016/j.cub.2022.12.044
[5] https://doi.org/10.1017/S0305004100053056
[6] https://doi.org/10.48550/arXiv.1304.7047
[7] https://doi.org/10.48550/arXiv.2010.11929
[8] https://doi.org/10.1017/S0140525X16001837
[9] https://doi.org/10.1038/s41593-018-0210-5


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P302: Is the cortical dynamics ergodic?
Tuesday July 8, 2025 17:00 - 19:00 CEST
P302 Is the cortical dynamics ergodic?

Ferdinand Tixidre*1, Gianluigi Mongillo2,3, Alessandro Torcini1
1Laboratoire de Physique Théorique et Modélisation, CY Cergy Paris Université, Cergy-Pontoise, France
2School of Natural Sciences, Institute for Advanced Study, Princeton, NJ, USA
3Institut de la Vision, Sorbonne Université, Paris, France


*Email: ferdinand.tixidre@cyu.fr

Introduction

Cortical neurons in vivo show significant temporal variability in their spike trains even under virtually identical experimental conditions. This variability is partly due to the intrinsic stochasticity of spike generation. To account for the observed levels of variability, one needs to assume additional fluctuations in activity over longer timescales [1, 2]. But what is their origin? One theory suggests they result from non-ergodic network dynamics [3] arising from partially symmetric synaptic connectivity, consistent with anatomical observations [4]. However, it is unclear whether such ergodicity breaking occurs in networks of spiking neurons, given the fast temporal fluctuations in the synaptic inputs [5].


Methods
To address these questions, we study sparsely connected networks of inhibitory leaky integrate-and-fire neurons with arbitrary levels of symmetry, q, in the synaptic connectivity. The connectivity matrix ranges from random (q=0) to fully symmetric (q=1). Neurons also receive a constant excitatory drive, balanced by recurrent synaptic inputs. To assess ergodicity, we estimate single-neuron firing rates over increasing time intervals, T, starting from different initial membrane voltage distributions (for the same network). If the dynamics is ergodic, the difference, D, between estimates from different initial conditions should approach zero as 1/T for large T.
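The ergodicity diagnostic can be sketched as follows, assuming spike times from two runs of the same network started from different initial conditions (the naming and implementation details are illustrative):

    import numpy as np

    def ergodicity_gap(spikes_a, spikes_b, T_grid):
        # spikes_a/spikes_b: lists of spike-time arrays, one per neuron,
        # from two different initial conditions of the same network
        D = []
        for T in T_grid:
            ra = np.array([np.sum(s < T) / T for s in spikes_a])
            rb = np.array([np.sum(s < T) / T for s in spikes_b])
            D.append(np.mean(np.abs(ra - rb)))
        return np.array(D)   # ergodic dynamics: D decays as 1/T for large T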

Results
This is indeed what happens in random networks (q = 0; Fig. 1(a)). In partially symmetric networks (q > 0), the onset of the "ergodic" regime occurs at longer and longer times. The situation becomes dramatic for the fully symmetric network (q = 1), where D does not decay even for time windows five orders of magnitude longer than the membrane time constant (Fig. 1(a)); the network dynamics is non-ergodic, at least in a weak sense. In this regime, the network activity is sparse, with a large fraction of almost-silent neurons, and the auto-covariance function of the spike trains exhibits long timescales (Fig. 1(b)). Both features are routinely observed in experimental recordings [6, 7].

Discussion
Taken together, our results support the idea that many features of cortical activity can be parsimoniously explained by non-ergodicity of the network dynamics. In particular, in this regime the activity level of single neurons can change significantly depending on the "microscopic" initial conditions, which are beyond experimental control (Fig. 1(c-d)), providing a simple explanation for the large trial-to-trial fluctuations observed in experiments.





Figure 1. (a) D as a function of time for different values of q: q=0 (blue); q=0.5 (green); q=0.8 (orange); q=0.9 (red); q=0.95 (brown); q=1.0 (black). (b) Auto-correlation of synaptic currents for different q. (c-d) Cumulative firing rate of a single neuron for q=0.8 (c) and q=0.9 (d). Shades of the main color represent different replicas. The insets show the instantaneous firing rate of the same neuron.
Acknowledgements
F.T. and A.T. received financial support by the Labex MME-DII (Grant No. ANR-11-LBX-0023-01) and by CY Generations
(Grant No ANR-21-EXES-0008). G.M. work is supported by grants ANR-19-CE16-0024-01 and ANR-20-CE16-0011-02 from the French National Research Agency and by a grant from the Simons Foundation (891851, G.M.).


References
[1] https://doi.org/10.1016/j.neuron.2010.12.037
[2] https://doi.org/10.1167/18.8.8
[3] https://www.biorxiv.org/cgi/content/short/2022.03.14.484348
[4] https://doi.org/10.1126/science.abj5861
[5] https://doi.org/10.1038/s41598-019-40183-8
[6] https://doi.org/10.1007/s00359-006-0117-6
[7] https://doi.org/10.1038/nn.3862


Speakers
avatar for Alessandro TORCINI

Alessandro TORCINI

Professor, CY Cergy Paris Universite'
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P303: Sleep-like Homeostatic and Associative Intra- and Inter-Areal Plasticity Enhances Cognitive and Energetic Performance in Hierarchical Spiking Network
Tuesday July 8, 2025 17:00 - 19:00 CEST
P303 Sleep-like Homeostatic and Associative Intra- and Inter-Areal Plasticity Enhances Cognitive and Energetic Performance in Hierarchical Spiking Network

Leonardo Tonielli1, Cosimo Lupo1, Elena Pastorelli1, Giulia De Bonis1, Francesco Simula1, Alessandro Lonardo1, Pier Stanislao Paolucci1

1Istituto Nazionale di Fisica Nucleare, Sezione di Roma
Introduction

Can hierarchical bio-inspired AI spiking networks and biological brains engaged in incremental learning benefit from unsupervised plasticity during an offline deep-sleep-like period? We show that simultaneous intra- and inter-areal plasticity enhances the cognitive and energetic benefits of deep-sleep-like activity in a thalamo-cortical model, inspired by the cortical organizing principle [1] and the homeostatic-associative sleep hypothesis as in [2, 3], that learns, retrieves, and classifies handwritten digits from few examples. This outperforms the results presented in [4], where deep sleep is limited to cortico-cortical plasticity.
Methods
The network is a two-area spiking model (Fig. 1A) using integrate-and-fire neurons with spike-frequency adaptation. Each layer is composed of excitatory and inhibitory populations. The input consists of MNIST images preprocessed with a HOG filter [5] (30 training, 250 test). The perceptual stream is released from the thalamus and propagates through plastic feedforward connections to the cortex, which encodes memories within neural assemblies elicited by specific contextual stimuli. Sleep-like dynamics is stimulated by non-specific cortical noise generating slow-oscillation activity that promotes memory replay and thus consolidates learning through homeostatic and associative processes within cortical synapses and the thalamo-cortical loop.
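The HOG preprocessing step can be sketched with scikit-image; the parameter values below are illustrative, not those used in the study.

    import numpy as np
    from skimage.feature import hog

    def hog_encode(img28):
        # img28: 28x28 grayscale MNIST digit -> HOG feature vector
        return hog(img28, orientations=8, pixels_per_cell=(7, 7),
                   cells_per_block=(1, 1), feature_vector=True)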
Results
We assessed the cognitive and energetic performance of the network by measuring the most-active-neuron classification accuracy (Fig. 1B), the network's mean firing rate (C), and the synaptic change (C, D) over 2000 seconds of sleep. We compared thalamo-cortical plastic sleep with cortico-cortical plasticity only. Our findings indicate that full thalamo-cortical plasticity strongly enhances classification performance (B) and firing-rate downscaling (C) while preserving the same associative-homeostatic behaviour at the cortico-cortical synaptic level (D). Specifically, we observed a significant 5% improvement in classification accuracy and a 25% reduction in firing rate, enabling the network to classify better while consuming less energy.
Discussion
We proposed a minimal thalamo-cortical model that classifies images drawn from the MNIST set of handwritten digits and is capable of improving cognitive performance through homeostatic-associative cortical plasticity during deep-sleep-like activity. While cortical sleep is important to normalize high-level representations and to develop new synapses, our new results suggest that thalamo-cortical sleep is fundamental to coordinating cortical activation and regulating its waking activity. This effect might also be beneficial to deep neural network algorithms, which lack this generalization feature, and is also relevant for cerebral neural networks.



Figure 1. Solid lines: full plasticity, dotted: cortico-cortical only. Deep-sleep after training with 3 examples / digit class (A) Network’s structure. (B) Classification from most active neuron (C) Mean firing rate during classification and overall synaptic change. (D) cortico-cortical synaptic change: synapses encoding assemblies (blue), same class (yellow) different class (red). 100 configurations.
Acknowledgements
Work cofunded by the European Next Generation EU grants, Italian grants CUP I53C22001400006 (FAIR PE0000013 PNRR) and CUP B51E22000150006 (EBRAINS-Italy IR00011 PNRR). APE parallel/distributed lab at INFN Roma, BRAINSTAIN.
Leonardo Tonielli is a PhD student of the National PhD program in Artificial Intelligence, XL cycle, Health and life sciences, organized by Università Campus Bio-Medico di Roma.

References
[1] https://doi.org/10.1016/j.tins.2012.11.006
[2] https://doi.org/10.1016/j.neuron.2013.12.025
[3] https://doi.org/10.1016/j.neuron.2016.03.036
[4] https://doi.org/10.1038/s41598-019-45525-0
[5] https://doi.org/10.1109/CVPR.2005.177

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P304: Impulsivity Enhances Random Exploration and Boosts Performance in Loss but not Gain Contexts
Tuesday July 8, 2025 17:00 - 19:00 CEST
P304 Impulsivity Enhances Random Exploration and Boosts Performance in Loss but not Gain Contexts

Lingyu Meng1, Alekhya Mandali1,2, Hazem Toutounji*1,2,3

1School of Psychology, University of Sheffield, Sheffield, UK
2The Neuroscience Institute, University of Sheffield, Sheffield, UK
3Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, UK


*Email: h.toutounji@sheffield.ac.uk

Introduction
People often encounter decisions that may lead to gains or losses. As they learn the value of available choices, whether positive in the case of gains and rewards or negative in the case of losses, people also need to balance gathering information (exploration) against capitalising on their current knowledge (exploitation). Exploration itself can either be random or directed towards reducing uncertainty [1]. While psychiatric traits like impulsivity are known to influence exploration [2], there is no account of how this influence relates to the learning context, such as gain or loss. This study investigates how impulsivity modulates different exploration strategies and decision performance in a context-dependent manner.

Methods
Human participants (N = 115) completed a two-armed bandit task in which, in different rounds, they could win or lose points. Each arm delivered or cost either a fixed or a variable (uncertain) number of points. Learning and exploration behaviour was modelled using reinforcement learning. Crucially, trial-by-trial uncertainty was incorporated into the model using a Kalman filter for the learning process and a hybrid choice model with three components [1]: value-dependent random exploration, and uncertainty-dependent random and directed exploration. Impulsivity was measured using the UPPS-P Impulsive Behaviour Scale [3]. A general linear mixed model quantified the interaction between impulsivity, exploration strategies, and context.
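A sketch of the learning and choice components, following the hybrid model of [1] in spirit; the logistic link, weight names, and parameter values are simplifications, not the fitted model.

    import numpy as np

    def kalman_update(mu, var, reward, obs_noise_var=16.0):
        # Kalman-filter value update for one arm; the gain acts as an
        # uncertainty-weighted learning rate
        gain = var / (var + obs_noise_var)
        return mu + gain * (reward - mu), (1.0 - gain) * var

    def p_choose_first(mu, sigma, w):
        dv = mu[0] - mu[1]                 # value difference: random exploration
        ds = sigma[0] - sigma[1]           # relative uncertainty: directed exploration
        tu = np.hypot(sigma[0], sigma[1])  # total uncertainty: random exploration
        z = w[0] * dv + w[1] * ds + w[2] * dv / tu
        return 1.0 / (1.0 + np.exp(-z))    # logistic stand-in for a probit link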

Results
Participants engaged in significantly more value-dependent random exploration and less uncertainty-dependent random exploration in the loss context compared to the win context. However, impulsive individuals showed the opposite trend, relying significantly more on uncertainty-dependent random exploration in the loss context. Impulsivity was also positively linked to task performance in loss contexts, suggesting that impulsive individuals adaptively leveraged random exploration to manage uncertainty. In other words, impulsive individuals engaged in more uncertainty-dependent random exploration, especially when facing losses, and benefited from this strategy.

Discussion

Our findings highlight the adaptive role of impulsivity in uncertain environments, particularly those leading to losses. Impulsive individuals appear to be more sensitive to total uncertainty, effectively using random exploration to improve performance. These results contrast with prior studies that emphasise the maladaptive nature of impulsivity, suggesting instead its potential benefits in high-stakes loss contexts. Our findings also contradict prospect theory [4], showing more risk aversion to losses than to gains. Further, this win-loss asymmetry is amplified in impulsive individuals, highlighting the importance of taking individual traits into account when developing theories of human learning and decision making.



Acknowledgements
This work was funded by the University of Sheffield.
References
[1] https://doi.org/10.1016/j.cognition.2017.12.014
[2] https://doi.org/10.1038/s41467-022-31918-9
[3] https://doi.org/10.3389/fpsyt.2019.00139
[4] https://doi.org/10.2307/1914185
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P305: Developing a Digital Twin of the Drosophila Optical Lobe: A Large-Scale Autoencoder Trained on Natural Visual Inputs Using Complete Connectome Data
Tuesday July 8, 2025 17:00 - 19:00 CEST
P305 Developing a Digital Twin of the Drosophila Optical Lobe: A Large-Scale Autoencoder Trained on Natural Visual Inputs Using Complete Connectome Data

Keisuke Toyoda*1, Naoya Nishiura1, Masataka Watanabe1

1The University of Tokyo, Tokyo, Japan

*Email: toyoda-keisuke527@g.ecc.u-tokyo.ac.jp

Introduction

The optic lobe is the main visual system of Drosophila, involved in functions such as motion detection [2]. Recent advances in connectome projects have provided near-complete synaptic maps [1,3,8], enabling detailed circuit analyses. A recent study trained a connectome-based neural network to reproduce the motion-detection properties of neurons T4 and T5, assuming vector teaching signals such as optical flow, which are absent in biological circuitry. In this study, we use the right optic lobe's connectivity from FlyWire [5,8] to build a large-scale autoencoder in which the visual input itself serves as the teaching signal [6]. In doing so, we aim to develop a digital twin of the Drosophila optic lobe under biologically plausible training conditions.

Methods
We derived a synaptic adjacency matrix from the entire right optic lobe, yielding about 45,000 nodes and over 4.5 million edges [5]. Photoreceptors (R1–R6) served as both input and output in an autoencoder that preserves feedforward and feedback connections [6]. We trained it with natural video stimuli, adjusting synaptic weights to minimize reconstruction error between initial and reconstructed signals. Each iteration also incorporated slight temporal offsets to assess predictive capacity. Neuronal activity was then analyzed by topological distance from the photoreceptors, allowing us to track signal propagation through deeper layers [2].
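A minimal sketch of the architecture's core idea, assuming a fixed binary wiring mask derived from the connectome (PyTorch; the tanh nonlinearity and the unrolled recurrence are illustrative choices, not the authors' specification):

    import torch
    import torch.nn as nn

    class ConnectomeAutoencoder(nn.Module):
        def __init__(self, wiring_mask):              # (n, n) binary adjacency
            super().__init__()
            n = wiring_mask.shape[0]
            self.register_buffer("mask", wiring_mask.float())      # fixed wiring
            self.weight = nn.Parameter(0.01 * torch.randn(n, n))   # learned strengths

        def forward(self, x, steps=5):
            # x: (batch, n) activity; photoreceptor entries carry the stimulus
            for _ in range(steps):                    # recurrent propagation
                x = torch.tanh(x @ (self.weight * self.mask))
            return x                                  # read out R1-R6 entries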
Results
After training, the autoencoder accurately reconstructed photoreceptor inputs, achieving low mean squared error across varied visual contexts. Neurons beyond superficial lamina layers showed moderate activity, implying that deeper circuits were engaged, though not intensely. Under prolonged stimulation, activation patterns stabilized, suggesting recurrent loops that dampen fluctuations. These results align with reports that feedback modulates photoreceptors to maintain sensitivity [6]. Performance analyses indicated that minor temporal offsets improved predictive accuracy, hinting that the network captures short-term correlations in visual input.
Discussion
Our findings show that a connectome-based autoencoder, using the entire right optic lobe, can reconstruct visual inputs while incorporating known feedback loops. By preserving anatomical wiring [5,8], the model reveals how structural constraints inform function. Compared to approaches that highlight local motion detection [4] or rely on supervised learning [3], our unsupervised method uncovers emergent coding without explicit tasks. Although deep-layer neurons were only moderately active, their engagement suggests hierarchical processing aids reconstruction [2]. Future studies could dissect subnetworks for contrast gain or motion detection to clarify how feedback refines perception [1,6].



Acknowledgements
This work has been supported by the Mohammed bin Salman Center for Future Science and Technology for Saudi-Japan Vision 2030 at The University of Tokyo (MbSC2030) and JSPS KAKENHI Grant Number 23K25257.
References
[1] https://doi.org/10.7554/eLife.57443
[2] https://doi.org/10.1146/annurev-neuro-080422-111929
[3] https://doi.org/10.1038/s41586-024-07939-3
[4] https://doi.org/10.1016/j.cub.2015.07.014
[5] https://doi.org/10.1038/s41592-021-01330-0
[6] https://doi.org/10.1371/journal.pbio.1002115
[7] https://doi.org/10.1007/s00359-019-01375-9
[8] https://doi.org/10.1038/s41586-024-07558-y


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P306: Computational investigation of wave propagation in a desynchronized network
Tuesday July 8, 2025 17:00 - 19:00 CEST
P306 Computational investigation of wave propagation in a desynchronized network

Lluc Tresserras Pujadas*1, Leonardo Dalla Porta1, Maria V. Sanchez-Vives1,2
1Systems Neuroscience, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
2ICREA, Passeig Lluís Companys, Barcelona, Spain


*Email: tresserrasi@recerca.clinic.cat

Introduction

The cerebral cortex exhibits a rich repertoire of spontaneous spatiotemporal activity patterns that strongly depend on the brain's dynamical regime. Its dynamics can range from highly synchronized states (e.g., slow-wave sleep), characterized by the presence of slow oscillations (SO), to more asynchronous patterns (e.g., awake states). However, under certain conditions slow waves can spontaneously emerge and propagate within awake cortical networks, such as in sleep deprivation [1], lapses of attention [2], or brain lesions [3]. Although recent studies have described this phenomenon, the mechanisms facilitating slow-wave percolation into desynchronized cortical areas remain poorly understood.


Methods
To investigate this question, we employed a biophysically realistic two-dimensional computational model simulating the desynchronized activity characteristic of awake states [4]. By inducing slow oscillations in a localized cortical area, we investigated how slow waves percolate into neighboring awake regions. Specifically, we examined how changes in the excitatory/inhibitory balance and in the structural connectivity of the network can enhance or reduce the percolation of slow waves into desynchronized areas. To quantify slow-wave propagation in the desynchronized network, we analyzed evoked network activity using different percolation metrics, such as the range of activation and shared information across the network.
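One of the percolation metrics, the range of activation, can be sketched as the farthest distance from the SO focus reached by supra-threshold evoked activity (the naming and thresholding rule are illustrative assumptions):

    import numpy as np

    def percolation_range(rates, positions, focus, threshold):
        # rates: evoked firing rates per site; positions: (n, 2) site coordinates
        d = np.linalg.norm(positions - focus, axis=1)   # distance to SO focus
        active = rates > threshold                      # supra-threshold sites
        return d[active].max() if active.any() else 0.0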

Results
Our results indicate that increasing the proportion of long-range postsynaptic connections in excitatory neurons enhances global synchronization, facilitating the propagation of SO activity into desynchronized regions. We also examined the impact of inhibition on slow wave propagation by modulating the excitatory/inhibitory balance in the SO activity region of the network. Reducing inhibition increased cortical excitability and local synchronization within the SO region, thereby enhancing the spread of slow oscillations within the desynchronized network.

Discussion
In summary, we showed that increasing the proportion of long-range excitatory connections enhances global synchronization, while reducing inhibition promotes local synchronization and neuronal excitability, both facilitating the spread of slow oscillations into desynchronized areas. These findings are further supported by the different percolation metrics, reinforcing the idea that structural and functional properties of the network play a crucial role in determining cortical vulnerability to slow-wave percolation. Together, our results are a first step toward mechanistically understanding the dynamical changes that occur in the lesioned brain, offering a path to the development of future therapeutic strategies for neurological disorders.





Acknowledgements
Funded by PID2020-112947RB-I00 financed by MCIN/ AEI /10.13039/501100011033 and by European Union (ERC, NEMESIS, project number 101071900) to MVSV and PRE2021-101156 financed by the Spanish Ministry of Science and Innovation.
References
[1] Vyazovskiy, V. V., et al. (2011). Local sleep in awake rats. Nature, 472, 443-447.
[2] Andrillon, T., et al. (2021). Predicting lapses of attention with sleep-like slow waves. Nat Commun, 12, 3657.
[3] Massimini, M., et al. (2024). Sleep-like cortical dynamics during wakefulness and their network effects following brain injury. Nat Commun, 15, 7207.
[4] Barbero-Castillo, A., et al. (2021). Impact of GABAA and GABAB inhibition on cortical dynamics and perturbational complexity during synchronous and desynchronized states. J Neurosci, 41, 5029-5044.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P307: A Computational Model to Study Effects of Hidden Hearing Loss in Noisy Environments
Tuesday July 8, 2025 17:00 - 19:00 CEST
P307 A Computational Model to Study Effects of Hidden Hearing Loss in Noisy Environments

Siddhant Tripathy*1, Maral Budak2, Ross Maddox3, Gabriel Corfas3, Michael T. Roberts3, Anahita H. Mehta3, Victoria Booth4, Michal Zochowski1,5

1Department of Physics, University of Michigan, Ann Arbor, USA
2Department of Microbiology and Immunology, University of Michigan, Ann Arbor, USA
3Kresge Hearing Research Institute, University of Michigan, Ann Arbor, USA
4Department of Mathematics, University of Michigan, Ann Arbor, USA
5Biophysics Program, University of Michigan, Ann Arbor, USA

*Email: tripps@umich.edu

Introduction

Hidden Hearing Loss (HHL) is an auditory neuropathy leading to reduced speech intelligibility in noisy environments despite normal audiometric thresholds. One of the leading hypotheses for this degraded performance is myelinopathy, a permanent disruption of the myelination patterns of type 1 spiral ganglion neuron (SGN) fibers [1,2]. Previous studies of location discriminability in medial superior olive (MSO) cells in the left and right hemispheres, as a function of the interaural time difference (ITD), have shown that myelinopathy produces signatures of HHL [3]. However, the effect of noise on location discriminability is unknown.
Methods
To investigate these effects, we developed a physiologically based model that incorporates SGN fiber activity to sound stimuli processed through a peripheral auditory system model [4]. To simulate myelinopathy, we introduced random variations in the position of myelination heminodes, which generates phase shifts in the spike timing of affected fibers. To test the subsequent effects on sound localization, we constructed a network model that simulates the propagation of SGN responses to cochlear nuclei and the MSO populations. We varied the location of the sound impulse by introducing a phase shift in the input in one ear relative to the other, with background noise signals kept stationary.
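The ITD manipulation can be sketched as a sample shift of one ear's copy of the signal, with independent stationary noise per ear (a simplification of the peripheral model, with illustrative naming):

    import numpy as np

    def itd_pair(signal, itd_seconds, fs, noise_level=0.0, seed=0):
        rng = np.random.default_rng(seed)
        shift = int(round(itd_seconds * fs))     # ITD expressed in samples
        left = signal + noise_level * rng.standard_normal(signal.size)
        # np.roll wraps around, which is acceptable for a periodic sketch
        right = np.roll(signal, shift) + noise_level * rng.standard_normal(signal.size)
        return left, right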
Results
Upon adding noise to the sound stimuli, we find that spikes in a given SGN fiber's spike train are shifted inhomogeneously, reducing the phase locking of single fibers to the sound. The effects of myelinopathy on population behavior are thus more pronounced in the presence of noise. Subsequently, in the localization network, we find that sensitivity to ITD is reduced under myelinopathy, and that this effect is significantly exacerbated by noisy background stimuli, a signature of HHL.
Discussion
We find that noisy environments exacerbate HHL symptoms. This model may be useful in understanding the downstream impacts of SGN neuropathies.




Acknowledgements
This research was supported in part by National Institutes of Health grants NIH MH135565 (MZ and ST) and R01DC000188 (GC).
References
[1] https://doi.org/10.1038/ncomms14487
[2] Budak, M., Grosh, K., Sasmal, A., Corfas, G., Zochowski, M., & Booth, V. (2021). Contrasting mechanisms for hidden hearing loss: Synaptopathy vs myelin defects. PLoS Comput Biol, 17, e1008499. https://doi.org/10.1371/journal.pcbi.1008499
[3] Budak, M., Roberts, M. T., Grosh, K., Corfas, G., Booth, V., & Zochowski, M. (2022). Binaural processing deficits due to synaptopathy and myelin defects. Front Neural Circuits, 16, 856926. https://doi.org/10.3389/fncir.2022.856926
[4] https://doi.org/10.1121/1.1453451 (PMID: 12051437)


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P308: Brain-Inspired Recurrent Neural Network Featuring Dendrites for Efficient and Accurate Learning in Classification Tasks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P308 Brain-Inspired Recurrent Neural Network Featuring Dendrites for Efficient and Accurate Learning in Classification Tasks

Eirini Troullinou*1,2, Spyridon Chavlis1, Panayiota Poirazi1

1Institute of Molecular Biology and Biotechnology, Foundation for Research, and Technology-Hellas, Heraklion, Greece
2Institute of Computer Science, Foundation for Research, and Technology-Hellas, Heraklion, Greece

*Email: eirini_troullinou@imbb.forth.gr

Introduction

Artificial neural networks (ANNs) have achieved substantial advancements in addressing complex tasks across diverse domains, including image recognition and natural language processing. These networks rely on a large number of parameters to attain high performance; however, as the complexity of ANNs increases, the challenge of training them efficiently also escalates [1]. In contrast, the biological brain, which has served as a fundamental inspiration for ANN architectures [2], exhibits remarkable computational efficiency by processing vast amounts of information with minimal energy consumption [3]. Moreover, biological neural networks demonstrate robust generalization capabilities, often achieving effective learning with limited training samples, a phenomenon known as few-shot learning.

Methods
In an effort to develop a more biologically plausible computational model, we propose a sparse, brain-inspired recurrent neural network (RNN) that incorporates biologically motivated connectivity principles. This approach is driven by the computational advantages of dendritic processing [4], which have been extensively studied in biological neural networks. Specifically, our model enforces structured connectivity constraints that emulate the physical relationships between dendrites, neuronal somata, and inter-neuronal connections. These biologically inspired connectivity rules are implemented via structured binary masking, thereby regulating the network's architecture based on empirical neurophysiological observations.
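The structured binary masking can be sketched as an element-wise mask on the recurrent weight matrix (PyTorch; the cell design and nonlinearity are illustrative choices, not the authors' specification):

    import torch
    import torch.nn as nn

    class MaskedRNNCell(nn.Module):
        def __init__(self, n_units, mask):       # mask: (n_units, n_units) binary
            super().__init__()
            self.w_rec = nn.Parameter(torch.randn(n_units, n_units) / n_units**0.5)
            self.register_buffer("mask", mask.float())   # fixed dendritic wiring

        def forward(self, x, h):
            # only mask-permitted recurrent connections contribute to the update
            return torch.tanh(x + h @ (self.w_rec * self.mask).t())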

Results
To assess the efficacy of the proposed model, we conducted a series of experiments on benchmark image and time-series datasets. The results indicate that our brain-inspired RNN attains the highest accuracy achieved by a conventional (vanilla) RNN while utilizing fewer trainable parameters. Furthermore, when the number of trainable parameters is increased, our model surpasses the peak performance of the vanilla RNN by a margin of 3–20%, depending on the dataset. In contrast, the conventional RNN exhibits overfitting tendencies, leading to significant performance degradation.

Discussion
In summary, we present a biologically inspired RNN architecture that incorporates dendritic processing and sparse connectivity constraints. Our findings demonstrate that the proposed model outperforms traditional RNNs in both image and time-series classification tasks. Additionally, the model achieves competitive performance with fewer parameters, highlighting the potential role of dendritic computations in machine learning. These results align with experimental evidence suggesting the critical contribution of dendrites to efficient neural processing, thereby offering a promising direction for future ANN development.



Acknowledgements
This work was supported by the NIH (GA: 1R01MH124867-04), the TITAN ERA Chair project under Contract 101086741 within the Horizon Europe Framework Program of the European Commission, and the Stavros Niarchos Foundation and the Hellenic Foundation for Research and Innovation under the 5th Call of Science and Society "Action Always strive for excellence – Theodoros Papazoglou" (DENDROLEAP 28056).
References
[1] Abdolrasol, M. G, et al. (2021). Artificial neural networks based optimization techniques: A review. Electronics, 10(21), 2689.
[2] Sejnowski, T. J. (2020). The unreasonable effectiveness of deep learning in artificial intelligence. PNAS, 117(48), 30033-38.
[3] Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab, 21(10), 1133-1145.
[4] Poirazi, P., & Papoutsi, A. (2020). Illuminating dendritic function with computational models. Nat Rev Neurosci, 21(6), 303-21.

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P309: Macaque retina simulator
Tuesday July 8, 2025 17:00 - 19:00 CEST
P309 Macaque retina simulator


Simo Vanni*1, Henri Hokkanen2

1Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
2Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland

*Email: simo.vanni@helsinki.fi


Introduction


We have been building a phenomenological macaque retina simulator with the aim of providing biologically plausible spike trains for downstream visual cortex simulations. Containing a wide array of biologically relevant information is key to an accurate starting point for building the next step in the visual processing cascade. The primate retina dissects visual scenes into three major high-resolution retinocortical streams. The most numerous retinal ganglion cell (RGC) types, midget and parasol cells, are further divided into ON and OFF subtypes. These four RGC populations have well-known anatomical and physiological asymmetries, which are reflected in the spike trains received by downstream circuits. Computational models of the visual cortex, however, rarely take these asymmetries into account.


Methods

We collected published data on ganglion cell densities [1] and dendritic diameters [2, 3] as a function of eccentricity for parasol and midget ON & OFF types. Spatial receptive fields were modelled as an elliptical difference-of-Gaussians model or a spatially detailed variational autoencoder model, based on spatiotemporal receptive field data [4, 5]. The three temporal receptive field models comprise a linear temporal filter, dynamic contrast gain control [6-8], and a subunit model accounting for both center subunit [9] and surround [10] nonlinearity and fast cone adaptation [11]. Finally, we added cone noise, quantified by [12], to all three temporal models to account for correlated background firing in distinct ganglion cell types [13].
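For the simpler of the two spatial models, a difference-of-Gaussians receptive field can be sketched as below (circular rather than elliptical, with illustrative gain and radius parameters):

    import numpy as np

    def dog_rf(x, y, xc, yc, r_center, r_surround, k_center=1.0, k_surround=0.9):
        # center-minus-surround Gaussian profile; the elliptical variant adds
        # separate sigmas along two axes and a rotation angle per unit
        r2 = (x - xc)**2 + (y - yc)**2
        center = k_center / (np.pi * r_center**2) * np.exp(-r2 / r_center**2)
        surround = k_surround / (np.pi * r_surround**2) * np.exp(-r2 / r_surround**2)
        return center - surround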
Results

Figure 1A and B show how synthetic receptive fields are arranged into a two-dimensional array. The temporal impulse response (C) for the dynamic gain control model has kernel dynamics varying with contrast. Parasol and midget unit responses to temporal frequency and contrast show the expected behavior, with parasol sensitivity peaking at a higher temporal frequency and showing compressive nonlinearity with increasing contrast (D, F). Dynamical temporal model responses to full-field luminance onset show the expected onset and offset dynamics (F, G). A drifting sinusoidal grating at 4 Hz evokes an oscillatory response at the stimulus frequency (H).


Discussion

Our retina model can be adjusted for varying cone noise and unit gain (firing rate) levels and accepts mp4 videos as stimulus input. The software is programmed in Python and supports GPU acceleration. Moreover, we have strived for a modular code design to support future development.
Our model has multiple limitations. It is monocular and accounts for the temporal hemifield only. It assumes a stable luminance adaptation state and does not consider chromatic input or eye movements. Optical aberration is implemented with a fixed spatial filter.
Despite these limitations, we believe it provides a physiologically meaningful basis for simulations of the primate visual cascade.





Figure 1. Fig 1. A) Synthetic parasol ON receptive fields (RFs). B) RF repulsion equalizes coverage. C) Linear fixed and nonlinear contrast gain control model temporal impulse responses. D, F) Parasol and midget unit responses for temporal frequency and contrast. E) Responses for varying contrasts. G) Responses for luminance onset and offset. H) Responses for drifting sinusoidal grating.
Acknowledgements

We thank Petri Ala-Laurila for insightful comment on model construction. This work was supported by Academy of Finland grant N:o 361816.


References

[1]https://doi.org/10.1038/341643a0
[2]https://doi.org/10.1016/0306-4522(84)90006-X
[3]https://doi.org/10.1002/cne.902890308
[4]https://doi.org/10.1080/713663221
[5]https://doi.org/10.1038/nature09424
[6]https://doi.org/10.1017/S0952523800008853
[7]https://doi.org/10.1017/S0952523899162151
[8]https://doi.org/10.1113/jphysiol.1987.sp016531
[9]https://doi.org/10.1016/j.neuron.2016.05.006
[10] https://doi.org/10.7554/eLife.38841
[11] https://doi.org/10.1523/JNEUROSCI.0793-21.2021
[12] https://doi.org/10.1038/nn.3534
[13] https://doi.org/10.1038/nn.2927



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P310: Feedback input to apical dendrites of L5 pyramidal cells leads to a shift towards a resonance state in V1 cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P310 Feedback input to apical dendrites of L5 pyramidal cells leads to a shift towards a resonance state in V1 cortex

Francescangelo Vedele*1, Margaux Calice2, Simo Vanni1

1Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
2Centre Giovanni Borelli - CNRS UMR 9010, Université Paris Cité, France

*Email: francescangelo.vedele@helsinki.fi
Introduction: To make sense of the abundance of visual information coming in from the outside world, cortical and subcortical structures operate on stored models of the environment that are constantly compared with new information [1]. The cortical structures for vision are tightly interconnected and rely on multiple subregions to capture different facets of information. The SMART model by Grossberg and Versace [2] provides a simulation framework for a circuit-level perspective on learning, expectation, and processing of visual information in the brain. While cellular details are well understood at the microscopic level, computational accounts linking visual system states to higher-order processes are scarce.



Methods: The macaque was chosen as a biological model because of its close evolutionary relationship to humans [3]. Computer simulations of macaque cortical patches were implemented using CxSystem2 [4,5], a cortical simulation framework based on Brian2 [6]. The SMART model includes cells in V1 (layers L2/3, L4e, L5, and L6), dendrites of compartmental neurons reaching L1, and thalamic specific, nonspecific, and reticular nuclei. Simulations were run for a duration of 2 seconds. Spike times and cell membrane voltages were monitored. Power spectral densities (PSDs) of the membrane voltage were obtained using Welch's method. A feedback current of 1.5x or 2.5x the rheobase was injected into the apical dendrites of L5 pyramidal cells (located in L1).
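As a rough illustration of this analysis step (not the study's code), the sketch below applies Welch's method to a toy membrane-voltage trace; the sampling rate, segment length, and signal are assumptions.

```python
# Minimal sketch of PSD estimation via Welch's method on a toy voltage trace.
import numpy as np
from scipy.signal import welch

fs = 10_000.0                      # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)      # 2 s simulation, as in the Methods
v = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)  # toy trace

f, psd = welch(v, fs=fs, nperseg=4096)  # PSD via Welch's method
peak = f[np.argmax(psd)]                # dominant frequency, e.g. gamma
print(f"PSD peak at ~{peak:.1f} Hz")
```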

Results: The SMART model was first simulated with bottom-up sensory input and a weak feedback current. In this state, all layers fire in short (~150 ms) bursts followed by longer periods of oscillatory activity (~500 ms). The PSD plots show a broad, low-frequency peak in the alpha/beta bands (up to 30 Hz) across layers. Upon injection of a stronger feedback current, the model shifts to a resonance mode characterized by higher firing rates and a broad PSD peak in the gamma range (20-70 Hz) across layers. Strong feedback input therefore shifts the state of the system from resting to a high-frequency resonance mode. This might be related to population synchrony, which may bind features in different parts of the visual field [7].

Discussion: The SMART model provides a flexible way to model cortical coordination and feedback. Our simulations show how weak input from higher cortical areas leaves the system in a disengaged state, akin to a mismatch between expectation and reality. By injecting a strong current to mimic feedback from higher cortical areas, the simulated system enters a resonant state, as in the biological brain, establishing a condition that supports learning and plasticity. While this model is informative for studying single-region cortical dynamics, we plan to integrate V2 and V5 with the current model of V1, aiming to simulate hierarchical cortical processing of visual information.






Acknowledgements
This work was supported by Academy of Finland project grant 361816.
References

[1] https://doi.org/10.1038/nrn2787
[2] https://doi.org/10.1016/j.brainres.2008.04.024
[3] https://doi.org/10.1093/cercor/bhz322
[4] https://doi.org/10.1162/neco_a_01120
[5] https://doi.org/10.1162/neco_a_01188
[6] https://doi.org/10.7554/eLife.47314
[7] https://doi.org/10.1038/338334a0
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P311: Adjustment of Vesicle Equation in the Modified Stochastic Synaptic Model to Correct Unexpected Behaviour in Frequency Response of Synaptic Efficacy
Tuesday July 8, 2025 17:00 - 19:00 CEST
P311 Adjustment of Vesicle Equation in the Modified Stochastic Synaptic Model to Correct Unexpected Behaviour in Frequency Response of Synaptic Efficacy

Ferney Beltran-Velandia*1,2,3, Nico Scherf2,3, Martin Bogdan1,2


1Neuromorphic Information Processing department, Leipzig University, Leipzig, Germany
2Center for Scalable Data Analytics and Artificial Intelligence ScaDS.AI, Dresden/Leipzig, Humboldtstrasse 25, Leipzig, Germany
3Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, Leipzig, Germany

*Email: beltran@informatik.uni-leipzig.de

Introduction
Synaptic Dynamics (SD) describes the plasticity properties of synapses on the timescale of milliseconds. Among different SD models, the Modified Stochastic Synaptic Model (MSSM) is a biophysical model that can simulate the SD mechanisms of facilitation and depression [1]. Further analysis of the parameters found in [2] points to an unexpected behaviour in the frequency response of the MSSM. This behaviour is also studied in the time domain, which motivates an adjustment to the dynamics of vesicle release. This correction leads to a version of the MSSM without the unexpected behaviour, better balancing the equations and allowing new sets of parameters to be found to simulate examples of facilitation and depression.

Methods
The MSSM represents the dynamics of synapses by modelling calcium dynamics, vesicle release, release probability, neurotransmitter buffering, and the postsynaptic contribution with differential equations and 10 parameters. In previous work [2], a pipeline was used to tune the parameters of the MSSM when simulating two types of synapses: pyramidal-to-interneuron (facilitation) and the calyx of Held (depression). The parameters are analysed using the frequency response of the synaptic efficacy, ranging from 1 to 100 Hz [3]. The unexpected behaviour is defined by the frequency at which a discontinuity appears. Further analysis in the time domain allows us to propose the adjusted MSSM, which corrects this behaviour and balances its equations.
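The MSSM equations themselves are not reproduced in this abstract; as a hedged sketch of the facilitation/depression phenomenology it targets, the snippet below implements the classic short-term plasticity model of reference [3], with illustrative parameters.

```python
# Not the MSSM itself (its equations are not given here): a sketch of the
# Tsodyks-Markram short-term plasticity model from reference [3], which
# captures the same facilitation/depression phenomenology.
import numpy as np

def tm_synapse(spike_times, U=0.2, tau_rec=0.2, tau_facil=0.5):
    """Return the per-spike release efficacy for a facilitating synapse."""
    x, u, last = 1.0, U, None          # resources, utilization, last spike
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            # resources recover toward 1 after the previous release ...
            x = 1 - (1 - x * (1 - u)) * np.exp(-dt / tau_rec)
            # ... while utilization facilitates and decays toward U
            u = U + u * (1 - U) * np.exp(-dt / tau_facil)
        out.append(u * x)              # fraction released at this spike
        last = t
    return out

print(tm_synapse(np.arange(0, 1, 0.05)))  # 20 Hz train: efficacy per spike
```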

Results
Applying the frequency response analysis to the parameters for the studied SD mechanisms shows that some responses exhibit the unexpected behaviour (Fig. 1a-b). In the time domain, this behaviour is associated with an increase in released neurotransmitters even though the number of vesicles is at its steady state (Fig. 1c). An adjustment to the vesicle release equation corrects this behaviour by making the input contribution dependent on the current number of vesicles (Fig. 1d). To validate our approach, the pipeline was run for the adjusted MSSM, finding 6000 new sets of parameters for both SD mechanisms. The frequency responses for the new parameters are depicted in Fig. 1e-f and show the expected behaviour.

Discussion
The adjustment of the MSSM not only corrects the unexpected behaviour in the frequency and time domains but also balances the vesicle release equation: in the original model, the probability of release had the same units as the vesicles; with the adjustment, the probability of release recovers its dimensionless nature. The new parameter distributions show that some parameters have more influence in distinguishing between facilitation and depression, especially those associated with the probability of release and neurotransmitter buffering. Finally, this work represents a step toward the integration of the MSSM into spiking neural networks, enhancing their computational capabilities with the properties of synaptic dynamics.





Figure 1. Unexpected behaviour of the MSSM: a-b) frequency responses with the unexpected behaviour; in red, an example of the discontinuity in the efficacy. c) Time response: N(t) increases even though V(t) is at steady state, causing the unexpected behaviour. d) Time response of the adjusted MSSM showing the correction. e-f) Frequency responses of the new parameters with the unexpected behaviour corrected.
Acknowledgements
I want to thank the team of the Neuromorphic Information Processing Group, especially Patrick Schoefer and Dominik Krenzer, for all the fruitful discussions. This work was partially funded by the German Federal Ministry of Education and Research (BMBF) within the project ScaDS.AI Dresden/Leipzig (BMBF grant 01IS18026B).
References
[1] El-Laithy, K. (2012). Towards a brain-inspired information processing system: Modelling and analysis of synaptic dynamics. LAP Lambert Academic Publishing.
[2] Beltran, F., Scherf, N., & Bogdan, M. (2025). A pipeline based on differential evolution for tuning parameters of synaptic dynamics models. (To appear in Proceedings of the 33rd ESANN)
[3] Markram, H., Wang, Y., & Tsodyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences, 95 (9), 5323-5328. doi: 10.1073/pnas.95.9.5323
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P312: Modeling Effects of Norepinephrine on Respiratory Neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P312 Modeling Effects of Norepinephrine on Respiratory Neurons

Sreshta Venkatakrishnan*1, Andrew Kieran Tryba2, Alfredo J. Garcia, 3rd3, and Yangyang Wang1

1Department of Mathematics, Brandeis University, Waltham, MA, USA
2Department of Pediatrics, Section of Neurology, The University of Chicago, Chicago, IL, USA
3Institute for Integrative Physiology, The University of Chicago, Chicago, IL, USA

*Email: sreshtav@brandeis.edu


Introduction
The preBötzinger complex (pBC) within the mammalian brainstem, composed of intrinsically bursting and spiking neurons, generates the neural rhythm that drives the inspiratory phase of respiration. Norepinephrine (NE), a neuromodulator, differentially modulates synaptically isolated pBC neurons [1]. In cadmium (Cd)-insensitive N-bursting neurons, NE increases burst frequency without affecting burst duration. In Cd-sensitive C-bursting neurons, NE increases burst duration while minimally affecting frequency. NE also induces conditional bursting in tonic spiking neurons, while silent neurons remain inactive in the presence of NE. In this work, we propose a novel mechanism to simulate the effects of NE on single pBC neurons.

Methods
The pBC neuron model we consider is a single-compartment dynamical system with Hodgkin-Huxley-style conductances, incorporating membrane potential and calcium dynamics, adapted from previous work [2,3,4]. Of particular interest among the ionic currents in this model are two candidate burst-generating currents: the Cd-insensitive persistent sodium current (INaP) and the Cd-sensitive calcium-activated nonspecific cationic current (ICAN). Building on previous efforts to model NE via modulation of ICAN [2,3] and on experimental evidence in [5], we propose that NE application in the model also increases the flux of Ca2+ between the cytosol and the ER, modelled via inositol trisphosphate (IP3).
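For orientation only, the sketch below gives generic Hodgkin-Huxley-style formulations of the two candidate burst-generating currents named above; the functional forms and parameter values are common in the pBC modelling literature but are not the authors' exact equations.

```python
# Generic sketch (not the authors' exact model) of the two candidate
# burst-generating currents; all parameter values are placeholders.
import numpy as np

def i_nap(v, h, g_nap=2.8, e_na=50.0, theta_m=-40.0, sigma_m=-6.0):
    """Persistent sodium current with instantaneous activation m_inf(v)."""
    m_inf = 1.0 / (1.0 + np.exp((v - theta_m) / sigma_m))
    return g_nap * m_inf * h * (v - e_na)

def i_can(v, ca, g_can=0.7, e_can=0.0, k_can=0.74, n=0.97):
    """Ca2+-activated nonspecific cation current with Hill-type Ca dependence."""
    act = ca**n / (ca**n + k_can**n)
    return g_can * act * (v - e_can)

print(i_nap(-50.0, 0.6), i_can(-50.0, 0.5))  # currents at a sample state
```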
Results
The most important finding of this study is the identification of potential mechanisms underlying the NE-mediated induction of CAN-dependent bursting in tonic spiking neurons. Our model predicts that this conditional bursting requires an increase in both IP3 and ICAN. This mechanism also induces an increase in N-burst frequency and C-burst duration, while N-burst duration remains unaltered. While modulating ICAN increases C-burst frequency in our model, the opposing effect of modulating IP3 effectively counters this increase and maintains the frequency. Furthermore, we identify discrete parameter regimes in which silent neurons remain inactive in NE. These results are consistent with [1].
Discussion
Conditional bursting has been described previously in rhythmic networks; however, the underlying mechanisms are often unknown. Our model predicts a new mechanism involving NE signaling that elevates both IP3 and ICAN in a subset of pBC neurons. These predictions need to be tested experimentally by blocking either IP3 or ICAN and testing whether subsequent NE modulation can no longer recruit this subset of pBC neurons to burst. Moreover, while our model predictions for bursting neurons mostly agree with the experiments in [1], we also notice some discrepancies with respect to burst frequency and duration. Further investigation is required to analyze and understand these disparities.






Acknowledgements
This work has been supported by NIH R01DA057767 (CRCNS: Evidence-based modeling of neuromodulatory action on network properties), granted to Yangyang Wang (PI) and Alfredo Garcia at UChicago.
References
[1] https://doi.org/10.1152/jn.01308.2005
[2] https://doi.org/10.1007/s10827-010-0274-z
[3] https://doi.org/10.1007/s10827-012-0425-5
[4] https://doi.org/10.1063/1.5138993
[5] https://doi.org/10.1152/ajpendo.1985.248.6.E633
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P313: Individual differences in neural representations of face dimensions: Insights from Super-Recognisers
Tuesday July 8, 2025 17:00 - 19:00 CEST
P313 Individual differences in neural representations of face dimensions: Insights from Super-Recognisers

Martina Ventura*1, Tijl Grootswagers1,3, Manuel Varlet1,2, David White2, James D. Dunn2, Genevieve L. Quek1

1The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
2School of Psychology, Western Sydney University, Sydney, Australia
3School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney Australia
4School of Psychology, The University of New South Wales, Sydney, Australia

*E-mail: martina.ventura@westernsydney.edu.au

Introduction
Face processing is crucial for social interaction, with faces conveying information about identity, emotions, sex, age, and intentions [1]. Recent research has revealed significant individual differences in face recognition ability, with some people displaying exceptional face recognition skills; these individuals are known as Super-Recognisers [2,3]. However, the brain mechanisms underpinning their superior ability remain unknown, including whether their exceptional face recognition is restricted to identity or also extends to other face dimensions such as sex and age.

Methods
Here we use electroencephalography (EEG) to investigate the neural processes underlying face dimension representations in Super-Recognisers (N = 12) and Typical-Recognisers (N = 17). We recorded 64-channel EEG while participants viewed 400 naturalistic face images (40 distinct identities stratified by sex, age, and ethnicity) in a rapid 5 Hz randomized stream. We used multivariate pattern analysis (MVPA) to measure the strength and temporal dynamics of the neural encoding of different face dimensions in both Super- and Typical-Recognisers.
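A minimal sketch of time-resolved MVPA decoding of the kind described here, assuming a cross-validated linear classifier applied independently at each time point; the data shapes and labels are synthetic placeholders, not the study's recordings.

```python
# Sketch of time-resolved decoding: one cross-validated classifier per time
# point, yielding a decoding-accuracy time course (chance = 1/40 here).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_trials, n_channels, n_times = 400, 64, 120
X = np.random.randn(n_trials, n_channels, n_times)  # trials x channels x time
y = np.repeat(np.arange(40), 10)                    # 40 identities x 10 trials

accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])  # decoding accuracy at each time point
```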


Results
Our results showed that face identity decoding was stronger for Super-Recognisers than Typical-Recognisers starting around 300 ms, a time window typically associated with identity-related processing. In contrast, no differences were found between the groups' decoding profiles for face age, face sex, or face ethnicity.
Discussion
These results suggest that the Super-Recogniser advantage may be limited to face identity processing, rather than reflecting a general advantage in face dimension processing. These findings provide a crucial first step toward understanding the neural mechanisms underlying their exceptional face recognition ability.





Acknowledgements
We sincerely appreciate the time and effort of all the participants in this study. Your willingness to take part was essential in making this research possible. Thank you for your valuable contribution.
References
1. Tsao, D. Y., & Livingstone, M. S. (2008). Mechanisms of face perception. Annual Review of Neuroscience, 31, 411-437. https://doi.org/10.1146/annurev.neuro.30.051606.094238

2. Russell, R., Duchaine, B., & Nakayama, K. (2009). Super-recognizers: people with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16(2), 252-257. https://doi.org/10.3758/PBR.16.2.252

3. Dunn, J. D., Summersby, S., Towler, A., Davis, J. P., & White, D. (2020). UNSW Face Test: A screening tool for super-recognizers. PLoS ONE, 15(11), e0241747. https://doi.org/10.1371/journal.pone.0241747
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P314: Neural compensation drives functional resilience in a cerebellar model of schizophrenia
Tuesday July 8, 2025 17:00 - 19:00 CEST
P314 Neural compensation drives functional resilience in a cerebellar model of schizophrenia

Alberto A. Vergani*1, Pawan Faris1, Claudia Casellato1, Marialaura De Grazia1 and Egidio U. D'Angelo1,2

1Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
2Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: albertoarturo.vergani@unipv.it

Introduction

Schizophrenia (SZ) affects ~1% of the global population (~24 million) [1]. While cortical and subcortical alterations are well documented, the cerebellum's role in cognitive dysfunction (CAS) remains underexplored [2]. SZ-related cerebellar degeneration involves neuron loss, reduced dendritic complexity, and weakened connectivity [3], often countered by compensatory hyperconnectivity [4-8]. Following the 'cognitive dysmetria' hypothesis [9], this study quantifies structural and functional changes in a cerebellar network model under atrophy and compensatory synaptogenesis [10,11].


Methods
Using the Brain Scaffold Builder (BSB, [12]), we implemented an atrophy algorithm in a mouse cerebellum model, modulating cellular and network changes via the atrophy factor (AF, 0-60%). By preserving electrical cell properties, it simulated schizophrenic neurodegeneration while ensuring anatomical plausibility. Atrophy induced morphological shrinkage, dendritic pruning, radius reduction, neural density loss, and cortical thinning. Changes were quantified via apoptosis, the dendritic complexity index (DCI, [13]), and connectivity metrics. Compensation via synaptogenesis increased the synapse count with AF. The altered connectome (~30K neurons, EGLIF, [14]) was simulated in NEST [15] under baseline conditions (4 Hz mossy fiber stimulation) to assess firing rate changes.
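A toy sketch, under stated assumptions, of the two manipulations described above; the BSB implementation is far more detailed, so this only illustrates the bookkeeping of pruning with AF and compensatory synaptogenesis among surviving neuron pairs.

```python
# Illustrative sketch (not the BSB algorithm): prune cells with probability
# AF, then scale up surviving synapses to compensate for the lost count.
import numpy as np

def atrophy_and_compensate(conn, af, rng=np.random.default_rng(0)):
    """conn: (N, N) synapse-count matrix; af: atrophy factor in [0, 1]."""
    n = conn.shape[0]
    survivors = rng.random(n) > af                  # apoptosis: drop cells
    pruned = conn[np.ix_(survivors, survivors)].astype(float)
    lost = conn.sum() - pruned.sum()                # synapses lost to atrophy
    pruned *= 1.0 + lost / max(pruned.sum(), 1.0)   # synaptogenesis among
    return pruned                                   # surviving pairs

W = np.random.poisson(2.0, size=(100, 100))
print(W.sum(), atrophy_and_compensate(W, af=0.25).sum())  # totals compared
```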

Results
Atrophy altered the network structure, reducing neurons, dendritic complexity, connectivity, and synapse count. Compensation offset this by increasing synapses between surviving neuron pairs. Functional changes emerged from the structural alterations, with excitability rising, reversing at ~10% AF, and crossing zero at ~25% AF. Granule and Golgi cells showed opposite trends, while Purkinje, stellate, and basket cells showed similar firing changes. DCN-I neurons gradually reduced their activity, with compensation slightly delaying the decline. DCN-P neurons exhibited the highest resilience until ~25% AF, where compensation collapsed, triggering a firing surge that disrupted output to the telencephalon.

Discussion
This study examined cerebellar network degeneration while preserving electrical properties, highlighting structural changes, synaptic reorganization, and atrophy-related firing dynamics. Synaptic compensation mitigates pathology-driven neuronal damage, with a transition from hyper- to hypo-excitability, particularly in DCN-P, resembling Stern's inflection point in neurodegenerative resilience [16]. Future work will explore atrophy-compensation effects on stimulus decoding and learning (eye-blink conditioning, [17]), integrate with The Virtual Brain [18], compare with MEA recordings [19], and test therapeutic strategies such as TMS and pharmacological interventions to enhance cognitive reserve.





Acknowledgements
Work supported by #NEXTGENERATIONEU (NGEU) and funded by MUR, National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) – A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022). The VBT Project has received funding from the European Union's Research and Innovation Program Horizon Europe under grant agreement No 101137289.
References
[1] https://doi.org/10.1001/jamapsychiatry.2019.3360
[2] https://doi.org/10.3389/fncel.2024.1386583
[3] https://doi.org/10.1016/j.biopsych.2008.01.003
[4] https://doi.org/10.1093/schbul/sbac120
[5] https://doi.org/10.1038/s41398-023-02512-4
[6] https://doi.org/10.1016/j.pscychresns.2018.03.010
[7] https://doi.org/10.1016/j.schres.2022.12.041
[8] https://doi.org/10.1038/s41386-018-0059-z
[9] https://doi.org/10.1093/oxfordjournals.schbul.a033321
[10] https://doi.org/10.1007/s12311-019-01091-9
[11] https://doi.org/10.1523/JNEUROSCI.0379-23.2023
[12] https://doi.org/10.1038/s42003-022-04213-y
[13] https://doi.org/10.1038/s42003-023-05689-y
[14] https://doi.org/10.3389/fninf.2018.00088
[15] https://doi.org/10.5281/ZENODO.4018718
[16] https://doi.org/10.1016/j.neurobiolaging.2022.10.015
[17] https://doi.org/10.3389/fpsyt.2015.00146
[18] https://doi.org/10.1093/nsr/nwae079
[19] https://doi.org/10.1371/journal.pcbi.1004584
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P315: Towards brain scale simulations using NEST GPU
Tuesday July 8, 2025 17:00 - 19:00 CEST
P315 Towards brain scale simulations using NEST GPU

José Villamar*1,2, Gianmarco Tiddia3, Luca Sergi3,4, Pooja Babu1,5, Luca Pontisso6, Francesco Simula6, Alessandro Lonardo6, Elena Pastorelli6, Pier Stanislao Paolucci6, Bruno Golosio3,4, Johanna Senk1,7

1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, Monserrato, Italy
4Dipartimento di Fisica, Università di Cagliari, Monserrato, Italy
5Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Jülich Research Centre, Jülich, Germany
6Istituto Nazionale di Fisica Nucleare, Sezione di Roma, Roma, Italy
7Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
*Email: j.villamar@fz-juelich.de


Introduction

Efficient simulation of large-scale spiking neuronal networks is important for neuroscientific research, and both the simulation speed and the time it takes to instantiate the network in computer memory are key performance factors. NEST GPU is a GPU-based simulator under the NEST Initiative, written in CUDA-C++, that has demonstrated high simulation speeds with models of various network sizes on single-GPU and multi-GPU systems [1,2,3]. On the path toward models of the whole brain, neuroscientists show increasing interest in studying networks that are larger by several orders of magnitude. Here, we show the performance of our simulation technology with a scalable network model across multiple network sizes approaching the magnitude of the human cortex.
Methods
For this, we propose a novel method to efficiently instantiate large networks on multiple GPUs in parallel. Our approach relies on the deterministic initial state of pseudo-random number generators (PRNGs). While requiring synchronization of network-construction directives between MPI processes and a small memory overhead, this approach enables dynamic neuron creation and connection at runtime. The method is evaluated with a two-population recurrently connected network model designed for benchmarking an arbitrary number of GPUs while maintaining first-order network statistics across scales.
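The sketch below illustrates the core idea under an assumed, hypothetical block layout: because PRNG draws are deterministic given a seed, any process can regenerate any block of the connectivity locally instead of communicating connection lists.

```python
# Conceptual sketch (not NEST GPU's implementation): seeding a PRNG from
# (global seed, source block, target block) lets every rank regenerate the
# same connections on demand, with no inter-process communication.
import numpy as np

def block_connections(seed, src_block, tgt_block, n_per_block, k):
    """Regenerate the k connections of one (source, target) block."""
    rng = np.random.default_rng([seed, src_block, tgt_block])  # deterministic
    sources = rng.integers(0, n_per_block, size=k)
    targets = rng.integers(0, n_per_block, size=k)
    return sources, targets

# Any rank calling this with the same arguments obtains identical draws:
a = block_connections(42, src_block=3, tgt_block=7, n_per_block=1000, k=5)
b = block_connections(42, src_block=3, tgt_block=7, n_per_block=1000, k=5)
assert all((x == y).all() for x, y in zip(a, b))
```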
Results
The benchmarking model was tested during an exclusive reservation of the LEONARDO Booster cluster. While keeping the number of neurons and incoming synapses per neuron constant per GPU, we performed several simulation runs exploiting in parallel from 400 to 12,000 (full system) GPUs. Each GPU device hosted approximately 281 thousand neurons and 3.1 billion synapses. Our results show network construction times of less than a second using the full system and stable dynamics across scales. At full system scale, the network model comprised approximately 3.37 billion neurons and 37.96 trillion synapses (~25% of the human cortex).

Discussion
To conclude, our novel approach enabled the instantiation of network models at magnitudes nearing human cortex scale while keeping construction times fast, on average 0.5 s across trials. The stability of dynamics and performance across scales obtained with our model is a proof of feasibility, paving the way for biologically more plausible and detailed brain-scale models.




Acknowledgements
We acknowledge ISCRA for awarding access to the LEONARDO supercomputer (EuroHPC Joint Undertaking) via the BRAINSTAIN - INFN Scientific Committee 5 project, hosted by CINECA (Italy); HiRSE_PS, Helmholtz Platform for Research Software Engineering - Preparatory Study (2022-01-01 - 2023-12-31); the Horizon Europe Grant 101147319; the Joint Lab SMHB; and the FAIR CUP I53C22001400006 Italian PNRR grant.
References

[1] https://doi.org/10.3389/fncom.2021.627620
[2] https://doi.org/10.3389/fninf.2022.883333
[3] https://doi.org/10.3390/app13179598


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P316: Characterization of thalamic stimulus on a cortical column
Tuesday July 8, 2025 17:00 - 19:00 CEST
P316 Characterization of thalamic stimulus on a cortical column

Pablo Vizcaíno-García*1,2,3, Fernando Maestú1,3, Alireza Valizadeh1, Gianluca Susi1,2,3

1Zapata-Briceño Institute for Human Intelligence, Madrid, Spain.
2Department of Structure of Matter, Thermal Physics and Electronics, School of Physics, Complutense University of Madrid, Madrid, Spain
3Center for Cognitive and Computational Neuroscience, Complutense University of Madrid, Madrid, Spain

*Email: pabvizca@ucm.es

Introduction

Cortical columns are fundamental organizational units in cerebral cortical processing and development [1]. They regularly receive external stimuli from both higher-order areas and the thalamus. Different hypotheses have been proposed regarding the function of the thalamus: it is considered to act as a generator of the alpha rhythm [2], and it has also been proposed to play a role in sensory gating. In this work we focus on the latter process. We investigate how stimuli propagate from one layer of the cortical column to the entire unit, examining how alpha and gamma rhythms may be disrupted or enhanced across the different layers. We build upon the cortical column design of Potjans & Diesmann [3].

Methods
We implemented an interconnected set of fully spiking cortical columns, each encompassing 80,000 neurons and 0.3 billion synapses. The connections were derived from experimental data, utilising diffusion magnetic resonance imaging. The column's background stimulus was modified so that the network starts in a high-coherence state, which makes the characterisation of the response easier. This characterisation was done by injecting a pulse packet into L4E and obtaining phase response curves (PRCs), which characterise the delays produced by the same stimulus when injected at different phases of the activity [4]. A 1 ms wide stimulus was injected at different phases of the gamma period of L4E.
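A minimal sketch of the phase-identification step, assuming a gamma band-pass filter followed by the Hilbert transform; the population-rate trace and filter settings below are placeholders.

```python
# Sketch: band-pass the population rate in the gamma band (45-80 Hz) and
# take the Hilbert phase, so pulses can be injected at chosen phases.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
rate = np.sin(2 * np.pi * 60 * t) + 0.3 * np.random.randn(t.size)  # toy L4E

b, a = butter(4, [45, 80], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, rate)             # gamma-band component
phase = np.angle(hilbert(gamma))         # instantaneous phase in [-pi, pi]
# Injecting a pulse packet when the phase crosses a chosen value, and
# measuring the delay of the next activity maximum, yields one PRC point.
```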
Results
The phases were identified by filtering in the gamma band (45-80 Hz) and applying the Hilbert transform to the cortical column activity in the absence of a stimulus. The resulting PRC curves can be observed in Fig. 1, which presents both the raster plot of a stimulated cortical column and the resulting PRC. Each dot represents one spike of one neuron at the appropriate time, and the superimposed line is the gamma-filtered population activity. From this figure we observe a sudden halt in the gamma band after stimulation of L4E. The PRC was computed as an ensemble average over 10 trials. We highlight L23E as the only population that is consistently delayed by the input stimulus.
Discussion
From these early results, two main facts have become evident. First, the burst-suppression phenomenon emerges as a response to stimulating L4E, a layer that mostly receives its inputs from the thalamic nuclei. Second, the PRC of L23E shows the largest time lag. Another avenue to explore is the variation of these curves in a less coherent network state. This work will seek to elucidate the mechanisms behind both of these phenomena, applying and comparing the results with the well-studied thalamocortical feedback loop. The investigation will contribute to a better understanding of cortical column dynamics and will additionally help clarify the effects of the communication between the thalamus and the cortex.




Figure 1. Left: Raster plot of cortical column activity. Each dot represents a spike, and each colour a neuronal population. Superimposed on the plot is the activity of each population, computed using a Gaussian window over spike times and normalised for the plot. Right: Phase response curve, measuring the time at which each population reaches its first activity maximum after stimulus injection into L4E.
Acknowledgements
This work was supported by the Zapata-Briceño Institute of Science.
References
1. https://doi.org/10.1016/B978-0-12-814411-4.00005-6
2. https://doi.org/10.34734/FZJ-2023-02822
3. https://doi.org/10.1093/cercor/bhs358
4. https://doi.org/10.3389/fninf.2010.00006
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P317: Critical dynamics improve performance in deep learning
Tuesday July 8, 2025 17:00 - 19:00 CEST
P317 Critical dynamics improve performance in deep learning


Simon Vock*1,2,3,4,5, Christian Meisel1,2,4,5,6

1Computational Neurology Lab, Department of Neurology, Charité – Universitätsmedizin, Berlin, Germany
2Berlin Institute of Health, Berlin, Germany
3Faculty of Life Sciences, Humboldt University Berlin, Germany
4Bernstein Center for Computational Neuroscience, Berlin, Germany
5NeuroCure Cluster of Excellence, Charité – Universitätsmedizin, Berlin, Germany
6Center for Stroke Research, Berlin, Germany

*Email: simon.vock@charite.de
Introduction

Deep neural networks (DNNs) have revolutionized AI, yet their vast parameter space makes training difficult, often leading to inefficiencies or failure. Their optimization remains largely heuristic, relying on trial-and-error design [1,2]. In biological networks, recent evidence suggests that critical phase transitions, which balance signal propagation to avoid die-out or runaway excitation, are key to effective learning [3,4]. Inspired by this, we analyze 80 modern DNNs and uncover a fundamental link between performance and criticality, unifying diverse architectures under a single theoretical perspective. Building on this, we propose a novel training approach that guides DNNs toward criticality, enhancing performance on multiple datasets.
Methods
We characterize criticality in DNNs using three key metrics: a maximal dynamic range Δ [5], a branching parameter σ = 1 [6], and a largest Lyapunov exponent λ₀ = 0 [7]. Our statistical analysis employs multiple tests, including Spearman's rank correlation, the Wilcoxon signed-rank test, the Mann-Whitney U test, and linear mixed-effects models. We investigate 80 highly optimized DNNs from TorchVision pre-trained on the ImageNet-1k dataset [8], and we use the Modified National Institute of Standards and Technology (MNIST) dataset, a standard benchmark for computer vision. Building on our findings, we develop a novel training objective that specifically drives models toward criticality during training.
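As one concrete example of these metrics, the sketch below estimates the branching parameter σ as the mean ratio of activity in consecutive time bins, in the spirit of reference [6]; the activity trace is a toy placeholder, not a DNN's activations.

```python
# Sketch of a branching-parameter estimate: sigma ~ mean ratio of descendant
# to ancestor activity in consecutive bins; sigma = 1 marks criticality.
import numpy as np

def branching_parameter(activity):
    """activity: 1D array of active-unit counts per time bin."""
    a = np.asarray(activity, dtype=float)
    ancestors = a[:-1]
    mask = ancestors > 0                 # ratios are undefined for empty bins
    return np.mean(a[1:][mask] / ancestors[mask])

spikes = np.random.poisson(5.0, size=10_000)  # toy stationary activity
print(branching_parameter(spikes))            # roughly 1 for this toy case
```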
Results
We derive a set of measures quantifying the distance to criticality in DNNs and analyze 80 pre-trained DNNs from TorchVision (ImageNet-1k). We find that over the last decade, as test accuracies increased, networks became significantly more critical. Our analysis shows that test accuracies are highly correlated with criticality and model size; a linear mixed-effects model shows that distance to criticality and model size explain 60% of the variance in accuracy (R²). A novel training objective that penalizes distance to criticality improves MNIST accuracy by up to 0.8% compared to highly optimized DNNs. In a continual-learning setting using ImageNet, this approach enhances neuronal plasticity and outperforms established training techniques.
Discussion
Analyzing 80 diverse DNNs developed over the last decade, we uncover two key ingredients for high-performance deep learning: Network size and critical neuron dynamics. We find that modern deep learning techniques implicitly enhance criticality, driving recent advancements in the field. We show how improved DNN architectures and training approaches promote criticality, and further introduce a novel training method that enforces criticality during training. This significantly boosts accuracy on MNIST. Additionally, our method enhances the network’s plasticity, improving adaptability to new information in continual learning. We expect these findings to generalize to other models and tasks, offering a path toward more efficient AI.



Acknowledgements

References
1. Glorot, X., & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 9, 249-256.
2.https://doi.org/10.1038/nature14539
3.https://doi.org/10.1523/JNEUROSCI.23-35-11167.2003
4.https://doi.org/10.1016/0167-2789(90)90064-V
5.https://doi.org/10.1523/JNEUROSCI.3864-09.2009
6.https://doi.org/10.1103/PhysRevLett.94.058101
7.https://doi.org/10.1103/PhysRevLett.132.057301
8.https://doi.org/10.1109/CVPR.2009.5206848
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P318: Algorithmic solutions for spike-timing dependent plasticity in large-scale network simulations with long axonal delays
Tuesday July 8, 2025 17:00 - 19:00 CEST
P318 Algorithmic solutions for spike-timing dependent plasticity in large-scale network simulations with long axonal delays

Jan N. Vogelsang*1,2, Abigail Morrison*2,3, Susanne Kunkel1

1 Neuromorphic Software Ecosystems (PGI-15), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany

*Email: j.vogelsang@fz-juelich.de
Introduction

The precise timing of neuronal communication is a cornerstone of understanding learning and synaptic plasticity. Spike-timing dependent plasticity (STDP) models in particular rely on the precise temporal difference between pre- and post-synaptic spikes to adjust synaptic strength, where both the diverse axonal propagation delays and the dendritic backpropagation delays play a crucial role in determining this timing. However, neural simulators such as NEST have traditionally represented transmission delays between neurons as a single aggregate delay value because of algorithmic challenges. We present two simulation frameworks addressing these challenges and validate them across a set of small- to large-scale benchmarks.

Methods
The NEST simulator reference implementation currently treats the entire delay as dendritic, which allows synaptic strength adjustments to be performed immediately after the occurrence of a pre-synaptic spike, avoiding costly buffering of spikes. This is an acceptable approximation for small networks but leads to inaccuracies when modeling long-range connections. In this framework, introducing axonal delays causes causality issues: at the time a pre-synaptic spike is processed, post-synaptic spikes that occur only in future time steps might reach the synapse before that spike when axonal delays predominate. To mitigate this issue, one must either correct the weight upon the later occurrence of such post-synaptic spikes or postpone the STDP update.
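A conceptual sketch, not NEST's implementation, of the postponement strategy: pre-synaptic spikes are buffered until their axonal arrival time has passed, at which point all relevant post-synaptic spikes are known and the deferred STDP update can be applied safely.

```python
# Toy sketch of deferring STDP updates until the spike's axonal arrival time
# has been simulated, so no causality is violated.
import heapq

pending = []  # min-heap of (arrival_time, synapse_id) awaiting STDP updates

def on_presynaptic_spike(t_pre, syn_id, d_ax):
    """Buffer the spike until t_pre + d_ax, its arrival at the synapse."""
    heapq.heappush(pending, (t_pre + d_ax, syn_id))

def process_until(t_now, post_spike_history, apply_stdp):
    """Once t_now passes an arrival time, all relevant post-synaptic spikes
    are in the history, so the deferred update is applied correctly."""
    while pending and pending[0][0] <= t_now:
        arrival, syn_id = heapq.heappop(pending)
        apply_stdp(syn_id, arrival, post_spike_history)
```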
Results
Both approaches were implemented and rigorously benchmarked in terms of runtime efficiency and memory footprint for varying synaptic delays and delay partitions. Correcting faulty synaptic updates achieves exceptional performance for fractions of axonal delay equal to or lower than the corresponding dendritic one. Only in the case of predominant and long axonal delays does it start to be outperformed by the alternative approach, which, however, required fundamental changes to the simulation framework to enable efficient buffering of individual spikes at the synapse level. Benchmarks also show that the buffering approach has a negative impact on performance for simulations not involving STDP dynamics, unlike the correction-based approach.
Discussion
Although different axonal and dendritic contributions are known to bias the synaptic drift towards either systematic potentiation or depression, there is a lack of simulation studies investigating the effects on network dynamics and learning in large neuronal systems. The ability to differentiate between axonal and dendritic delays represents a significant advance in neural simulation technology, as it addresses a long-standing limitation in spike-timing dependent plasticity modeling in large-scale, distributed simulations and enables future research in learning and plasticity, in particular, investigations of brain-scale models with STDP faithfully representing heterogeneous long axonal delays between areas.




Acknowledgements
I want to thank Dennis Terhorst and Anno Kurth for assistance in benchmarking and running all the required jobs on the HPC systems.
References
-
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P319: Interactions between functional microcircuits involving three inhibitory interneuron subtypes for the surround modulation in V1
Tuesday July 8, 2025 17:00 - 19:00 CEST
P319 Interactions between functional microcircuits involving three inhibitory interneuron subtypes for the surround modulation in V1

Nobuhiko Wagatsuma*1, Tomoki Kurikawa2, Sou Nobukawa3,4

1Faculty of Science, Toho University, Funabashi, Chiba, Japan
2Future University Hakodate, Hakodate, Hokkaido, Japan
3Department of Computer Science, Narashino, Chiba Institute of Technology, Chiba, Japan
4Department of Preventive Intervention for Psychiatric Disorders, National Institute of Mental Health, National Center of Neurology and Psychiatry, Kodaira, Tokyo, Japan

*Email: nwagatsuma@is.sci.toho-u.ac.jp

Introduction
A functional microcircuit of V1 for interpreting the external world resides in layers 2/3 and consists of excitatory pyramidal (Pyr) neurons and three inhibitory interneuron subtypes: parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal polypeptide (VIP). Recent physiological and computational studies suggest a structured organization of this microcircuit and distinct roles of inhibitory interneuron subtypes in modulating neural activity for visual perception [1,2]. Interactions between these microcircuits across receptive fields are crucial for integrating larger visual regions and forming perception, yet the precise structures and interneuron subtypes mediating these interactions remain unclear.

Methods
We developed a computational microcircuit model of the functional unit of biologically plausible visual cortical layers 2/3 that combines excitatory Pyr neurons and three inhibitory interneuron subtypes, and we explored the role of specific inhibitory interneuron subtypes in mediating the interactions between two such microcircuits via lateral inhibition across receptive fields (Fig. 1A). We assumed that the receptive fields of these units, which share common orientation selectivity, are spatially adjacent in the visual field. In this study, the two functional microcircuits interacted with each other via lateral inhibition from excitatory Pyr neurons in one unit to PV or SOM inhibitory interneurons in the other.
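A heavily simplified rate-model sketch of this circuit layout, assuming illustrative weights rather than the spiking model's parameters: two Pyr/PV/SOM/VIP units coupled by lateral inhibition from the Pyr population of one unit onto the SOM population of the other.

```python
# Toy rate model of two Pyr/PV/SOM/VIP units with cross-unit Pyr->SOM
# lateral coupling; weights are placeholders for illustration only.
import numpy as np

def step(r, inp, w_lat=0.8, dt=0.01, tau=0.02):
    """r: (2, 4) rates for [Pyr, PV, SOM, VIP] in each unit; inp: (2,)."""
    W = np.array([[ 1.0, -1.2, -1.0,  0.0],   # onto Pyr
                  [ 1.5, -1.0,  0.0,  0.0],   # onto PV
                  [ 1.0,  0.0,  0.0, -1.0],   # onto SOM
                  [ 1.0,  0.0, -0.5,  0.0]])  # onto VIP
    drive = r @ W.T                           # within-unit recurrence
    drive[0, 2] += w_lat * r[1, 0]            # Pyr(unit 2) -> SOM(unit 1)
    drive[1, 2] += w_lat * r[0, 0]            # Pyr(unit 1) -> SOM(unit 2)
    drive[:, 0] += inp                        # feedforward input to Pyr
    return r + dt / tau * (-r + np.maximum(drive, 0.0))

r = np.zeros((2, 4))
for _ in range(1000):
    r = step(r, inp=np.array([1.0, 1.0]))     # "large stimulus": both units
```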
Results
We performed simulations of the model with inputs mimicking the small and large visual stimuli used in the physiological experiment [3]. We assumed that the small stimulus was confined to the receptive field of a single unit, whereas the large stimulus extended across the receptive fields of two microcircuits. Model simulations with the large visual stimulus implied that lateral inhibition from Pyr neurons in one microcircuit to SOM interneurons in the other preferentially induced neuronal firing at beta (13-30 Hz) frequencies, in agreement with physiological responses to surround suppression in V1 [3]. By contrast, the model with lateral inhibition mediated by PV interneurons exhibited modulation patterns distinct from the physiological results.
Discussion
Our model reproduced characteristic neuronal activities in V1 induced by the surround modulation when the lateral inhibition across the receptive fields was mediated by SOM interneurons. Our results of model simulations suggested the specific role of SOM interneurons in the long-range lateral interactions across receptive fields in V1, which might contribute to the generation of surround modulation.



Figure 1. (A) Proposed microcircuit model. The two microcircuits interact with each other via lateral connections from Pyr neurons in one unit to PV or SOM interneurons in the other. (B) Simulation results. Black and blue lines indicate the oscillatory responses of the model with lateral inhibition mediated by SOM and PV interneurons, respectively. The red line shows the responses to the small stimulus.
Acknowledgements
This work was partly supported by the Japanese Society for the Promotion of Science (JSPS) (KAKENHI grants 22K12138, 22K12183, 23H03697, and 23K06394) and a grant of the Research Initiative Program of Toho University (TUGRIP).
References
1. https://doi.org/10.1093/cercor/bhac355
2. https://doi.org/10.1016/j.celrep.2018.10.029.
3. https://doi.org/10.1038/nn.4562.


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P320: Updating of spatial memories in a systems-model of vector trace cells in the subiculum
Tuesday July 8, 2025 17:00 - 19:00 CEST
P320 Updating of spatial memories in a systems-model of vector trace cells in the subiculum

Fei Wang*1, Andrej Bicanski1

1Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany


*Email: wangf@cbs.mpg.de


Introduction
The subiculum (Sub) is known as the output layer of the hippocampal formation and contains boundary vector cells (BVCs), which fire for boundaries at specific allocentric directions and distances [1,2]. More recently it has been shown that Sub vector cells can exhibit traces that persist for hours after boundary/object removal [1] (Fig. 1a). Prior models suggest that such traces can be evoked by place cells (PCs), which index boundary/object presence at encoding [2]. Vector trace cells (VTCs) occur mainly in the distal Sub (dSub); however, an account of proximo-distal differences remains absent. Here we propose that vector trace coding in the Sub provides a mismatch signal to update spatial memory representations.

Methods
In our model (Fig. 1b), dSub neurons receive feedforward input from either direct sensory information (BVCs in pSub) or mnemonic information (PCs in CA1). Mismatch between these inputs updates CA1-dSub synapses, with different dSub units having varying updating rates. Following the hypothesized CA1-Sub proximal-distal pathway [3], which is implicated in spatial memory specialization, we show how inserted cues affect distal and proximal CA1 (dCA1, pCA1) and their corresponding dSub units. In this model, space-related pCA1 PCs transfer mnemonic information to dSub, while object-related dCA1 PCs exhibit place field drift toward the inserted cue, influencing the probability of synaptic updates between pCA1 and dSub units.
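A minimal sketch of a mismatch-dependent update of this kind, assuming a delta-rule formulation (the authors' exact rule may differ): CA1-to-dSub weights move toward the sensory BVC input wherever the two pathways disagree, with per-unit updating rates.

```python
# Hypothetical delta-rule sketch of mismatch-dependent CA1->dSub learning.
import numpy as np

def update_weights(W, ca1, bvc, rates):
    """W: (n_dsub, n_ca1); ca1: (n_ca1,); bvc, rates: (n_dsub,)."""
    mnemonic = W @ ca1                             # memory-pathway prediction
    mismatch = bvc - mnemonic                      # sensory minus mnemonic
    W += rates[:, None] * np.outer(mismatch, ca1)  # move toward sensory input
    return W

W = np.zeros((5, 20))
rates = np.linspace(0.01, 0.5, 5)        # dSub units differ in updating rate
for _ in range(100):
    ca1 = np.random.rand(20)
    W = update_weights(W, ca1, bvc=np.ones(5), rates=rates)
```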
Results
We find that our mismatch-dependent learning model accounts for known VTC properties [1], including: (i) the distribution of VTCs along the proximodistal axis, (ii) the percentage of VTCs across different cue types, and (iii) the hours-long persistence of vector traces. (iv) By enriching CA1 representations, our model further explains additional empirical findings, including object-centered population coding in CA1 [3]. (v) VTCs have longer tuning distances after cue removal.
Discussion
Our model suggests that mismatch detection for the updating of associative memory offers mechanistic explanations for findings in the CA1-Sub pathway, and it predicts a function for the Sub in coordinating spatial encoding and memory retrieval. Additionally, it describes the distinctive neural coding for novel objects and familiar contexts and their impacts on memory retrieval. Our work constitutes the first dedicated circuit-level model of computation within the Sub and provides a potential framework to extend the standard model of hippocampal function with a Sub component.



Figure 1. (a) Experimental procedure. Rats foraged for food while Sub neurons were recorded. Heatmaps show firing rates as a function of the rat's position (adapted from Poulter et al., 2021). (b) Our model has a perceptual pathway (pSub-dSub) and a memory pathway (CA1-dSub). Arrow widths represent connection strength. dSub units update CA1-dSub weights at varying rates, shown by different colors.
Acknowledgements
AB and FW acknowledge funding from the Max-Planck Society. Additionally, we thank Colin Lever at Durham University for insightful discussions, valuable advice, and access to preliminary data.
References
1. Poulter, S., Lee, S. A., Dachtler, J., Wills, T. J., & Lever, C. (2021). Vector trace cells in the subiculum of the hippocampal formation. Nature Neuroscience, 24(2), 266-275.
2. Bicanski, A., & Burgess, N. (2018). A neural-level model of spatial memory and imagery. eLife, 7, e33752.
3. Vandrey, B., Duncan, S., & Ainge, J. A. (2021). Object and object-memory representations across the proximodistal axis of CA1. Hippocampus, 31(8), 881-896.








Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P321: Overcoming the space-clamp effect: reliable recovery of local and effective synaptic conductances of neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P321 Overcoming the space-clamp effect: reliable recovery of local and effective synaptic conductances of neurons

Ziling Wang1,2,3, David McLaughlin*4,5,6,7,8, Douglas Zhou*1,2,3, Songting Li*1,2,3
1School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
2Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
3Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
4Courant Institute of Mathematical Sciences, New York University, New York, New York 10012
5Center for Neural Science, New York University, New York, New York 10012
6New York University Shanghai, Shanghai 200122, China
7NYU Tandon School of Engineering, New York University, Brooklyn, NY 11201
8Neuroscience Institute of NYU Langone Health, New York University, New York, NY 10016

*Email: david.mclaughlin@nyu.edu, zdz@sjtu.edu.cn, or songting@sjtu.edu.cn
Introduction

To understand the interplay between excitatory (E) and inhibitory (I) inputs in neuronal networks, it is necessary to separate and recover E from I inputs. Somatic recordings are more accessible than those from local dendrites, which poses challenges for recovering input characteristics and distinguishing E from I after dendritic filtering. Somatic voltage clamp methods [1,2] address these issues by assuming an iso-potential neuron. However, as shown in Fig. 1A, this assumption is debated, as the voltage is nonuniform across the neuron due to its complex morphology [3]. This nonuniform voltage, known collectively as the space-clamp effect, leads to inaccurate conductance estimates and can even yield erroneous negative conductances [4].
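For context, the traditional point-neuron decomposition underlying the somatic voltage-clamp methods [1,2] can be written as follows: clamping at two holding potentials yields a linear system for the E and I conductances, valid only under the iso-potential assumption questioned here.

```latex
% Traditional somatic voltage-clamp decomposition (iso-potential assumption):
% the clamp current at holding potential V_h splits into leak, E, and I terms,
% and two holding potentials give a solvable linear system for g_E, g_I.
\begin{align}
  I_{\mathrm{clamp}}(t) &= g_L\,(V_h - E_L) + g_E(t)\,(V_h - E_E)
    + g_I(t)\,(V_h - E_I), \\
  \begin{pmatrix} V_h^{(1)}-E_E & V_h^{(1)}-E_I \\
                  V_h^{(2)}-E_E & V_h^{(2)}-E_I \end{pmatrix}
  \begin{pmatrix} g_E(t) \\ g_I(t) \end{pmatrix}
  &=
  \begin{pmatrix} I^{(1)}(t) - g_L\,(V_h^{(1)}-E_L) \\
                  I^{(2)}(t) - g_L\,(V_h^{(2)}-E_L) \end{pmatrix}.
\end{align}
```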


Methods
We study mathematical models of voltage clamping, beginning with an asymptotic analysis of an idealized cable neuron model with realistic time-varying synaptic inputs, and then extending the analysis to simulations of realistic model neurons with varying types, morphologies, and active ion channels. The asymptotic analysis describes in detail the response of the idealized neuron under somatic clamping, and thus captures the discrepancy between the local synaptic conductance on the dendrite, the effective conductances at the soma and the traditional voltage clamp approximation. This discrepancy arises primarily due to the traditional approach’s oversight of the space clamp effect.

Results
With this detailed quantitative understanding of the neural response, we refine the traditional method to circumvent the space-clamp effect, enabling accurate recovery of local and effective conductances from somatic measurements. Specifically, we develop a two-step clamp method that separately recovers the mean and time constants of the local conductance on the dendrite when a neuron receives a single synaptic input. In addition, under in vivo conditions with multiple inputs, we propose an intercept method to extract effective net E and I conductances. Both methods are grounded in perturbation analyses and validated using biologically detailed multi-compartment neuron models with active channels included, as shown in Fig. 1B-D.

Discussion
Our methods consistently achieve high accuracy in estimating both local and effective conductances in simulations involving various realistic neuron models. Accuracy holds over a broad range of synaptic input strengths, input locations, ionic channels, and receptors. However, two factors can degrade accuracy: large EPSPs and active HCN channels. Large EPSPs, particularly at dendritic tips, require higher-order corrections beyond first-order perturbation theory. HCN channels also reduce accuracy, but blocking them restores precision. Our approach is robust across neuron types, as demonstrated in simulations of mPFC fast-spiking neurons, cerebellar Purkinje neurons, and hippocampal pyramidal neurons.





Figure 1. Performance of our method for recovering local and effective conductances in a realistic neocortical layer 5 pyramidal neuron model. (A) Voltage distribution across the pyramidal neuron under somatic voltage clamp condition. (B–D) Our methods perform well in estimating local synaptic conductance features—the mean (B) and time constant (C), as well as the effective conductance at the soma (D).
Acknowledgements
This work was supported by Science and Technology Innovation 2030-Brain Science and Brain-Inspired Intelligence Project (No.2021ZD0200204 D.Z., S.L.); Science and Technology Commission of Shanghai Municipality (No.24JS2810400 D.Z.); National Natural Science Foundation of China (No.12225109, 12071287 D.Z.; 12271361, 12250710674 S.L.) and Student Innovation Center at SJTU (Z.W., D.Z. and S.L.).
References
[1] https://doi.org/10.1038/30735
[2] https://doi.org/10.1016/j.neuron.2011.12.013
[3] https://doi.org/10.1038/nrn2286
[4] https://doi.org/10.1038/nn.2137
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P322: Optogenetic inhibition of a hippocampal network model
Tuesday July 8, 2025 17:00 - 19:00 CEST
P322 Optogenetic inhibition of a hippocampal network model



Laila Weyn*1,2, Thomas Tarnaud1,2, Wout Joseph1, Robrecht Raedt2, Emmeric Tanghe1
1WAVES, Department of Information Technology (INTEC), Ghent University/IMEC, Technologiepark 126, 9000 Ghent, Belgium
24BRAIN, Department of Neurology, Institute for Neuroscience, Ghent University, Corneel Heymanslaan 10, 9000 Gent, Belgium
*Email: laila.weyn@ugent.be


Introduction

Optogenetic inhibition of the hippocampus has emerged as a promising approach for suppressing seizures associated with temporal lobe epilepsy (TLE). Given the substantial size of the hippocampus and the inherent challenges of light propagation within the brain, understanding the influence of the volume and nature of the targeted region is crucial. To address these challenges, an in silico approach has been developed, allowing systematic exploration of the impact of different target regions on the effectiveness of optogenetic inhibition of seizure-like activity in the hippocampus.
Methods
The hippocampal model described by Aussel et al. (2022) was modified and implemented in NEURON [1,2]. A photocurrent described by the double two-state opsin model was added to excitatory neurons of the dentate gyrus (DG_E) and Cornu Ammonis 1 (CA1_E) [3]. The impact of hippocampal sclerosis (HS) and mossy fiber sprouting (MFS) modelling [1] on excitability was assessed via an I/O curve of the CA1_E response to DG_E stimulation. Uncontrolled, self-sustaining, high-frequency activity was induced in an epileptic network (MFS = 0.9) by reducing the inhibitory component of the EC theta input (see Fig. 1A). The effect of the target region on optogenetic inhibition was studied by varying the number of CA1_E and DG_E cells receiving a light pulse.
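For illustration, the sketch below simulates a deliberately simpler single two-state (closed/open) opsin photocurrent rather than the double two-state model of [3]; the rate constants and the strongly negative reversal potential are placeholders mimicking an inhibitory opsin.

```python
# Toy two-state opsin: irradiance drives opening, producing an outward
# (hyperpolarizing) photocurrent; all parameters are illustrative.
import numpy as np

def simulate_photocurrent(light, dt=1e-4, k_on=50.0, k_off=20.0,
                          g_max=1.0, v=-65.0, e_rev=-400.0):
    """light: 1D array of normalized irradiance; returns a current trace."""
    o = 0.0                                        # open-state fraction
    out = np.empty(light.size)
    for i, L in enumerate(light):
        o += dt * (k_on * L * (1 - o) - k_off * o)  # opening vs. closing
        out[i] = g_max * o * (v - e_rev)            # outward photocurrent
    return out

pulse = np.r_[np.zeros(1000), np.ones(5000), np.zeros(4000)]  # 0.5 s pulse
i_photo = simulate_photocurrent(pulse)
```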
Results
The steeper slope of the population response curve suggests that increased MFS correlates with enhanced excitability; for HS, an inverse relationship is observed (Fig. 1B). When 100% of both the DG_E and CA1_E regions are illuminated, all activity within the epileptic network is suppressed (Fig. 1C). Reducing the illumination of DG_E allows the network activity to return to theta activity. Notably, illumination of DG_E alone is insufficient to suppress the high-frequency firing. These findings indicate that CA1 is the better target region for inhibiting hippocampal activity.
Discussion
The results regarding HS and MFS are in line with those observed by Aussel et al. (2022), though a different type of seizure-like activity is generated. Furthermore, the study shows the importance of selecting the appropriate stimulation region to effectively suppress hippocampal seizures. This preliminary investigation explores the capabilities of the network model but further investigation into the generation of seizure-like activity is necessary. Future work will aim for experimental validation of the model generated seizure-like activity and its response to optogenetic inhibition, with the ultimate aim of optimizing stimulation protocols.





Figure 1. A. Healthy and epileptic network response to EC theta current input and optogenetic modulation of CA1 and DG. B. Population response of CA1_E as a function of DG_E activity after stimulation at varying MFS and HS levels. C. Spike count in CA1_E and DG_E populations during optogenetic modulation (t = 1.75:2.25s) of varying amounts of neurons.
Acknowledgements

This work is supported by BOF project SOFTRESET.


References

[1] https://doi.org/10.1007/s10827-022-00829-5
[2] https://doi.org/10.1017/CBO9780511541612
[3] https://doi.org/10.3389/fncom.2021.688331





Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P323: Modularity and inhibition: the transition from burst-suppression to healthy EEG signals in a microscale model
Tuesday July 8, 2025 17:00 - 19:00 CEST
P323 Modularity and inhibition: the transition from burst-suppression to healthy EEG signals in a microscale model

Guido Wiersma1,*, Michel van Putten1,2, Nina Doorn1
1Department of Clinical Neurophysiology, University of Twente, 7500 AE Enschede, The Netherlands
2Department of Neurology and Clinical Neurophysiology, Medisch Spectrum Twente, 7500 KA Enschede, The Netherlands



*Email: wiersmaguido@gmail.com
Introduction

Burst-suppression (BS) is an electroencephalogram (EEG) pattern consisting of high-voltage episodes (bursts, >20 µV) alternating with low-voltage or even isoelectric periods (suppression) [1]. It can be categorized into BS with identical bursts, observed in comatose patients after brain ischemia and indicating poor prognosis, and BS with heterogeneous bursts [2]. Whereas past research did not identify the neural origin of BS, recent work showed that the shift from heterogeneous to identical BS is caused by the loss of either inhibition or modularity in the connectivity between neurons [3]. Here, we hypothesize that when both inhibition and modularity are included in a network, the transition from BS to a healthy network state can be modelled.

Methods
To simulate the pathological and healthy states, a network of 2000 adaptive integrate-and-fire (IF) neurons is constructed. Such networks are known to generate both BS and a wide variety of healthy characteristics observed in EEG (e.g., alpha or gamma activity) [4]. The adaptation mechanism of the IF neurons is conductance-based, preventing unrealistically negative membrane voltages during suppression periods, as described in e.g. [5]. Inspired by Gao et al., simulation-based inference is used to explore the wide variety of dynamics resulting from a broad range of free parameters [4].
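A minimal sketch of the simulation-based inference workflow, assuming the sbi package's SNPE interface; the two free parameters (labelled inhibition and modularity here) and the single summary statistic are placeholders standing in for the network model and its EEG features.

```python
# Sketch of simulation-based inference with sbi: train a neural posterior
# on (parameters, summary statistics) pairs, then condition on target data.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta):
    """Stand-in for the 2000-neuron network: parameters -> EEG feature."""
    inh, mod = theta[:, 0], theta[:, 1]       # inhibition, modularity
    burst_rate = torch.relu(1.0 - inh * mod) + 0.05 * torch.randn(len(theta))
    return burst_rate.unsqueeze(1)            # one toy summary statistic

prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))
theta = prior.sample((2000,))
x = simulator(theta)

inference = SNPE(prior=prior)
density = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density)
samples = posterior.sample((500,), x=torch.tensor([[0.1]]))  # target feature
```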

Results
The results show the influence of inhibition and modularity on the simulation of BS and healthy network states. Furthermore, by using one channel EEG data as target observations for the parameter inference, combined with the broad parameter range, we show to what extent the proposed microscale model can simulate these target EEG signals.

Discussion
The roles of inhibition and modularity provide new insights into the mechanisms behind the transition from healthy brain states to BS, opening potential pathways for treatments of comatose patients after ischemia. Although the model consists of only 2000 neurons, the striking similarity between BS patterns generated in vitro and those observed in EEG recordings highlights the potential of microscopic models to capture features of large-scale brain activity [3,6,7]. This study demonstrates the potential of such biophysically detailed models to uncover cellular-level insights from EEG signals.





Acknowledgements
We thank Maurice van Putten, PhD, for his invaluable support, expertise, and generous provision of the code to implement synaptic parallel computing for dynamic load balancing.
References
[1] https://doi.org/10.1097/01.nrl.0000178756.44055.f6
[2] https://doi.org/10.1016/j.clinph.2013.10.017
[3] https://doi.org/10.12751/nncn.bc2024.146
[4] https://doi.org/10.1101/2024.08.21.608969
[5] https://doi.org/10.1162/neco_a_01342
[6] https://doi.org/10.1152/jn.00316.2002
[7] https://doi.org/10.1109/TBME.2004.827936
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P324: Brain Criticality Trajectories in Aging: From Cognitive Slowing to Hyperexcitability
Tuesday July 8, 2025 17:00 - 19:00 CEST
P324 Brain Criticality Trajectories in Aging: From Cognitive Slowing to Hyperexcitability

Kaichao Wu*1, Leonardo L. Gollo1,2

1Brain Networks and Modelling Laboratory, The Turner Institute for Brain and Mental Health, School of Psychological Sciences, and Monash Biomedical Imaging, Monash University, Victoria 3168, Australia
2Institute for Cross-Disciplinary Physics and Complex Systems, IFISC (UIB-CSIC), Palma de Mallorca, Campus University de les Illes Baleares, Spain.
*Email: kaichao.wu@monash.edu

Introduction

Brain criticality—the dynamic balance between stability and flexibility in neural activity—is a fundamental property that supports efficient information processing, adaptability, and cognitive function [1-3]. However, how aging influences brain criticality remains a subject of debate, with conflicting findings in the literature [4,5]. Some studies suggest that normal aging shifts neural dynamics toward a subcritical state characterized by reduced neural variability and cognitive slowing [6]. In contrast, others propose that aging may lead to supercritical dynamics, increasing the risk of hyperexcitability and instability [7].
Methods
To reconcile these opposing views, we developed a whole-brain neuronal network model that simulates aging as a combination of two processes: healthy aging, which gradually prunes network connections at a steady rate (Figure 1A), and pathological aging, which introduces random lesions that locally alter regional excitability (Figure 1B). This model enables us to track how the distance to criticality (Figure 1C), estimated from the temporal correlation length (intrinsic timescales), evolves over time. We find that healthy aging drives the system toward subcriticality, while pathological aging progressively pushes the system toward supercriticality due to lesion accumulation and compensatory excitability changes (Figure 1D).
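To illustrate the link between the distance to criticality and the temporal correlation length, the following hedged sketch (not the authors' model) simulates a branching process with weak external drive and shows the intrinsic timescale growing as the branching ratio m approaches the critical value of 1; all numbers are illustrative.

import numpy as np

def autocorr_time(x):
    """Area under the positive part of the autocorrelation function."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac /= ac[0]
    cutoff = np.argmax(ac < 0) if np.any(ac < 0) else ac.size
    return ac[:cutoff].sum()

rng = np.random.default_rng(0)
drive, T = 0.05, 20000                    # weak external input, number of steps
for m in (0.7, 0.9, 0.99):                # branching ratio: distance to criticality
    A = np.empty(T)
    A[0] = 1.0
    for t in range(1, T):
        # each active unit triggers on average m descendants, plus the drive
        A[t] = rng.poisson(m * A[t - 1] + drive)
    print(f"m = {m:.2f}: intrinsic timescale ≈ {autocorr_time(A):.1f} steps")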
Results
Our results reveal two distinct trajectories of criticality in aging. In normal aging, where no major disruptions occur, neural dynamics gradually shift toward subcriticality, aligning with empirical findings of diminished neural variability and cognitive slowing in older adults [5]. Conversely, in pathological aging, an initial decline in criticality due to network degradation is followed by a shift toward supercriticality, potentially contributing to hyperexcitable states observed in neurodegenerative diseases.
Discussion
These findings offer a theoretical framework that reconciles previously conflicting results, demonstrating that normal and pathological aging follow distinct criticality trajectories. By identifying key mechanisms underlying these transitions, our model provides insights into early detection of neurodegenerative diseases and highlights potential interventions aimed at preserving critical neural dynamics in aging populations.





Figure 1. Brain criticality trajectories in aging. (A) Brain network connectivity (K) decreases with normal aging. (B) In pathological aging, excitability within localized brain regions increases. (C) The neuronal network model indicates two distinct trajectories for normal and pathological aging. (D) The relationship between intrinsic timescales and criticality.
Acknowledgements
This work was supported by the Australian Research Council (ARC), Future Fellowship (FT200100942), the Rebecca L. Cooper Foundation (PG2019402), the Ramón y Cajal Fellowship (RYC2022-035106-I) from FSE/Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation, and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M.
References


1. Cocchi, L., et al. (2017). https://doi.org/10.1016/j.pneurobio.2017.07.002
2. Munoz, M. A. (2018). https://doi.org/10.1103/RevModPhys.90.031001
3. O'Byrne, et al. (2022). https://doi.org/10.1016/j.tins.2022.08.007
4. Zimmern, V. (2020). https://doi.org/10.3389/fncir.2020.00054
5. Heiney, K., et al. (2021). https://doi.org/10.3389/fncom.2021.611183
6. Wu, K., et al. (2025). https://doi.org/10.1038/s42003-025-07517-x
7. Fosque, L. J., et al. (2022). https://doi.org/10.3389/fncom.2022.1037550
8. Garrett, D. D., et al. (2013). https://doi.org/10.1093/cercor/bhs055
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P325: Disrupted Temporal Dynamics in Stroke: A Criticality Framework for Intrinsic Timescales
Tuesday July 8, 2025 17:00 - 19:00 CEST
P325 Disrupted Temporal Dynamics in Stroke: A Criticality Framework for Intrinsic Timescales

Kaichao Wu*1, Leonardo L. Gollo1,2

1Brain Networks and Modelling Laboratory, The Turner Institute for Brain and Mental Health, School of Psychological Sciences, and Monash Biomedical Imaging, Monash University, Victoria 3168, Australia
2Institute for Cross-Disciplinary Physics and Complex Systems, IFISC (UIB-CSIC), Palma de Mallorca, Campus University de les Illes Baleares, Spain.
*Email: kaichao.wu@monash.edu
Introduction

Stroke profoundly disrupts brain function [1-3], yet its impact on temporal dynamics—critical for efficient information processing and recovery—remains poorly understood. Intrinsic neural timescales (INT), which quantify the temporal persistence of neural activity, offer a valuable framework for investigating these dynamic alterations [4,5]. However, the extent to which stroke influences INT and the mechanisms underlying these changes remain unclear.


Methods
This study leverages a longitudinal dataset comprising 15 ischemic stroke patients who underwent resting-state functional MRI at five evenly spaced intervals over six months. INT was computed by estimating the area under the positive autocorrelation function of BOLD signal fluctuations across whole-brain regions [6]. We compared stroke patients' INT values to those of age-matched healthy controls to assess lesion-induced disruptions. Additionally, we analyzed the hierarchical organization of INT across functional networks and examined its relationship with motor recovery, classifying patients into good and poor recovery groups based on clinical assessments. To explore potential mechanisms, we modeled networks of excitable spiking neurons using the Kinouchi & Copelli framework [6,7], investigating the causal relationship between neural excitability and INT within a criticality framework (Fig. 1).
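A minimal sketch of this INT estimate, assuming a synthetic AR(1) stand-in for a regional BOLD signal and an illustrative TR of 2 s (the actual preprocessing is as described above):

import numpy as np

def intrinsic_timescale(bold, tr=2.0):
    """INT in seconds: sum of ACF values from lag 1 up to the first negative lag."""
    x = bold - bold.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:] / (x.var() * x.size)
    first_neg = np.argmax(acf < 0) if np.any(acf < 0) else acf.size
    return acf[1:first_neg].sum() * tr

rng = np.random.default_rng(0)
phi, n = 0.8, 600                          # AR(1) persistence, number of volumes
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(f"INT ≈ {intrinsic_timescale(x):.1f} s")   # theory: phi/(1-phi) * TR = 8 s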
Results
Our findings revealed that stroke patients exhibited significantly prolonged INT compared to healthy controls, a pattern that persisted across all recovery stages. The hierarchical structure of INT, which reflects balanced specialization across brain networks, was markedly disrupted in the early post-stroke phase. By two months post-stroke, differences in INT trajectories emerged between recovery groups, with poor recovery patients displaying abnormally prolonged INT, particularly in the dorsal attention, language, and salience functional networks. These findings align with theoretical predictions from excitable neuron network models, which suggest that stroke lesions may shift the brain’s dynamics toward criticality or even into the supercritical regime (Fig. 1).
Discussion
Our results indicate that stroke-induced INT prolongation reflects increased neural network excitability, pushing the brain toward criticality or even into a supercritical state. The persistent INT abnormalities observed in poorly recovering patients suggest that early-stage INT alterations could serve as prognostic biomarkers for long-term functional outcomes. These findings provide insights into stroke-induced disruptions of brain criticality and highlight the potential of non-invasive neuromodulatory interventions to restore normal INT and facilitate recovery [5]. By advancing our understanding of temporal dynamic changes in stroke, this work sheds light on post-stroke neural reorganization and opens new avenues for targeted rehabilitation strategies using non-invasive brain stimulation.




Figure 1. Stroke lesions prolong intrinsic neural timescales and alter network dynamics, shifting them from a slightly subcritical state (blue) toward criticality (red), with the potential to enter a supercritical state. Near a phase transition, cortical network dynamics can be modeled as a branching process, where intrinsic neural timescales peak at the critical point [6].
Acknowledgements
This work was supported by the Australian Research Council (ARC), Future Fellowship (FT200100942), the Rebecca L. Cooper Foundation (PG2019402), the Ramón y Cajal Fellowship (RYC2022-035106-I) from FSE/Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation, and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M.
References
1. Carrera, E., & Tononi, G. (2014). https://doi.org/10.1093/brain/awu191
2. Park, C.-h., Chang, W. H., Ohn, S. H., et al. (2011). https://doi.org/10.1161/STROKEAHA.110.603846
3. Volz, L. J., Rehme, A. K., Michely, J., et al. (2016). https://doi.org/10.1093/cercor/bhv136
4. Golesorkhi, M., et al. (2021). https://doi.org/10.1038/s41522-021-00447-z
5. Gollo, L. L. (2019). https://doi.org/10.7554/eLife.45089
6. Wu, K., & Gollo, L. L. (2025). https://doi.org/10.1038/s41522-025-00875-2
7. Kinouchi, O., & Copelli, M. (2006). https://doi.org/10.1038/nphys292
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P326: Modeling language evolution with spin glass dynamics
Tuesday July 8, 2025 17:00 - 19:00 CEST
P326 Modeling language evolution with spin glass dynamics

Hediye Yarahmadi*1, Alessandro Treves1

1Cognitive Neuroscience, SISSA, Trieste, Italy

*Email: hediye.yarahmadi@sissa.it

Introduction

Recent advances in phylogenetic linguistics by Longobardi and colleagues [1], based on syntactic parameters, seem to reconstruct language evolution farther in the past than traditional etymological approaches. Combined with quantitative statistics, this Parametric Comparison Method also raises general questions: why does syntax keep changing? Why do languages diversify instead of converging into efficient forms? And why is this change so slow, over centuries? We hypothesize that the fundamental reasons are disorder and frustration: syntactic parameters interact through disordered interactions, subject to weak external drives and, unable to settle into a state fully compatible with all interactions, they evolve slowly with “glassy” dynamics.

Methods
To explore this hypothesis, we model a "language" as a binary vector of the 94 syntactic parameters considered in the Longobardi database, and assume that they interact both through the explicit and asymmetric dependencies that linguists call "implications" (which may lead to rotating changes [2]) and through weak, partly asymmetric interactions, which we assign at random with a relative strength σ and a degree of asymmetry φ ranging from 0° (symmetric) to 90° (fully antisymmetric). Using Glauber dynamics, we simulate the evolution of these parameters, assuming external fields only set the initial conditions. We then introduce a Hopfield-like symmetric component to the interaction term, expected to glassify syntax dynamics further.
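The following is a hedged sketch of this setup (not the authors' code): Glauber dynamics on 94 binary parameters with random couplings whose symmetric and antisymmetric parts are mixed by the asymmetry angle φ. The implicational dependencies and the Hopfield term are omitted, and σ, φ, and the inverse temperature β are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, sigma, phi_deg, beta, steps = 94, 1.0, 45.0, 2.0, 100
phi = np.deg2rad(phi_deg)

G = rng.normal(0.0, sigma / np.sqrt(n), (n, n))
S, A = (G + G.T) / 2, (G - G.T) / 2            # symmetric / antisymmetric parts
J = np.cos(phi) * S + np.sin(phi) * A          # phi=0: symmetric; phi=90 deg: antisymmetric
np.fill_diagonal(J, 0.0)

s = rng.choice([-1.0, 1.0], n)                 # initial "language" vector
flips_per_step = []
for _ in range(steps):
    flips = 0
    for i in rng.permutation(n):               # asynchronous Glauber updates
        h_i = J[i] @ s                         # local field on parameter i
        s_new = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h_i)) else -1.0
        flips += s_new != s[i]
        s[i] = s_new
    flips_per_step.append(flips)
print("flips at step 100:", flips_per_step[-1])  # near zero in the frozen (glassy) phase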

Results
Fig. 1a sketches the (φ, σ, γ=0) phase diagram based on simulations of the average number of parameter flips at the 100th time step. Syntactic parameters get trapped in a steady state (one of a disordered multiplicity) for low asymmetry, while they continue to evolve for higher asymmetry. The strength of the random interactions is almost irrelevant, but when they dominate (σ→∞) the transition is sharp at φ=30°. For low σ, the dynamics slow down, but at σ≡0 they continue indefinitely: implications alone allow no steady state. Fig. 1b presents the phase diagram in the (φ=90°, σ, γ) space, showing a transition from a glassy to a chaotic state. The balance between symmetry and asymmetry is crucial, and a large γ stabilizes the system via the Hopfield term.

Discussion
The sharp transition at φ=30° for σ→∞ and γ→0 aligns with previous studies of asymmetric spin glasses [3] (η=1/2 in their notation), indicating that varying the interaction symmetry induces a phase transition from glassy to chaotic dynamics. This suggests that to understand language evolution in the syntax domain it is essential to include, alongside the implicational structure constraining parameter changes, disordered interactions that have so far eluded linguistic analysis, in part because of their quantitative rather than logical nature. We are now working on integrating the Hopfield-like structure, which brings languages closer to metastable states.





Figure 1. Phase diagrams: (a) At γ=0, in the σ-φ plane, the system freezes with symmetric interactions (up to φ≈30°) and becomes fluid as asymmetry increases for large σ. Similar behavior occurs as σ→0, but with slower fluid dynamics; with σ≡0, the dynamics are chaotic. (b) At φ=90°, in the σ-γ plane, freezing occurs for γ/σ > 0.01, becoming fluid as the Hopfield term decreases. Symmetry balance is key.
Acknowledgements
We would like to express our sincere gratitude to G. Longobardi for providing access to the database used in this study.
References
[1] Ceolin A, Guardiano C, Longobardi G, et al. (2021). At the boundaries of syntactic prehistory. Phil Trans Roy Soc B, 376(1824), 20200197. https://doi.org/10.1098/rstb.2020.0197
[2] Crisma P, Fabbris G, Longobardi G, & Guardiano C (2025). What are your values? Default and asymmetry in parameter states. J Historical Syntax, 9, 1-26. https://doi.org/10.18148/hs/2025.v9i2-10.182
[3] Nutzel K & Krey U (1993). Subtle dynamic behaviour of finite-size Sherrington-Kirkpatrick spin glasses with nonsymmetric couplings. J Physics A: Math Gen, 26, L591. https://doi.org/10.1088/0305-4470/26/14/011
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P327: Deciphering the Dynamics of Memory Encoding and Recall in the Hippocampus Using Information Theory and Graph Theory
Tuesday July 8, 2025 17:00 - 19:00 CEST
P327 Deciphering the Dynamics of Memory Encoding and Recall in the Hippocampus Using Information Theory and Graph Theory

Jess Yu*1, Hardik Rajpal2, Mary Ann Go1, Simon Schultz1

1Department of Bioengineering and Centre for Neurotechnology, Imperial College London, United Kingdom, SW7 2AZ
2Department of Mathematics and Centre for Complexity Science, Imperial College London, United Kingdom, SW7 2AZ

*Email: jin.yu21@imperial.ac.uk

Introduction
Alzheimer's disease (AD) profoundly impairs spatial navigation, a critical cognitive function dependent on hippocampal processing. While previous studies have documented the deterioration of place cell activity in AD, the mechanisms by which AD disrupts information processing across neural populations remain not fully understood. Traditional analyses focusing on individual neurons fail to capture the collective properties of neural circuits. We hypothesized that AD pathology disrupts not only individual cellular encoding but also the integration and sharing of spatial information across functional neuronal assemblies, leading to compromised spatial navigation.
Methods
We analysed hippocampal CA1 recordings obtained with two-photon calcium imaging from AD and wild-type (WT) mice, both young and old, during spatial navigation tasks in familiar and novel environments. At the single-cell level, we quantified spatial information using mutual information (MI) between neural spikes and location, and partial information decomposition (PID) [1] for pairs of neurons and location. For population-level analysis, we constructed functional networks using pairwise MI, identified stable functional neuronal assemblies using Markov Stability community detection [2], and applied PID to quantify how assemblies collectively encode spatial information through redundancy, synergy, joint mutual information, and the redundancy-synergy index.
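For the single-cell MI step, here is a minimal sketch with an assumed binning scheme (the actual pipeline, bin counts, and event detection are not specified in the abstract); the toy "place cell" fires preferentially near the middle of a 1-D track.

import numpy as np

def spatial_mi(spikes, position, n_bins=20):
    """MI (bits) between a binary spike train and binned 1-D position."""
    edges = np.linspace(position.min(), position.max(), n_bins + 1)[1:-1]
    pos_bin = np.digitize(position, edges)
    joint = np.zeros((2, n_bins))
    for s, p in zip(spikes, pos_bin):
        joint[int(s), p] += 1
    joint /= joint.sum()
    ps, pp = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (ps @ pp)[nz])).sum()

rng = np.random.default_rng(0)
pos = rng.random(5000)
rate = 0.05 + 0.4 * np.exp(-((pos - 0.5) ** 2) / 0.01)   # Gaussian place field
spk = rng.random(5000) < rate
print(f"spatial information ≈ {spatial_mi(spk, pos):.3f} bits")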
Results
Our analysis revealed a multi-scale disruption of spatial information processing in AD. At the single-cell level, AD-Old (ADO) mice showed significantly fewer spatially informative neurons and lower spatial information content. At the assembly level, we uncovered profound deficits in information integration. ADO assemblies showed significantly reduced redundancy and synergy compared to WT-Young controls, indicating impaired information sharing. The redundancy-synergy index revealed a significant shift in the balance between redundant and synergistic processing across neural assemblies.
Discussion
These findings provide novel insights into how AD disrupts neural information processing across multiple scales. The parallel degradation of both cellular encoding and assembly-level information integration suggests a compound effect of AD pathology on spatial navigation circuits. The reduced information sharing between assemblies points to a breakdown in coordinated activity necessary for effective spatial navigation. This multi-scale information-theoretic approach reveals that AD impairs not just individual neural responses but the mechanisms by which neural assemblies integrate spatial information, potentially guiding development of assembly-level therapeutic strategies.



Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) through the Physics of Life grant [EP/W024020/1].
References
[1] Williams, P. L., & Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515. https://doi.org/10.48550/arXiv.1004.2515
[2] Delvenne, J.-C., Yaliraki, S. N., & Barahona, M. (2008). Stability of graph communities across time scales. arXiv preprint arXiv:0812.1811. https://doi.org/10.48550/arXiv.0812.1811
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P328: Modelling the impacts of Alzheimer’s Disease and Aging on Self-Location and Spatial Memory
Tuesday July 8, 2025 17:00 - 19:00 CEST
P328 Modelling the impacts of Alzheimer’s Disease and Aging on Self-Location and Spatial Memory

Aleksei Zabolotnii*1, Christian F. Doeller1,2,3, Andrej Bicanski1,3

1Department of Psychology, Max-Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
2Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
3Wilhelm Wundt Institute for Psychology, Leipzig University, Germany

*Email: zabolotnii@cbs.mpg.de
Introduction

Spatial navigation relies on the precise coordination of multiple neural circuits, particularly in the entorhinal cortex (EC) and hippocampus (HPC). Grid cells in the EC play a critical role in path integration, while place cells in the HPC encode specific locations. Dysfunction in these systems is increasingly linked to cognitive decline in aging and Alzheimer's disease (AD) [1]. Early AD is characterized by EC dysfunction, including impaired neuronal activity and deficits in spatial navigation, even before neurodegeneration becomes evident [2]. Similarly, aging brings cognitive decline that affects navigational computations [3]. Here we investigate both kinds of deficits in a mechanistic systems-level model of spatial memory.

Methods
We extend the BB-model of spatial cognition [4] with a biologically plausible variant of a continuous attractor network (CAN) model of grid cells [5] and investigate the effect of perturbations on grid cells and the wider spatial memory system. Specifically, we investigate the stability against synaptic weight variability and neuronal loss, the former (to a first approximation) more akin to age-related neural degradation, and the latter mimicking AD-associated neurodegeneration. To quantify the impact of these perturbations, we analyzed the propagation of degraded spatial representations to downstream hippocampal and extra-hippocampal circuits and evaluated changes in the accuracy of self-location decoding from grid cells.
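As a reduced stand-in for the grid-cell CAN, the sketch below uses a 1-D ring attractor subjected to the two perturbations described above (weight jitter for aging, random neuron loss for AD), with population-vector decoding of the bump position; all parameters are illustrative assumptions, not the BB-model's.

import numpy as np

rng = np.random.default_rng(0)
N = 256
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))   # circular distances
W = 12.0 * (np.exp(-d**2 / 0.5) - 0.3)                         # local excitation, global inhibition

def settle(W, alive, steps=500, dt=0.1):
    """Relax rate dynamics to an activity bump; 'alive' masks lesioned neurons."""
    r = rng.random(N) * alive
    for _ in range(steps):
        drive = np.maximum(W @ r / N + 1.0, 0.0)
        r += dt * (-r + np.tanh(drive))
        r *= alive
    return r

def decode(r):
    """Population-vector estimate of the bump (self-location) angle."""
    return np.angle(np.sum(r * np.exp(1j * theta)))

alive = (rng.random(N) > 0.2).astype(float)      # ~20% neuron loss (AD-like)
W_aged = W * rng.normal(1.0, 0.2, W.shape)       # synaptic weight jitter (aging-like)
print("intact bump angle:   ", round(decode(settle(W, np.ones(N))), 3))
print("perturbed bump angle:", round(decode(settle(W_aged, alive)), 3))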
Results
We demonstrate that our biologically plausible grid cell model can cope with neural loss and changes in synaptic weights, both of which lead to distortions of the activity pattern on the grid cell sheet. Positional decoding degrades gracefully. We also observe the propagation of distorted spatial representations to downstream areas during the imagery-associated mode of the BB-model, as well as deficits in object-location memory.
Discussion
Our model demonstrates for the first time in a mechanistic model how neurodegenerative processes affect spatial accuracy. Damaged EC populations produce distorted activity, which causes imprecise firing of place cells and leads to the formation of distorted memories for the locations of novel objects in the environment. Due to changes in the CAN, population activity vectors are unable to provide a correct and unique code for every location in space compared to those in the healthy system, linking our model to the spatial behavior of AD patients and aging adults.



Acknowledgements
Aleksei Zabolotnii acknowledges the DoellerLab and the Neural Computation Group. Andrej Bicanski and Christian F. Doeller acknowledge funding from the Max Planck Society
References
[1] https://doi.org/10.1126/science.aac8128
[2] https://doi.org/10.1016/j.cub.2023.09.047
[3] https://doi.org/10.1016/j.neuron.2017.06.037
[4] https://doi.org/10.7554/eLife.33752
[5] https://doi.org/10.1371/journal.pcbi.1000291
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P329: The effect of overfitting on spatial perception and flight trajectories in pigeons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P329 The effect of overfitting on spatial perception and flight trajectories in pigeons

Margarita Zaleshina*1, Alexander Zaleshin2

1Moscow Institute of Physics and Technology, Moscow, Russia
2Institute of Higher Nervous Activity and Neurophysiology, Moscow, Russia

*Email: zaleshina@gmail.com

Introduction

The problem of overfitting in trained systems concerns not only artificial neural networks, but also living organisms and humans. Pre-trained templates can reduce processing time, but they increase errors in real dynamic situations. Rather than forming new templates, conventional models often reuse current ones with distortion, addition, or prolongation. Due to overfitting, data can be misinterpreted and relevant data can be filtered out [1].

In our work we study overfitting in pigeon flights. These birds often use accumulated knowledge and route-finding algorithms (guided by beacons, long roads, loft-like buildings) [2]. EEG activity in a familiar situation differs from brain activity in new conditions, which can be observed with Neurologgers and GPS trackers [3].
Methods
We compared GPS tracks and brain activity of untrained and trained pigeons flying over landscapes with different information loads: near the sea coast, and over rural or urbanized areas. Source materials were selected from the Dryad Digital Repository and the Movebank Data Repository.
We calculated brain frequencies and their changes; the standard deviation from the average flight path; the frequency of surveying (loops in trajectories); and the percentage of detectable "points of interest" (Fig. 1).
Spatial analysis of GPS tracks, detection of landscape boundaries, and detection of special points were performed using QGIS.
To identify overfitting, we computed the decrease in the flexibility of individual flights and the decrease in the power of high-frequency EEG.
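For illustration, the following sketch (not the study's QGIS workflow) computes two of the trajectory measures named above on a toy GPS track: the number of surveying loops, counted as self-crossings of the polyline, and the standard deviation of perpendicular distances from the straight start-to-end line.

import numpy as np

def _orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def count_loops(xy):
    """Count proper self-crossings of a trajectory polyline (O(n^2))."""
    n, k = len(xy), 0
    for i in range(n - 1):
        for j in range(i + 2, n - 1):          # skip adjacent segments
            p1, p2, p3, p4 = xy[i], xy[i + 1], xy[j], xy[j + 1]
            if (_orient(p1, p2, p3) * _orient(p1, p2, p4) < 0 and
                    _orient(p3, p4, p1) * _orient(p3, p4, p2) < 0):
                k += 1
    return k

def path_deviation(xy):
    """SD of perpendicular distances from the straight start-to-end line."""
    xy = np.asarray(xy, float)
    u = (xy[-1] - xy[0]) / np.linalg.norm(xy[-1] - xy[0])
    rel = xy - xy[0]
    perp = rel[:, 0] * (-u[1]) + rel[:, 1] * u[0]
    return perp.std()

# toy track (metres) that doubles back on itself twice while surveying
track = np.array([[0, 0], [100, 0], [100, 80], [40, -40], [0, 80]], float)
print("loops:", count_loops(track), "| deviation SD:", round(path_deviation(track), 1), "m")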
Results
Brain activity was most pronounced near the loft and least pronounced when pigeons flew along known routes over homogeneous terrain or along extended objects. Additionally, high brain activity and surveying were demonstrated by pigeons when examining points of interest or when moving from one type of landscape to another, even by trained pigeons.
Trained pigeons more often preferred to fly along a known track, even if it differed from the shortest route. In overfitting flights, surveying, standard deviations from the average flight track, and changes in flight direction were minimal. Overfitting flights were often observed over rural terrain, less often in the coastal zone. In flocks, the frequency of overfitting cases increased.
Discussion
The importance of overfitting is especially significant under modern conditions of the accelerated emergence and use of "big" digital data. Excessive templates and strict filters can often lead to errors or significantly limit variability. Using multilayer data sources makes it possible to accommodate and vary different planes of view, or contextual base points, which helps reduce overfitting.
Studying pigeon flight paths demonstrates the relationship between the external environment, chosen behavior, and the internal settings of trained birds. Surveying increases the ability to navigate in dynamic situations and to find interesting locations.

In the future, we plan to continue studying surveying and multilayer data exchange to reduce the overfitting problem.




Figure 1. Typical cases of pigeon flight and pigeon EEG power: untrained pigeon, trained pigeon, pigeon near a point of interest, pigeon after overfitting
Acknowledgements
-
References
1. Zaleshina, M. & Zaleshin, A. (2024). Spatial Learning and Overfitting in Visual Recognition and Route Planning Tasks. IJCCI & NCTA. 1: 576-583.
2. Blaser, N. et al. (2013). Testing Cognitive Navigation in Unknown Territories: Homing Pigeons Choose Different Targets. Journal of Experimental Biology. 216(16):3123–31.
3. Ide, K. & Takahashi, S. (2022). A Review of Neurologgers for Extracellular Recording of Neuronal Activity in the Brain of Freely Behaving Wild Animals. Micromachines.13(9):1529.

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P330: Quantitative Analysis of Artificial Intelligence Integration in Neuroscience
Tuesday July 8, 2025 17:00 - 19:00 CEST
P330 Quantitative Analysis of Artificial Intelligence Integration in Neuroscience

1Cate School, 1960 Cate Mesa Road, Carpinteria, CA, USA
2Department of Computer Science, Missouri State University, Springfield, MO, USA

*Email: trojancz@hotmail.com

Introduction

This study aimed to quantitatively assess the integration of artificial intelligence (AI) into neuroscience. By analyzing ~50,000 sample papers from the OpenAlex database [1], this study captured the breadth of AI applications across diverse disciplines of neuroscience and gauged emerging trends in research.

Methods
A dual-query strategy was applied. One query targeted neuroscience papers (2001-2022) mentioning AI-related terms (Figure 1), while a control query used only "neuroscience." An automated classification pipeline, built on a prompted GPT-4o model [2], dynamically processed titles and abstracts, and classified the papers into six categories: Behavioral Neuroscience, Cognitive Neuroscience, Computational Neuroscience, Neuroimaging, Neuroinformatics, and Unrelated to Neuroscience. Following classification, papers were aggregated by publication year and normalized via three strategies: division by totals in each discipline, division by annual OpenAlex counts, and a combination of the two. See Figure 1 for the workflow chart.
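Here is a sketch of the kind of dual query described above, using the public OpenAlex API; the filter fields follow OpenAlex's documented syntax, but the search terms are illustrative stand-ins for the study's full term list.

import requests

BASE = "https://api.openalex.org/works"

def yearly_counts(search_terms):
    """Yearly publication counts for works matching the search terms, 2001-2022."""
    params = {
        "filter": f"title_and_abstract.search:{search_terms},"
                  "from_publication_date:2001-01-01,to_publication_date:2022-12-31",
        "group_by": "publication_year",
    }
    groups = requests.get(BASE, params=params, timeout=30).json()["group_by"]
    return {int(g["key"]): g["count"] for g in groups}

ai_neuro = yearly_counts('neuroscience "deep learning"')   # illustrative AI query
control = yearly_counts("neuroscience")                    # control query
for year in sorted(ai_neuro):
    print(year, ai_neuro[year], control.get(year, 0))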
Results
Analysis revealed a dramatic surge from 2015 to 2022 in Computational Neuroscience (12% increase per year), Neuroinformatics (18% increase per year), and Neuroimaging (10% increase per year), whereas Cognitive and Behavioral Neuroscience displayed a plateau beginning in 2013, with slight declines thereafter (Figure 1).
Discussion
Findings underscore the heterogeneous integration of AI across neuroscience disciplines, suggesting distinct developmental trajectories and new avenues for interdisciplinary research. The surge in AI applications post-2015 appears driven by advances in computational power, algorithmic innovations, and data availability, accelerating research in Computational Neuroscience, Neuroinformatics, and Neuroimaging [3]. Conversely, the plateau in Cognitive and Behavioral Neuroscience after 2013 may reflect shifting priorities or methodological challenges. These results can guide future studies to target underexplored intersections and inform strategic investments in emerging fields.




Figure 1. Data processing and analysis workflow (left); number of publications per year (top right); yearly number of publications normalized by total publications (2001-2022) of each corresponding category.
Acknowledgements
We gratefully acknowledge the resources provided by OpenAlex and OpenAI. Their platforms enabled the data acquisition and automated classification essential to this bibliometric study.
References
[1] OpenAlex. (n.d.). OpenAlex: A comprehensive scholarly database. Retrieved from https://openalex.org
[2] OpenAI. (2024, May 13). GPT-4o API [Large language model]. Retrieved from https://openai.com/api
[3] Tekin, U., & Dener, M. (2025). A bibliometric analysis of studies on artificial intelligence in neuroscience. Frontiers in Neurology, 16:1474484. https://doi.org/10.3389/fneur.2025.1474484
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P331: Virtual Brain Inference (VBI): A Toolkit for Probabilistic Inference in Virtual Brain Models
Tuesday July 8, 2025 17:00 - 19:00 CEST
P331 Virtual Brain Inference (VBI): A Toolkit for Probabilistic Inference in Virtual Brain Models

Abolfazl Ziaeemehr*¹, Marmaduke Woodman¹, Lia Domide², Spase Petkoski¹, Viktor Jirsa¹, Meysam Hashemi¹

¹ Aix Marseille Univ, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
² Codemart, Cluj-Napoca, Romania

*Email: abolfazl.ziaee-mehr@univmail.com


Introduction

Understanding brain dynamics requires accurate models that integrate neural activity and neuroimaging data. Virtual brain modeling has emerged as a powerful approach to simulate brain signals based on neurobiological mechanisms. However, solving the inverse problem of inferring brain dynamics from observed neuroimaging data remains a challenge. The Virtual Brain Inference (VBI) [1] toolkit addresses this need by offering a probabilistic framework for parameter estimation in large-scale brain models. VBI combines neural mass modeling with simulation-based inference (SBI) [2] to efficiently estimate generative model parameters and uncover underlying neurophysiological mechanisms.

Methods

VBI integrates structural and functional neuroimaging data to build personalized virtual brain models. The toolkit supports various neural mass models, including Wilson-Cowan, Montbrió, Jansen-Rit, Stuart-Landau, Wong-Wang, and Epileptor. Using GPU-accelerated simulations, VBI extracts key statistical features such as functional connectivity (FC), functional connectivity dynamics (FCD), and power spectral density (PSD). Deep neural density estimators, such as Masked Autoregressive Flows (MAFs) and Neural Spline Flows (NSFs), are trained to approximate posterior distributions. This SBI approach allows efficient inference of neural parameters without reliance on traditional sampling-based methods.
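As a hedged illustration of the SBI step, the sketch below uses the open-source `sbi` package, which provides the MAF/NSF density estimators mentioned above; the two-parameter toy simulator is purely an assumption standing in for a virtual brain model.

import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))    # e.g. coupling, noise

def simulator(theta):
    """Toy stand-in for a neural mass model, returning two summary statistics."""
    coupling, noise = theta[:, 0], theta[:, 1]
    fc_mean = torch.tanh(3 * coupling) + 0.05 * noise * torch.randn_like(coupling)
    psd_peak = 10 * (1 - noise) + 0.1 * torch.randn_like(noise)
    return torch.stack([fc_mean, psd_peak], dim=1)

theta = prior.sample((2000,))
x = simulator(theta)
inference = SNPE(prior=prior, density_estimator="maf")        # masked autoregressive flow
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)
samples = posterior.sample((1000,), x=torch.tensor([[0.8, 5.0]]))  # posterior over parameters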
Results

We demonstrate VBI's capability by applying it to simulated and real neuroimaging datasets. The probabilistic inference framework accurately reconstructs neural parameters and identifies inter-individual variability in brain dynamics. Compared to traditional methods like Markov Chain Monte Carlo (MCMC) [3] and Approximate Bayesian Computation (ABC), VBI achieves superior scalability and efficiency. Performance evaluations highlight its robustness across different brain models and noise conditions. The ability to generate personalized inferences makes VBI a valuable tool for both research and clinical applications [4], aiding in the study of neurological disorders and cognitive function. See Fig. 1 for the workflow.
Discussion

VBI provides an efficient and scalable solution for inferring neural parameters from brain signals, addressing a critical gap in computational neuroscience. By leveraging SBI and deep learning, VBI enhances the interpretability and applicability of virtual brain models. This open-source toolkit offers researchers a flexible platform for modeling, simulation, and inference, fostering advancements in neuroscience and neuroimaging research.




Figure 1. Overview of the VBI workflow: (A) A personalized connectome is constructed using diffusion tensor imaging and a brain parcellation atlas. (B) This serves as the foundation for building a virtual brain model, with control parameters sampled from a prior distribution. (C) VBI simulates time series data corresponding to neuroimaging recordings. (D) Summary statistics, including functional connectivity.
Acknowledgements
This research was funded by the EU’s Horizon 2020 Programme under Grant Agreements No. 101147319 (EBRAINS 2.0), No. 101137289 (Virtual Brain Twin), No. 101057429 (environMENTAL), and ANR grant ANR-22-PESN-0012 (France 2030). We acknowledge Fenix Infrastructure resources, partially funded by the EU’s Horizon 2020 through the ICEI project (Grant No. 800858).

References
1. https://doi.org/10.1101/2025.01.21.633922
2. https://doi.org/10.1073/pnas.1912789117
3. https://doi.org/10.3150/16-BEJ810
4. https://doi.org/10.1088/2632-2153/ad6230



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P332: Relating Input Resistance and Sodium Conductance
Tuesday July 8, 2025 17:00 - 19:00 CEST
P332 Relating Input Resistance and Sodium Conductance

Laura Zittlow*1, Erin Munro Krull1, Lucas Swanson1
1Mathematical Sciences, Ripon College, Ripon, WI, US

*E-mail: laurazittlow@gmail.com
Introduction

The sodium conductance density (gNa) determines an axon's ability to propagate action potentials (APs). APs do not propagate if gNa is too low, while they propagate easily if gNa is high. Therefore, there is a sodium conductance density threshold (gNaT) [1]. Preliminary results suggest that the gNaT for axons with simple morphologies linearly predicts gNaT for axons with more complex morphologies [2,3]. To address axons with very complex morphologies, we compare gNaT to input resistance (Rin). Rin, defined as the ratio of steady-state voltage to injected current, inherently accounts for the axon's morphology and electrical properties [4].

Methods
We use NEURON simulations [5] to model Rin and AP propagation from an axon collateral to the end of the main axon. We vary the morphology of an extra side branch to see the effect of axon morphology on Rin and gNaT. For each simulation, we find the Rin and gNaT. We evaluate the impact of location for lengths of 0-6𝜆, several side branch morphologies with lengths from 0-6𝜆, and the location and length of sub-branches.
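A minimal NEURON/Python sketch of the Rin measurement (illustrative passive geometry and biophysics, not the study's axon model):

from neuron import h
import numpy as np
h.load_file("stdrun.hoc")

main = h.Section(name="main")
branch = h.Section(name="branch")
branch.connect(main(0.5))                       # extra side branch at mid-axon
for sec in (main, branch):
    sec.L, sec.diam, sec.nseg = 500.0, 1.0, 51
    sec.insert("pas")                           # passive membrane suffices for Rin
    for seg in sec:
        seg.pas.g = 1e-4                        # S/cm2 (assumed)
        seg.pas.e = -65.0

stim = h.IClamp(main(0.5))
stim.delay, stim.dur, stim.amp = 100.0, 200.0, 0.01   # 0.01 nA current step

v = h.Vector().record(main(0.5)._ref_v)
t = h.Vector().record(h._ref_t)
h.finitialize(-65.0)
h.continuerun(300.0)

v, t = np.array(v), np.array(t)
v0 = v[np.searchsorted(t, 99.0)]                # baseline just before the step
vss = v[np.searchsorted(t, 299.0)]              # steady state at the end of the step
print(f"Rin ≈ {(vss - v0) / stim.amp:.1f} MΩ")  # mV / nA = MΩ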
Results
Our simulations show a one-to-one correspondence between Rin and gNaT under specific morphological changes, modeled as a smooth function. Branch location and length affect Rin and gNaT inversely, with their effects stabilizing as the distances and lengths increase. However, when a short side branch connects at the same point as the simulated branch, an abnormality, "bouncing", occurs. Because shorter side branches are easier to stimulate, the AP can temporarily move into that branch and then bounce out. If only one variable (distance or morphology) changes, the error difference is 10⁻⁴ in gNaT for a given Rin. However, if "bouncing" occurs, then the error difference is on the scale of 10⁻².
Discussion
Our results indicate that Rin and gNaT respond monotonically to changes in axonal morphology unless "bouncing" occurs. This suggests Rin could serve as an alternative measure for axonal morphology when predicting gNaT, offering a computationally efficient method for estimating gNaT. However, "bouncing" disrupts the smooth relationship between Rin and gNaT by making AP propagation more likely. Moving forward, we aim to compare Rin across more complex morphologies. Additionally, we plan to curve-fit the Rin-gNaT relationship and test it against the linear estimation method and realistic axonal morphologies.




Acknowledgements
Thank you to my mentor Dr. Erin Munro Krull and the rest of the Ripon College Mathematical Sciences department for the advice and guidance. Also, thank you to Ripon College's Summer Opportunities for Advanced Research (SOAR) program and the many donors who help fund the program.
References
[1] https://doi.org/10.1152/jn.00933.2011
[2] https://doi.org/10.1186/s12868-018-0467-3
[3] https://doi.org/10.1186/s12868-018-0467-3
[4] Carnevale, N. T., & Hines, M. L. (2006). The NEURON book. Cambridge University Press.
[5] Tuckwell, H. C. (1988). Introduction to theoretical neurobiology: Volume 1. Linear cable theory and dendritic structure. Cambridge University Press.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P333: Synaptic transmission during ischemia and recovery: a biophysical model including the complete glutamate-glutamine cycle
Tuesday July 8, 2025 17:00 - 19:00 CEST
P333 Synaptic transmission during ischemia and recovery: a biophysical model including the complete glutamate-glutamine cycle

Hannah van Susteren1, Christine R. Rose2, Hil G.E. Meijer1, Michel J.A.M. van Putten3,4

1Department of Applied Mathematics, University of Twente, Enschede, the Netherlands
2Institute of Neurobiology, Heinrich Heine University, Düsseldorf, Germany
3Clinical Neurophysiology group, Department of Science and Technology, University of Twente, Enschede, the Netherlands
4Medisch Spectrum Twente, Enschede, the Netherlands

Email: h.vansusteren@utwente.nl
Introduction

Cerebral ischemia is a condition in which blood flow and oxygen supply are restricted. Consequences range from synaptic transmission failure to (ir)reversible neuronal damage [1,2]. However, the interplay of all the different effects of ischemia on synaptic transmission remains unknown. Excitatory synaptic transmission relies on the energy-dependent glutamate-glutamine (GG) cycle, which enables glutamate recycling via the astrocyte. We have constructed a detailed biophysical model that includes the first implementation of the complete GG cycle. Our model enables us to investigate the malfunction of synaptic transmission during ischemia and during recovery.

Methods
We extend the model in [3] and consider a presynaptic neuron and astrocyte in a finite extracellular space (ECS), surrounded by an oxygen bath as a proxy for energy supply (Fig. 1A). We consider sodium, potassium, chloride and calcium ion fluxes with corresponding channels and transporters such as the sodium-potassium ATPase. To model synaptic transmission, we combine calcium-dependent glutamate release with uptake by the excitatory amino acid transporter and the GG cycle. This cycle includes glutamine synthesis, glutamine transport and glutamate synthesis. We simulate ischemia by lowering the oxygen concentration in the bath. Furthermore, we simulate candidate recovery mechanisms involved in the recovery of physiological dynamics.
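To make the energy dependence concrete, here is a highly simplified three-variable sketch (not the full biophysical model): extracellular glutamate with tonic release, energy-dependent EAAT uptake, and energy-dependent glutamine synthesis; all rate constants and concentrations are assumptions in arbitrary units.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, atp):
    glu_ecs, glu_ast, gln = y                              # ECS glutamate, astrocytic glutamate, glutamine
    release = 0.5                                          # tonic release while depolarized (assumed)
    uptake = atp(t) * 2.0 * glu_ecs / (glu_ecs + 0.02)     # EAAT uptake, fails without energy
    synthesis = atp(t) * 0.8 * glu_ast                     # glutamine synthetase, ATP-dependent
    recycle = 0.5 * gln                                    # glutamine returned toward the neuron
    return [release - uptake, uptake - synthesis, synthesis - recycle]

atp = lambda t: 0.0 if 60.0 < t < 360.0 else 1.0           # five-minute energy blockade
sol = solve_ivp(rhs, (0.0, 600.0), [0.02, 1.0, 0.5], args=(atp,), max_step=0.5)
print("peak ECS glutamate (a.u.):", round(sol.y[0].max(), 1))  # accumulates during ischemia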
Results
We simulate severe ischemia by blocking energy supply for five minutes. In this scenario, the neuron enters a depolarization block (Fig. 1B). Repeated glutamate release and changes in ion concentrations result in toxic levels of glutamate in the ECS (Fig. 1C). The GG cycle is impaired due to malfunction of energy-dependent glutamine synthesis. Once energy supply is restored, the neuron remains depolarized and synaptic transmission disrupted. A candidate recovery mechanism is the blockade of the neuronal transient sodium channel. As a result, ion gradients recover, and glutamate clearance is restored. Electrical stimulation generates action potentials and physiological glutamate release, demonstrating full recovery of synaptic transmission.
Discussion
With our computational model that includes the first implementation of the GG cycle, we can simulate neuronal and astrocytic dynamics during ischemia and recovery. An important finding is that extreme glutamate accumulation is caused by ionic imbalances, and not only by excessive glutamate release. Furthermore, the GG cycle is disrupted due to impaired glutamine synthesis. In conclusion, our detailed model provides insight into the causes of excitatory synaptic transmission failure and suggestions for potential recovery mechanisms.





Figure 1. Figure 1: (A) Schematic overview of the model. (B) Membrane potentials and (C) extracellular glutamate during oxygen deprivation (grey area), sodium block (yellow area) and stimulation (dashed line).
Acknowledgements
This study was supported by the funds from the Deutsche Forschungsgemeinschaft (DFG), FOR2795 ‘Synapses under stress’.
References
1. https://doi.org/10.1016/j.neuropharm.2021.108557
2. https://doi.org/10.3389/fncel.2021.637784
3. https://doi.org/10.1371/journal.pcbi.1009019

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P334: Connectivity-based tau propagation and PET microglial activation in the Alzheimer’s disease spectrum
Tuesday July 8, 2025 17:00 - 19:00 CEST
P334 Connectivity-based tau propagation and PET microglial activation in the Alzheimer’s disease spectrum

Marco Öchsner1*, Matthias Brendel2,3, Nicolai Franzmeier4, Lena Trappmann5, Mirlind Zaganjori5, Ersin Ersoezlue5, Estrella Morenas-Rodriguez5,6, Selim Guersel5,6, Lena Burow5, Carolin Kurz5, Jan Haeckert5,8, Maia Tatò5, Julia Utecht5, Boris Papazov9, Oliver Pogarell5, Daniel Janowitz4, Katharina Buerger4,6, Michael Ewers4, Carla Palleis3,6,10, Endy Weidinger10, Gloria Biechele2, Sebastian Schuster2, Anika Finze2, Florian Eckenweber2, Rainer Rupprecht11, Axel Rominger2,12, Oliver Goldhardt13, Timo Grimmer13, Daniel Keeser1,5,9, Sophia Stoecklein9, Olaf Dietrich9, Peter Bartenstein2,3, Johannes Levin3,6,10, Günter Höglinger6,14, Robert Perneczky1,3,5,6,15,16 and Boris-Stephan Rauchmann1,5,6,16



1 Department of Neuroradiology, LMU University Hospital, Ludwig Maximilian University of Munich, Germany
2 Department of Nuclear Medicine, University Hospital, Ludwig Maximilian University of Munich, Germany
3 Munich Cluster for Systems Neurology, Munich, Germany
4 Institute for Stroke and Dementia Research, University Hospital, Ludwig Maximilian University of Munich, Germany
5 Department of Psychiatry and Psychotherapy, LMU University Hospital, Ludwig Maximilian University of Munich, Germany
6 German Center for Neurodegenerative Diseases, Munich, Germany
7 Biomedical Center, Faculty of Medicine, Ludwig Maximilian University of Munich, Germany
8 Department of Psychiatry, Psychotherapy, and Psychosomatics, University of Augsburg, Germany
9 Department of Radiology, LMU University Hospital, Ludwig Maximilian University of Munich, Germany
10 Department of Neurology, University Hospital, Ludwig Maximilian University of Munich, Germany
11 Department of Psychiatry and Psychotherapy, University of Regensburg, Germany
12 Department of Nuclear Medicine, University of Bern, Inselspital, Bern, Switzerland
13 Department of Psychiatry and Psychotherapy, Rechts der Isar Hospital, Technical University of Munich, Germany
14 Department of Neurology, Hannover Medical School, Germany
15 Ageing Epidemiology Research Unit, School of Public Health, Imperial College London, United Kingdom
16 Sheffield Institute for Translational Neuroscience, University of Sheffield, Sheffield


* Email: marco.oechsner@med.lmu.de



Introduction
Microglial activation is increasingly recognized as central to Alzheimer's disease spectrum (ADS) progression, potentially influencing or responding to pathological tau accumulation [1]. Recent evidence suggests microglial activation and tau pathology spread along highly interconnected brain regions, implying connectivity-driven propagation mechanisms [2]. Yet, the impact of changes in microglial activation on tau accumulation remains unclear. We aimed to determine: (a) longitudinal differences in microglial activation between ADS and healthy controls (HC), (b) relationships between changes in microglial activation and tau accumulation, and (c) how these changes affect functional-connectivity-based relationships between tau and microglial activation.
Methods
As part of the longitudinal ActiGliA prospective cohort study [3], we acquired [18F]GE-180 TSPO (microglia) PET, [18F]Flutemetamol (tau) PET, resting-state fMRI, and structural MRI in ADS (n=36; defined by the CSF Aβ42/Aβ40 ratio or an Aβ PET composite) and HC (n=20; with CDR=0 and no Aβ pathology) at baseline and 18-month follow-up (n=6 each). PET imaging was intensity-normalized to cerebellar gray matter, and SUVR values were extracted based on the Schaefer200 parcellation. fMRI preprocessing (fMRIPrep v1.2.1) was used to derive atlas-based, r-to-z-transformed functional connectivity matrices after filtering, smoothing, and confound regression. Group comparisons and correlations used Cohen's d, Mann-Whitney U tests, linear regression, and Spearman's ρ.
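For the connectivity step, a minimal sketch of computing an r-to-z-transformed FC matrix from parcel time series (toy data standing in for the preprocessed fMRIPrep outputs on the Schaefer200 parcellation):

import numpy as np

rng = np.random.default_rng(0)
n_vols, n_parcels = 300, 200                      # volumes, Schaefer200 parcels
ts = rng.normal(size=(n_vols, n_parcels))         # stand-in for parcel-mean BOLD
ts += 0.5 * rng.normal(size=(n_vols, 1))          # shared signal so FC is nonzero

r = np.corrcoef(ts.T)                             # 200 x 200 Pearson correlations
np.fill_diagonal(r, 0.0)                          # exclude self-connections before z-transform
z = np.arctanh(r)                                 # Fisher r-to-z
print("mean off-diagonal FC (z):", round(z[np.triu_indices(n_parcels, 1)].mean(), 3))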
Results
Baseline TSPO was lower (d=-1.05, p<0.01) and tau higher (d=1.69, p<0.01) in ADS vs. HC. TSPO strongly correlated with tau levels in both groups (ADS:ρ=0.69, HC:ρ=0.86, p<0.01). Over 18 months, TSPO SUVRs increased significantly more in ADS compared to HC (d=2.61, p<0.01). Increased TSPO ratios (β=-1.4, ρ=-0.14, p=0.03), and ADS-HC TSPO ratio difference (β=-1.43, ρ=-0.25, p<0.01) correlated negatively with tau levels in ADS, while in HC only with HC-ADS ratio differences (β=0.47, p<0.01, ρ=0.13). In ADS, high TSPO-change regions showed significant negative connectivity correlations with tau (β=-2.36, ρ=-0.50, p<0.01), while high Tau regions showed only a weak connectivity based association with TSPO ratios, relationships absent in HC.
Discussion
Our findings indicate a longitudinal increase in microglial activation in ADS, despite initially lower activation compared to HC. Higher baseline microglial activation correlated with tau accumulation, particularly in regions differentiating ADS from HC. However, tau levels negatively correlated with longitudinal TSPO changes, suggesting limited further microglial activation in regions already exhibiting elevated baseline activation. Although TSPO ratio changes varied across individuals, group-level connectivity relationships between regions with high TSPO changes and tau support a connectivity-mediated propagation of tau pathology modulated by microglial activation.



Acknowledgements
This study was supported by the German Center for Neurodegenerative Disorders (Deutsches Zentrum für Neurodegenerative Erkrankungen), Hirnliga (Manfred-Strohscheer Stiftung), and the German Research Foundation (Deutsche Forschungsgemeinschaft) under Germany's Excellence Strategy within the framework of the Munich Cluster for Systems Neurology (EXC 2145 SyNergy, ID 390857198).
References
[1] Fan, Z., Brooks, D. J., Okello, A., & Edison, P. (2017). An early and late peak in microglial activation in Alzheimer's disease trajectory. Brain, 140(3), 792–803.
[2] Pascoal, T. A., Benedet, A. L., Ashton, N. J., et al. (2021). Microglial activation and tau propagate jointly across Braak stages. Nature Medicine, 27(9), 1592–1599.
[3] Rauchmann, B.-S., Brendel, M., Franzmeier, N., et al. (2022). Microglial activation and connectivity in Alzheimer disease and aging. Annals of Neurology, 92(5), 768–781.


Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

20:10 CEST

Party
Tuesday July 8, 2025 20:10 - 22:00 CEST
Tuesday July 8, 2025 20:10 - 22:00 CEST
TBA
 
Wednesday, July 9
 

08:30 CEST

Registration
Wednesday July 9, 2025 08:30 - 19:00 CEST
Wednesday July 9, 2025 08:30 - 19:00 CEST

09:00 CEST

09:00 CEST

BRAIN 2.0: Emerging Research Topics in NeuroAI
Wednesday July 9, 2025 09:00 - 17:30 CEST
One of the goals of the NIH BRAIN (Brain Research Through Advancing Innovative Neurotechnologies) Initiative is to develop theories, models, and methods to understand brain functions and their causal links to behaviors. Modern advances in neuroscience and AI have generated growing interest in NeuroAI, as witnessed by the feedback from the recent NIH BRAIN NeuroAI workshop (Nov. 12-13, 2024). Briefly, NeuroAI aims, first, to use AI to understand and improve the brain and behaviors, and second, to develop brain-inspired AI systems with more robust, faster, and more efficient operation and performance. Motivated by this new wave of developments in NeuroAI, this workshop invites leading experts and new investigators from various research backgrounds to discuss emerging research topics. The goal of this full-day workshop, under the name BRAIN 2.0 (BRidging AI and Neuroscience), is to focus on building the bridge between AI and neuroscience, to discuss new research directions and outstanding questions, and to foster team collaborations and open science. Research topics of interest include, but are not limited to, neural transformers and foundation models, new neural network architectures, distributional and meta reinforcement learning, structural reasoning and inference, large language models (LLMs), and digital twin brains. The format of the workshop will consist of both overview and research-oriented lecture presentations as well as panel discussions.
Speakers

Zhe Sage Chen

New York University
Wednesday July 9, 2025 09:00 - 17:30 CEST
Onice Room

09:00 CEST

Brain Digital Twins: from Multiscale Modeling to Precision Medicine
Wednesday July 9, 2025 09:00 - 17:30 CEST
This workshop will explore how brain digital twins are revolutionizing research into pathological brain conditions and transforming the landscape of precision medicine. Participants will learn how these models work and how they integrate data and tools from different fields, such as molecular neuroscience, network theory and dynamical systems. We will discuss how digital twins can help identify early biomarkers able to characterize pathological states and predict disease progression. Another key topic will be the use of digital twins as in silico environments for testing potential treatments before applying them in clinical scenarios.
Through real-world examples and interactive sessions, we will tackle some of the challenges that come with this innovative approach, such as achieving anatomical precision, handling large datasets, and ensuring ethical use in patient care. The focus will remain on making these cutting-edge tools accessible and impactful, not just for researchers but also for clinicians aiming to deliver more effective, tailored care to their patients.
Speakers

Lorenzo Gaetano Amato

PhD Student, Sant'Anna School of Advanced Study
Wednesday July 9, 2025 09:00 - 17:30 CEST
Belvedere room

09:00 CEST

Brains and AI
Wednesday July 9, 2025 09:00 - 17:30 CEST
Full workshop program

Schedule
9:00-9:30 Fleur Zeldenrust
Heterogeneity, non-linearity and dimensionality: how neuron and network properties shape computation
9:30-10:00 Vassilis Cutsuridis
Synapse strengthening in bistratified cells leads to super memory retrieval in the hippocampus
10:00-10:30 Spyridon Chavlis
Dendrites as nature's blueprint for a more efficient AI
10:30-11:00 Coffee Break
11:00-11:30 Andreas Tolias

Foundation models and digital twins of the brain (online)
11:30-12:00 Robert Legenstein
Spatio-Temporal Processing with Dynamics-enhanced Spiking Neural Networks
12:00-12:30 Max Garagnani
Concept superposition and learning in standard and brain-constrained deep neural networks
12:30-14:00 Lunch
14:00-14:30 Martin Trefzer

Motifs, Modules, and Mutations: Building Brain-like Networks
14:30-15:00 Julian Göltz
From biology to silicon substrates: neural computation with physics
15:00-15:30 Maxim Bazhenov
Do Neural Networks Dream of Electric Sheep?
15:30-16:00  Coffee Break
16:00-16:30 Dhireesha Kudithipudi
Temporal Chunking Enhances Recognition of Implicit Sequential Patterns
16:30-17:00 Thomas Nowotny
Auto-adjoint method for gradient descent in spiking neural networks
17:00-18:00 Questions and Debate
18:00 End of Workshop

Speakers

Vassilis Cutsuridis

Associate Professor, University of Plymouth
Wednesday July 9, 2025 09:00 - 17:30 CEST
Room 4

09:00 CEST

Modeling extracellular potentials: principles, methods, and applications
Wednesday July 9, 2025 09:00 - 17:30 CEST
Please visit the dedicated website for full details: https://nicolomeneghetti.github.io/ECP_CNS2025_Wshop/

Simulating large-scale neural activity is essential for understanding brain dynamics and linking in silico models to experimentally measurable signals like LFP, EEG, and MEG. These simulations, ranging from detailed biophysical models to simplified proxies, bridge microscale neural dynamics with meso- and macro-scale recordings, offering powerful tools to interpret data, refine analyses, and explore brain function. Recent advances have demonstrated the clinical and theoretical value of such models, shedding light on oscillations, excitation-inhibition balance, and biomarkers of neurological disorders like epilepsy, Alzheimer's, and Parkinson's disease. This workshop will cover the latest methodologies, hybrid modeling approaches, and applications of brain signal simulations. By gathering experts across disciplines, it aims to foster collaboration and advance our understanding of brain function and dysfunction.


09:15 – 09:50
Dominik Peter Koller, Berlin Institute of Health (BIH) at Charité – Universitätsmedizin Berlin, Berlin, Germany
Title: "How structural connectivity directs cortical traveling waves and shapes frequency gradients"

09:50 – 10:25
Gaute T. Einevoll, Department of Physics, University of Oslo, Oslo, Norway
Title: "Modeling electric brain signals and stimulation"

10:30 – 11:00
Coffee Break

11:00 – 11:35
Johanna Senk, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
Title: "Large-scale modeling of mesoscopic networks at single-neuron resolution"

11:35 – 12:10
Pablo Martínez Cañada, Research Centre for Information and Communications Technologies (CITIC), University of Granada, Granada
Title: "Inverse Modelling of Field Potentials from Simulations of Spiking Network Models: Applications in Neuroscience Research and Clinical Settings"

12:10 – 12:40
Nicolò Meneghetti, The Biorobotics Institute, Sant’Anna School of Advanced Studies, Pisa, Italy
Title: "From microcircuits to mesoscopic signals: a kernel approach to efficient and interpretable LFP estimation"

12:45 - 14.00
Lunch Break

14:15 – 14:50
Emily Patricia Stephen, Department of Math and Statistics, Boston University, Boston, MA, United States of America
Title: "Connecting biophysical models to empirical power spectra using Filtered Point Processes"

14:50 - 15:25
Madeleine Lowery, School of Electrical and Electronic Engineering, University College Dublin, Dublin, Ireland
Title: "Modelling Neural Activity During Adaptive Deep Brain Stimulation for Parkinson’s Disease"

15:30 - 16:00
Coffee Break 

16:00 – 16:35
Meysam Hashemi, Aix Marseille University INSERM, INS, Institute for Systems Neuroscience, Marseille, France
Title: "Principles and Operation of Virtual Brain Twins"

16:35 - 17:10
Katharina Duecker, Brown University and University of Birmingham,  USA/UK
Title: "The Human Neocortical Neurosolver as an interactive modeling tool to study the multi-scale mechanisms of human EEG/MEG signals"
Speakers

Nicolò Meneghetti

Post-doctoral fellow, The Biorobotics Institute, Scuola Superiore Sant'Anna Pisa
My name is Nicolò Meneghetti, and I am a postdoctoral fellow at the Computational Neuroengineering Laboratory of the Sant'Anna School of Advanced Studies (Pisa, Italy). My research focuses on computational models of visual processing, as well as the modeling and analysis of extracellular...
Wednesday July 9, 2025 09:00 - 17:30 CEST
Room 5

09:00 CEST

NEW VISTAS IN MULTISCALE BRAIN MODELLING AND APPLICATIONS
Wednesday July 9, 2025 09:00 - 17:30 CEST
Speakers

Rosanna Migliore

Researcher, Istituto di Biofisica - CNR
Computational Neuroscience. EBRAINS-Italy Research Infrastructure for Neuroscience: https://ebrains-italy.eu/

Paolo Massobrio

Associate Professor, Univeristy of Genova
My research activities are in the field of neuroengineering and computational neuroscience, including both experimental and theoretical aspects. Currently, I am coordinating a research group (1 assistant professor, 2 post-docs, and 5 PhD students) working on the interplay between...
Wednesday July 9, 2025 09:00 - 17:30 CEST
Hall 3B

09:00 CEST

Theoretical and experimental approaches towards understanding brain state transitions
Wednesday July 9, 2025 09:00 - 17:30 CEST
Speakers

Andre Peterson

The University of Melbourne
Wednesday July 9, 2025 09:00 - 17:30 CEST
Room 6

09:00 CEST

Understanding the Computational Logic of Predictive Processing: A 25-year Perspective
Wednesday July 9, 2025 09:00 - 17:30 CEST
Aims and topic
Predictive processes are ubiquitous in the brain and thought to be critical for adaptive behaviours, such as rapid learning and generalisation of tasks and rules. Early works such as the computational vision model proposed by Rao and Ballard (1999) have inspired over two decades of theoretical, computational, and experimental research about predictive neural processing. Stemming from these early works, ongoing investigations provide a rich ecosystem of theory, experiments and computational models that expand beyond the notion of predictive coding. Further, thanks to rapidly developing neural recording technologies, large datasets at multiple scales of granularity and resolution are becoming increasingly available. New computational models enable us to gain a mechanistic understanding of how neural circuits learn to implement and deploy predictive computations. Yet, a full understanding of the underlying computational logic remains fleeting because different aspects are often studied in separate research programs (e.g., layer circuits vs whole-brain neuroimaging), with little cross-pollination. This symposium will look at predictive processing in the context of modern computational neuroscience. Speakers will discuss new theories extrapolating low-dimensional population activity, recent work exploring efficient coding in artificial neural networks and rats' visual cortex, coding hierarchies of prediction errors across brain areas, and computational modelling of behaviour and neural data across species (humans, monkeys, rodents), focusing on high-level, flexible behaviours (hierarchical reasoning, context changes, conceptual knowledge). The topic addressed in this symposium is central to multiple streams of research in computational neuroscience, e.g., perception, decision-making, motor control, and social behaviour. Our aspiration is to stimulate interaction among researchers working in different disciplines and highlight the open questions that will shape future research.

Speakers
Matthias Tsai -- Bern University, Switzerland
Rohan Rao --Newcastle University / Oxford University, UK
Erin Rich --New York University, USA
Silvia Maggi --University of Nottingham, UK
Armin Lak --Oxford University, UK
Abhishek Banerjee --Oxford University / Queen Mary University of London, UK
Aurelio Cortese --ATR Institute International, Japan

Schedule
9.00 - 9.05: opening remarks
9.05 - 9.45: Erin Rich
9.45 - 10.15: Rohan Rao
10.15 - 10.30: coffee break
10.30 - 11.00: Matthias Tsai
11.00 - 11.45: Aurelio Cortese
11.45 - 12.00: discussion
12.00 - 14.00: Lunch
14.00 - 14.45: Abhishek Banerjee
14.45 - 15.30: Silvia Maggi
15.30 - 15.45: coffee break
15.45 - 16.30: Armin Lak
16.30 - 17.00: discussion
17.00 - 17.05: closing remarks
Speakers

Aurelio Cortese

Group leader, ATR Institute International
Aurelio is a group leader at the ATR Institute International in Kyoto, Japan. Aurelio's group is interested in understanding behavioural, computational and neural mechanisms of adaptive decision-making and learning, with an emphasis on metacognition and abstraction. In addition, the...

Abhishek Banerjee

Professor of Neuroscience, Department of Pharmacology, University of Oxford and Blizard Institute, Queen Mary University of London
Abhi is a Professor of Neuroscience at Barts and Queen Mary University of London and a PI and Wellcome Investigator at the University of Oxford, UK. Abhi's lab is interested in studying neural circuit mechanisms underlying the flexibility of decision-making and how circuit dysfunctions...
Wednesday July 9, 2025 09:00 - 17:30 CEST
Room 9

10:30 CEST

Coffee break
Wednesday July 9, 2025 10:30 - 11:00 CEST
Wednesday July 9, 2025 10:30 - 11:00 CEST

10:40 CEST

12:30 CEST

Lunch break
Wednesday July 9, 2025 12:30 - 14:00 CEST
Wednesday July 9, 2025 12:30 - 14:00 CEST

14:00 CEST

14:00 CEST

15:30 CEST

Coffee break
Wednesday July 9, 2025 15:30 - 16:00 CEST
Wednesday July 9, 2025 15:30 - 16:00 CEST

16:00 CEST

 