Tuesday, July 8
 

08:30 CEST

Registration
Tuesday July 8, 2025 08:30 - 19:00 CEST

09:00 CEST

Brain-Inspired Computing
Tuesday July 8, 2025 09:00 - 12:30 CEST
Brain-inspired computing seeks to mimic how the human brain works in order to improve artificial intelligence (AI) systems. The area has attracted growing interest because it promises more robust and efficient AI models while tackling challenges faced by current artificial neural networks.

This workshop will cover a range of topics, including biological neural networks, cognitive computing, and biologically-inspired algorithms. We will discuss how learning from the brain's structure and operations can lead to new solutions for complex issues in AI, machine learning, and data processing.

The workshop will include talks from experts in the field and interactive panel discussions. Participants will have the chance to collaborate, share ideas, and connect with others who are excited about using biological principles to advance technology.

Full program in this link.

Schedule
9:00 AM - 9:30 AM Speaker: Rui Ponte Costa, University of Oxford
A theory of self-supervised learning in cortical layers
9:30 AM - 10:00 AM Speaker: Guillaume Bellec, Vienna University of Technology
Validating biological mechanisms in deep brain models with optogenetic perturbation testing
10:00 AM - 10:30 AM Speaker: Guozhang Chen, Peking University
Characteristic differences between computationally relevant features of cortical microcircuits and artificial neural networks
10:30 AM - 11:00 AM Coffee Break
11:00 AM - 11:30 AM Speaker: Robert Legenstein, Graz University of Technology
Rapid learning with phase-change memory-based neuromorphic hardware through learning-to-learn
11:30 AM - 12:00 PM Speaker: Shogo Ohmae, Chinese Institute for Brain Research
World-model-based versatile computations in the neocortex and the cerebellum
12:00 PM - 12:30 PM Speaker: Yuliang Zang, Tianjin University
Biological strategies for efficient learning in cerebellum-like circuits
12:30 End of Workshop and Lunch Break

Speakers
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 5

09:00 CEST

Enabling synaptic plasticity, structural plasticity, and multi-scale modeling with morphologically detailed neurons using Arbor
Tuesday July 8, 2025 09:00 - 12:30 CEST
Current computational neuroscience studies are often limited to a single scale or simulator, with many still relying on standalone simulation code due to computational power and technology constraints. Simulations incorporating biophysical properties and neural morphology typically focus on single neurons or small networks, while large-scale neural network simulations often resort to point neurons as a compromise to incorporate plasticity and cell diversity. Whole-brain simulations, on the other hand, frequently sacrifice details at the individual neuron and network composition levels.
This workshop introduces recent advances leveraging the next-generation simulator Arbor, designed to overcome these challenges. Arbor enables seamless conversion from the widely used NEURON simulator, facilitates the study of functional and structural plasticity in large neural networks with detailed morphology, and supports multi-scale modeling through co-simulation, integrating microscopic and macroscopic levels of simulation.
Arbor is a library optimized for efficient, scalable neural simulations by utilizing both GPU and CPU resources. It supports the simulation of both individual neurons and large-scale networks while maintaining detailed biophysical properties and morphological complexity. The workshop will feature presentations covering key aspects:

Effortless Transition from NEURON to Arbor - Dr. Beatriz Herrera - Allen Brain Institute, USA
Introducing the SONATA format, which simplifies the migration process and enables cross-simulator validation, ensuring a smooth transition to Arbor for researchers familiar with NEURON.

Structural Plasticity Simulations - Marvin Kaster & Prof. Felix Wolf - TU Darmstadt, Germany 
Presenting ReLEARN and Arbor’s capabilities in modeling distance-dependent structural plasticity, providing insights into structural changes.

Synaptic Plasticity -  Dr. Jannik Luboeinski - University of Göttingen, Germany
Showcasing Arbor’s capabilities in modeling calcium-based functional plasticity.

Multi-Scale Co-Simulation with TVB -  Prof. Thanos Manos - CY Cergy-Paris University, France
Demonstrating Arbor’s co-simulation with The Virtual Brain (TVB) platform, illustrating the study of epilepsy propagation as an example of multi-scale modeling.

The workshop will conclude with an interactive coding session, offering participants hands-on experience with Arbor and an opportunity to apply the presented concepts.
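As a flavor of what the hands-on session covers, here is a minimal single-cell sketch in the style of Arbor's documented Python API (argument names and order vary slightly between Arbor releases, so treat this as an illustration rather than the session's actual material):

```python
import arbor

# Morphology: a single cable segment standing in for a soma (radius 3 um).
tree = arbor.segment_tree()
tree.append(arbor.mnpos, arbor.mpoint(-3, 0, 0, 3), arbor.mpoint(3, 0, 0, 3), tag=1)

# Regions and locations are written in Arbor's s-expression label dialect.
labels = arbor.label_dict({"soma": "(tag 1)", "midpoint": "(location 0 0.5)"})

# Paint HH dynamics on the soma; place a brief current clamp and a spike detector.
decor = (
    arbor.decor()
    .set_property(Vm=-40)
    .paint('"soma"', arbor.density("hh"))
    .place('"midpoint"', arbor.iclamp(10, 2, 0.8), "iclamp")
    .place('"midpoint"', arbor.threshold_detector(-10), "detector")
)

cell = arbor.cable_cell(tree, decor, labels)

# The single-cell convenience model handles simulation setup and probing.
model = arbor.single_cell_model(cell)
model.probe("voltage", '"midpoint"', frequency=10)  # sampling frequency (kHz in recent versions)
model.run(tfinal=30)
print("spike times (ms):", model.spikes)
```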
Speakers
Han Lu

postdoc, Forschungszentrum Jülich
Tuesday July 8, 2025 09:00 - 12:30 CEST
Belvedere room

09:00 CEST

Inference Methods for Neuronal Models: from Network Activity to Cognition
Tuesday July 8, 2025 09:00 - 12:30 CEST
Models of neuronal systems have matured in recent years and exhibit increasing complexity, thanks to the computing resources available for simulation. In parallel, the increasing availability of data poses the challenge of quantitatively relating these models to data, going beyond the reproduction of qualitative activity patterns and behavior. Model inference is thus becoming an indispensable tool for unraveling the mechanisms underlying brain dynamics, behavior, and (dys)function. A critical aspect of this endeavor is the ability to infer changes across multiple scales, from neurotransmitters and synaptic interactions to neural circuits and whole-brain networks. Recent approaches adopted by the neuroscience community include methods for directed effective connectivity (e.g. dynamical causal modeling), simulation-based inference on whole-brain models, and active inference for understanding perception, action and behavior. They have significantly enhanced our ability to interpret data by modeling underlying mechanisms and neuronal processes. This workshop will bring together experts from diverse fields to explore state-of-the-art methodologies, taking specific applications as examples to compare them and highlight remaining challenges.
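The simulation-based inference workflow mentioned above typically follows a simulate-train-sample loop. A toy sketch using the open-source `sbi` package is shown below; the two-parameter simulator is a made-up stand-in, not any of the whole-brain models discussed in the workshop:

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Prior over two hypothetical model parameters (e.g., coupling and noise level).
prior = BoxUniform(low=torch.tensor([0.0, 0.0]), high=torch.tensor([1.0, 5.0]))

def simulator(theta):
    # Toy stand-in for a mechanistic model: returns two summary statistics.
    coupling, noise = theta
    activity = coupling * torch.randn(100) + noise
    return torch.stack([activity.mean(), activity.std()])

theta = prior.sample((1000,))
x = torch.stack([simulator(t) for t in theta])

# Train a neural posterior estimator, then sample the posterior for an observation.
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)
samples = posterior.sample((500,), x=torch.tensor([0.5, 1.2]))
```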

Speakers
Matthieu Gilson

chair of junior professor, Aix-Marseille University
Meysam Hashemi

Research Fellow
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 103

09:00 CEST

Mechanisms for Oscillatory Neural Synchrony
Tuesday July 8, 2025 09:00 - 12:30 CEST
https://www.medschool.lsuhsc.edu/cell_biology/cns_2025.aspx

CNS*2025 in Florence, Italy, on July 8, 2025, from 9:00 to 12:30.

This workshop will bring together researchers who have recently published on synchronization in networks of coupled oscillators, with a mix of approaches but an emphasis on phase response curve (PRC) theory. The researchers come from both theoretical and experimental backgrounds. Topics include synchronization mechanisms for theta-nested gamma in the medial entorhinal cortex, mean-field pulsatile coupling methods for fast oscillations in inhibitory networks, beta oscillations in the parkinsonian basal ganglia, the relative contributions of synaptic and ultra-fast non-synaptic ephaptic coupling to the inhibition of cerebellar Purkinje cells by basket cells, the infinitesimal macroscopic PRC (imPRC) within exact mean-field theory applied to ING and PING, and robustness in a neuromechanical model of motor pattern generation.
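For orientation, the weak-coupling phase reduction that underlies PRC analysis can be written in a generic textbook form (background notation only, not any speaker's specific model):

```latex
\dot{\theta}_i = \omega_i + Z(\theta_i) \sum_j g_{ij}\, s_j(t)
```

where Z(θ) is the infinitesimal PRC of oscillator i and s_j(t) its synaptic input; for pulsatile coupling, an input arriving at phase φ advances or delays the next spike according to the measured PRC, φ ↦ φ + Δ(φ).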
Carmen Canavier,  LSU Health Sciences Center New Orleans:  “A Mean Field Theory for Pulse-Coupled Oscillators based on the Spike Time Response Curve”
Joshua A Goldberg, Hebrew University of Jerusalem:  “Empirical study of dendritic integration and entrainment of basal ganglia pacemakers using phase response curves”
Dimitri M Kullmann, University College London: “Basket to Purkinje Cell Inhibitory Ephaptic Coupling Is Abolished in Episodic Ataxia Type 1”
Hermann Riecke, Northwestern University: “Paradoxical phase response of gamma rhythms facilitates their entrainment in heterogeneous networks”
Yangyang Wang, Brandeis University: “Variational and phase response analysis for limit cycles with hard boundaries, with applications to neuromechanical control problems”
Brandon Williams, Boston University: “Fast spiking interneurons generate high frequency gamma oscillations in the medial entorhinal cortex”
Speakers
Carmen Canavier

Mullins Professor and Department Head, LSU Health Sciences Center NO
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 6

09:00 CEST

Multiscale Modeling of Electromagnetic Field Perturbations on Neural Activity
Tuesday July 8, 2025 09:00 - 12:30 CEST
Speakers
Alberto Arturo Vergani

Research Fellow, University of Pavia
Tuesday July 8, 2025 09:00 - 12:30 CEST
Hall 3B

09:00 CEST

Neuromodulation, sleep-dependent brain dynamics and information processing
Tuesday July 8, 2025 09:00 - 12:30 CEST
Room 9

09:00 CEST

Cross-species modeling of brain structure and dynamics
Tuesday July 8, 2025 09:00 - 13:00 CEST
Speakers
James Pang

Research Fellow, Monash University
Tuesday July 8, 2025 09:00 - 13:00 CEST
Auditorium

09:00 CEST

Advancing Mathematical Methods in Neuroscience Data Analysis
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
Brief Description: With the ever-increasing amount of data acquired in neuroscience applications, there is an essential need to develop computationally effective, robust, and interpretable data processing algorithms. Recent advancements in graph inference, topology, information theory and deep learning have shown promising results in analyzing biological/physiological data, as well as datasets acquired by intelligent agents. Combining elements from the different disciplines of information theory, mathematics, and machine learning is paramount for developing the next generation of methods that will facilitate big data analysis in the service of better understanding brain dynamics, as well as neuroinspired system dynamics in general. The goal of the workshop is to bring together researchers working in data science, neuroscience, mathematics, and machine learning to discuss challenges posed by analyzing multimodal data sets in neuroscience along with potential solutions, to exchange ideas, and to present their latest work in designing and analyzing effective data processing algorithms. The workshop will serve as an opportunity to discuss innovative future directions for neuroinspired processing of large amounts of data, while considering novel mathematical data models and computationally efficient learning algorithms.

Schedule:

9:00 - 9:40: Kathryn Hess, EPFL, Topological perspectives on the connectome
Abstract: 
Over the past decade or so, tools from algebraic topology have been shown to be very useful for the analysis and characterization of networks, in particular for exploring the relation of structure to function. I will describe some of these tools and illustrate their utility in neuroscience, primarily in the framework of a collaboration with the Blue Brain Project.

9:45 - 10:25: Moo Kyung Chung, University of Wisconsin, Topological Embedding of Dynamically Changing Brain Networks
Abstract:
We introduce a novel topological framework for embedding time-varying brain networks into a low-dimensional space. Our Topological Embedding captures the evolving structure of functional connectivity by mapping dynamic birth and death values of topological features (connected components and cycles) into a 2D plane. Unlike traditional analyses that rely on synchronized time-points or direct comparisons of network matrices, our method aligns the dynamic behavior of brain networks through their underlying topological features, thus offering invariance to temporal misalignments and inter-subject variability. Using resting-state functional magnetic resonance images (rs-fMRI), we demonstrate that the topological embedding reveals stable 0D homological structures and fluctuating 1D cycles across time, which are further analyzed in the frequency domain through the Fourier Transform. The resulting topological spectrograms exhibit strong associations with age and cognitive traits, including fluid intelligence. This study establishes a robust and interpretable topological representation for the analysis of dynamically changing brain networks, with broad applicability in neuroscience and neuroimaging-based biomarker discovery. The talk is based on arXiv:2502.05814
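To make the notion of birth/death values concrete, the following sketch computes persistence pairs for connected components (H0) and cycles (H1) from a correlation matrix using the open-source `gudhi` package; it is illustrative only and not the speaker's implementation:

```python
import numpy as np
import gudhi

def persistence_pairs(corr):
    """Birth/death pairs of H0 and H1 features via a Rips filtration."""
    # A common correlation-to-distance transform (one of several possible choices).
    dist = np.sqrt(np.maximum(0.0, 1.0 - corr))
    rips = gudhi.RipsComplex(distance_matrix=dist)
    simplex_tree = rips.create_simplex_tree(max_dimension=2)
    return simplex_tree.persistence()  # list of (dimension, (birth, death))

# Toy "rs-fMRI" data: 20 regions, 200 time points.
data = np.random.default_rng(0).standard_normal((20, 200))
for dim, (birth, death) in persistence_pairs(np.corrcoef(data)):
    print(f"H{dim}: born {birth:.3f}, dies {death:.3f}")
```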

10:30 - 11:00: Coffee Break

11:00 - 11:40: Anna Korzeniewska, Johns Hopkins University, From causal interactions among neural networks to significance in imaging brain tumor metabolism.

Abstract: Neural activity propagates swiftly across brain networks, often not providing enough data points to model its dynamics. This limitation can be overcome by using multiple realizations, or repetitions, of the same process. However, once repetitions have been consumed for modeling, or only one is available, the significance of the neural dynamics cannot be assessed using traditional statistical methods. We propose a new method for assessing statistical confidence using the variance of a smooth estimator and a criterion for the choice of smoothing ratio. We show their applications to event-related neural propagations among eloquent and epileptogenic networks, and to metabolite kinetics in hyperpolarized 13C MRI (hpMRI) of brain tumors. The event-related causality (ERC) method, a multichannel extension of the Granger causality concept, was applied to multi-channel EEG recordings to estimate the direction, intensity, and spectral content of direct causal interactions among brain networks. A two-dimensional (2D) moving average, with a rectangular smoothing window sliding over points in the time-frequency plane, provided the smooth estimator and its error for statistical testing. The smoothing size of the 2D moving average was determined by the W-criterion, which combines the difference between the smooth estimator and the real values with the confidence interval. The same approach was applied to 2D images of hpMRI of pyruvate metabolism in malignant glioma. A newly developed bivariate smoothing model ensured precise embedding of ERC's statistical significance in time-frequency space, revealing complex frequency-dependent dynamics of causal interactions. The strength and pattern of neural propagations among eloquent networks reflected stimulus modality, lexical status, and syllable position in a sequence, uncovering mechanisms of speech control and modulation. The strength and pattern of high-frequency interactions among epileptogenic networks identified seizure onset zones and unveiled propagations preceding seizure onset. Statistical confidence of the difference between the metabolic responses of tumor and normal tissue, obtained through hpMRI, allowed tumor delineation. The moving average provides an efficient smooth estimator and its error (optimal for reducing random noise while retaining a sharp step response) and ensures precise embedding of statistical significance in two-dimensional space. The new approach overcomes several limitations of the previously used 2D spline interpolation (restraint to a mesh of knots, which introduced artifactual distributions of variance and significance, and failure to converge in some cases), while the W-criterion provides an efficient choice of smoothing size. The new technique has broad applicability to neuroscientific research and clinical applications, including planning for epilepsy surgery, localizing anatomical targets for responsive neuromodulation, and gauging tumor treatment response.
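The core smoothing step described here, a rectangular 2D moving average over the time-frequency plane, is straightforward to sketch (window sizes below are placeholders; the abstract's W-criterion for choosing them is not implemented):

```python
import numpy as np
from scipy.ndimage import uniform_filter

tf_values = np.random.randn(128, 64)  # toy time-frequency map (time x frequency)

# Rectangular 2D moving average: 9 time bins x 5 frequency bins.
smooth = uniform_filter(tf_values, size=(9, 5), mode="nearest")

# Local residual variance around the smooth estimator, usable for
# pointwise confidence bounds on the smoothed map.
residual_var = uniform_filter((tf_values - smooth) ** 2, size=(9, 5), mode="nearest")
```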

11:45 - 12:25: Vasileios Maroulas, University of Tennessee Knoxville, The Shape of Uncertainty.

Abstract: How does the brain know where it is and where it is going? Deep within our neural circuits, specialized cells, like head direction and grid cells, fire in intricate patterns to guide spatial awareness and navigation. But decoding these patterns requires tools that can keep up with the brain's complexity. In this talk, I will share how we are using topological deep learning to do just that. Our new models tap into higher-dimensional structures to predict direction and position, without relying on hand-crafted similarity measures. But that is just the beginning. I will also introduce a Bayesian framework for learning on graphs using sheaf theory, where uncertainty is not a bug but a feature. By placing probability distributions on the rotation group and learning them through the network, we gain robustness, flexibility, and accuracy, especially when data is scarce. Together, these advances point to a bold new direction: using geometry and topology to unlock the brain's code and reshape how we learn from complex data.





Speakers
Vasileios Maroulas

Professor of Mathematics, University of Tennessee Knoxville
topological machine learning, Bayesian computational statistics, manifold learning
Dave Boothe

Neuroscientist, Army Research Laboratory
Ioannis Schizas

Research Engineer, Army Research Lab
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
Hall 2B

09:00 CEST

Linking structure, dynamics, and function in neuronal networks: old challenges and new directions
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
The full program including abstracts can be found here

https://sites.google.com/view/cns2025workshop-strudynfun/

We are looking forward to seeing you at the workshop.

Wilhelm Braun, Kayson Fakhar and Claus C. Hilgetag
Speakers
Wilhelm Braun

Junior Research Group leader, CAU Kiel, Department of Electrical and Information Engineering
Claus C Hilgetag

Professor, University Medical Center Eppendorf Hamburg, Germany
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 12:30 CEST
Room 101

09:00 CEST

Computational strategies in epilepsy modelling and seizure control
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Epilepsy remains a complex neurological condition, necessitating innovative approaches to understanding and mitigating seizure activity. This workshop is designed to bring together computational neuroscientists and researchers with experimental and clinical backgrounds to explore cutting-edge strategies in epilepsy modeling and seizure control. For the general content structure, we plan to start from a modeler's perspective and then progressively move towards more data-driven approaches.

The first session will explore seizure mechanisms through biophysical and neural mass models at different temporal and spatial scales, investigating, among others, ionic dynamics and network plasticity. It aims to understand seizure initiation, progression, and duration.

The second session will focus on the application of computational models to EEG data recorded in epileptic patients. First, it will discuss advanced parameter inference methods to tailor models to individual data samples to provide mechanistic insight. It then moves on to issues of seizure monitoring using wearable devices and long-term EEG recordings, and in particular the use of data features inspired by concepts derived from mathematical modeling in epilepsy.

The third session will examine stimulation-based strategies to terminate or prevent seizures. There will be a focus on recent advancements in closed-loop and low-frequency electrical stimulation to control seizures. On top of model-based approaches, this session will also include the clinical perspective on stimulation treatment and data-driven studies.
Speakers
Helmut Schmidt

Scientific researcher, Institute of Computer Science, Czech Academy of Sciences
Jaroslav Hlinka

Senior researcher, Institute of Computer Science of the Czech Academy of Sciences
Currently I am leading the COBRA working group and also serve as the Head of the Department of Complex Systems and as the Chair of the Council of the Institute of Computer Science of the Czech Academy of Sciences. Brief bio: After obtaining master's degrees in Psychology from Charles University (2005) and in Mathematics from Czech Technical University (2006), I went on the quest of applying mathematics in helping to understand the complex activity of human bra... Read More →
Guillaume Girier

Postdoc, INSTITUTE OF COMPUTER SCIENCE The Czech Academy of Sciences
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Room 202

09:00 CEST

Population activity: the influence of cell-class identity, synaptic dynamics, plasticity and adaptation
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Title: Population activity: the influence of cell-class identity, synaptic dynamics, plasticity and adaptation.

Organizers: 
Michele GIUGLIANO (co-organizer)
Università degli Studi di Modena e Reggio Emilia - Dipartimento di Scienze Biomediche, Metaboliche e Neuroscienze sede ex-Sc. Biomediche - Italy
michele.giugliano@unimore.it

Simona OLMI (co-organizer)
Institute for Complex Systems - National Research Council - Italy
simona.olmi@fi.isc.cnr.it

Alessandro TORCINI (co-organizer)
Laboratoire de Physique Théorique et Modélisation - CY Cergy Paris Université- Cergy-Pontoise, France
alessandro.torcini@cyu.fr

Abstract:
In recent years, tremendous progress has been made in understanding neural activity at the population level. On the one hand, this has been made possible by newly developed experimental methods (e.g., Neuropixels probes and large-scale imaging) that allow the simultaneous recording of the activity of tens to hundreds of thousands of neurons in awake, behaving mice, as well as by established dynamic-clamp protocols.
On the other hand, it has been driven by the development of highly refined mean-field models able to describe the population activity of spiking neural networks encompassing realistic biological features, from different forms of synaptic dynamics to plastic and adaptive aspects present at the neural level.
The aim of this workshop is to gather neuroscientists, mathematicians, engineers, and physicists all working on the characterization of population activity from different points of view, ranging from data analysis of experimental results to simulations of large ensembles of neurons, and from next-generation neural mass models to dynamical mean-field theories. The workshop will foster exchange and discussion of recent developments in this flourishing field.

Key Words: Neuropixels probes; neural mass models; Fokker-Planck formulation; dynamical mean field theory; short-term and long-term plasticity; excitatory and inhibitory balanced networks; spike frequency adaptation

Program:
July 8th -- Room 4
9:15-9:30 Opening
9:30-10:00 Anna Levina (University of Tübingen, Germany)
Talk title: "Balancing Excitation and Inhibition in connectivity and synaptic strength"
10:00-10:30 Giacomo Barzon (Padova Neuroscience Center, University of Padova, Italy)
Talk title: "Optimal control of neural activity in circuits with excitatory-inhibitory balance"

10:30-11:00 Coffee break

11:00-11:30 Eleonora Russo (Scuola Superiore Sant'Anna, The BioRobotics Institute, Italy)
Talk title: “Integration of rate and phase codes by hippocampal cell-assemblies supports flexible encoding of spatiotemporal context”
11:30-12:00 Tobias Kühn (University of Bern, Switzerland)
Talk title: "Discrete and continuous neuron models united in field theory: statistics, dynamics and computation"
12:00-12:30 Gianluigi Mongillo (Sorbonne Université, INSERM, CNRS, Institut de la Vision, F-75012 Paris, France)
Talk title: “Synaptic encoding of time in working memory”

July 9th -- Room Hall 1A
9:30 - 10:00 Magnus J.E. Richardson (Warwick Mathematics Institute, UK)
Talk title: "Spatiotemporal integration of stochastic synaptic drive within neurons and across networks"
10:00-10:30 Gianni Valerio Vinci (Istituto Superiore di Sanità, Rome, Italy)
Talk title: "Noise induced phase transition in cortical neural field: the role of finite-size fluctuations"

10:30-11:00 Coffee break

11:00-11:30 Simona Olmi (Institute for Complex Systems - National Research Council - Italy)
Talk title: “Relaxation oscillations in next-generation neural masses with spike-frequency adaptation”
11:30-12:00 Ferdinand Tixidre (CY Cergy Paris University, France)
Talk title: "Is the cortical dynamics ergodic? A numerical study in partially-symmetric networks of spiking neurons"
12:00-12:30 Letizia Allegra Mascaro (Neuroscience Institute, National Research Council, Italy)
Talk title: "State-Dependent Large-Scale Cortical Dynamics in Neurotypical and Autistic Mice"

12:30-14:00 Lunch break

14:00-14:30 Alessandro Torcini (CY Cergy Paris Université- Cergy-Pontoise, France)
Talk title : “Discrete synaptic events induce global oscillations in balanced neural networks"
14:30-15:00 Rainer Engelken (Columbia University, NY, United States)
Talk title:"Sparse Chaos in Cortical Circuits: Linking Single-Neuron Biophysics to Population Dynamics"
15:00-15:30 Tilo Schwalger (Technische Universität Berlin, Institut für Mathematik, Germany)
Talk title: "A low-dimensional neural-mass model for population activities capturing fluctuations, refractoriness and adaptation"

15:30-16:00 Coffee break

16:00-16:30 Giancarlo La Camera (Stony Brook University, NY, United States)
Talk title: “Prefrontal population activity during strategic behavior in context-dependent tasks”
16:30-17:00 Gorka Zamora-López (Universitat Pompeu Fabra, Barcelona, Spain)
Talk title: "Emergence and maintenance of modular hierarchy in neural networks driven by external stimuli"
17:00-17:30 Sacha van Albada (Research Center Juelich and University of Cologne, Germany)
Talk title: "Determinants of population activity in full-density spiking models of cerebral cortex"
Speakers
Alessandro TORCINI

Professor, CY Cergy Paris Université
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Room 4

09:00 CEST

Workshop on Methods of Information Theory in Computational Neuroscience
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Workshop website: https://kgatica.github.io/CNS2025-InfoTeory-W.io/

Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience. A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited. The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work. The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.
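As a concrete example of the kind of information-theoretic quantity this workshop revolves around, here is a minimal transfer entropy estimator for binary time series with history length 1 (purely illustrative; dedicated toolkits such as JIDT or IDTxl provide far more robust estimators):

```python
import numpy as np

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for binary series, history length 1:
    sum over p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    p = np.zeros((2, 2, 2))  # axes: y_{t+1}, y_t, x_t
    for y1, y0, x0 in zip(y[1:], y[:-1], x[:-1]):
        p[y1, y0, x0] += 1
    p /= p.sum()
    p_y0x0 = p.sum(axis=0, keepdims=True)     # p(y_t, x_t)
    p_y1y0 = p.sum(axis=2, keepdims=True)     # p(y_{t+1}, y_t)
    p_y0 = p.sum(axis=(0, 2), keepdims=True)  # p(y_t)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_term = np.log2(p * p_y0 / (p_y1y0 * p_y0x0))
    return float(np.nansum(p * log_term))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)  # y copies x with a one-step lag, so TE(X -> Y) is ~1 bit
print(transfer_entropy(x, y))
```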

This is the 20th iteration of this workshop at CNS -- join us to celebrate!
Speakers
Joseph T. Lizier

Associate Professor, Centre for Complex Systems, The University of Sydney
My research focusses on studying the dynamics of information processing in biological and bio-inspired complex systems and networks, using tools from information theory such as transfer entropy to reveal when and where in a complex system information is being stored, transferred and... Read More →
Abdullah Makkeh

Postdoc, University of Goettingen
My research is mainly driven by the aim of enhancing the capability of information theory in studying complex systems. Currently, I'm focusing on introducing novel approaches to recently established areas of information theory such as partial information decomposition (PID). My work... Read More →
Marilyn Gatica

Postdoctoral Research Assistant, Northeastern University London
Tuesday July 8, 2025 09:00 - Wednesday July 9, 2025 17:30 CEST
Room 203

10:30 CEST

Coffee break
Tuesday July 8, 2025 10:30 - 11:00 CEST

12:30 CEST

Lunch break
Tuesday July 8, 2025 12:30 - 14:00 CEST

14:00 CEST

Keynote #4: Maurizio Mattia
Tuesday July 8, 2025 14:00 - 15:20 CEST
Speakers
Tuesday July 8, 2025 14:00 - 15:20 CEST
Auditorium - Plenary Room

15:20 CEST

Conference photo
Tuesday July 8, 2025 15:20 - 15:30 CEST
TBA

15:30 CEST

Coffee break
Tuesday July 8, 2025 15:30 - 16:00 CEST

16:00 CEST

Member's meeting
Tuesday July 8, 2025 16:00 - 17:00 CEST
Auditorium - Plenary Room

17:00 CEST

Poster session 3
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P224: Four-compartment model of dopamine dynamics at the nigrostriatal synaptic site
Tuesday July 8, 2025 17:00 - 19:00 CEST
P224 Four-compartment model of dopamine dynamics at the nigrostriatal synaptic site

Alex G. O'Hare*1, 2, Catalina Vich1, 2, Jonathan E. Rubin3, 4, Timothy Verstynen3, 5

1Dept. de Matemàtiques i Informàtica, Universitat de les Illes Balears, Palma, Illes Balears, Spain
2Institute of Applied Computing and Community Code, Palma, Illes Balears, Spain
3Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
4Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
5Department of Psychology & Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America

*Email: alex-gwyn.o-hare@uib.cat
Introduction

The traditional model of dopamine (DA) dynamics [1] posits that the level of extrasynaptic (tonic) DA modulates the effect of the phasic burst firing that occurs in the event of a reward [2]. Tonic DA, although present in low concentrations, is assumed to be able to activate the synthesis- and release-modulating autoreceptors of the presynaptic DA neuron. Taking into account this traditional model, as well as recent findings demonstrating that tonic DA also affects both D1 and D2 postsynaptic receptors [3], we develop a biologically realistic yet computationally efficient 4-compartment model (see Fig. 1) of DA action at the synaptic site, to elucidate the impact of DA dynamics on receptor occupancy and tonic DA levels.
Methods
DA is synthesised in the terminal of the presynaptic substantia nigra pars compacta (SNc) neuron, DAter, and released into the synaptic cleft, DAsyn, at a rate dependent on DAter and the membrane voltage of the SNc neuron. From the synaptic cleft, DA binds to D1 or D2 receptors; the bound quantity constitutes the third compartment, DAocc. DAocc affects the excitability and plasticity of the postsynaptic spiny projection neuron (SPN), which receives inputs from a cortical neuron. DA is removed from DAsyn by reuptake into DAter and via diffusion to the extrasynaptic space, DAext. DA in DAsyn modulates release via autoreceptors. DA in DAext acts on synthesis autoreceptors and is removed from the system via diffusion.
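As a rough illustration of such a scheme (these equations and rate constants k_i are our guess at a generic mass-balance form, not the authors' model), the four compartments might evolve as:

```latex
\begin{aligned}
\frac{d\,\mathrm{DA_{ter}}}{dt} &= S(\mathrm{DA_{ext}}) - k_1\,R(V,\mathrm{DA_{syn}})\,\mathrm{DA_{ter}} + k_2\,\mathrm{DA_{syn}},\\
\frac{d\,\mathrm{DA_{syn}}}{dt} &= k_1\,R(V,\mathrm{DA_{syn}})\,\mathrm{DA_{ter}} - (k_2 + k_3 + k_4)\,\mathrm{DA_{syn}} + k_5\,\mathrm{DA_{occ}},\\
\frac{d\,\mathrm{DA_{occ}}}{dt} &= k_3\,\mathrm{DA_{syn}} - k_5\,\mathrm{DA_{occ}},\\
\frac{d\,\mathrm{DA_{ext}}}{dt} &= k_4\,\mathrm{DA_{syn}} - k_6\,\mathrm{DA_{ext}},
\end{aligned}
```

with S an autoreceptor-modulated synthesis term and R a release term depending on the membrane voltage V and on release-modulating autoreceptor occupancy.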
Results
Preliminary symbolic analysis of the system in a supposed quasi-steady state, determined by setting the rate of synthesis equal to the rate of removal from DAext and letting the firing rate of the presynaptic neuron be constant, reveals a stable system according to the Routh-Hurwitz criterion, with either damped or no oscillations. Building on prior work [4] in which we developed an STDP model for cortico-striatal plasticity, we incorporate a presynaptic SNc neuron to analyse the effect of variations in the DA model's parameters on plasticity (limited to ranges of empirical data and using Latin hypercube sampling).
Discussion
Our four-compartment model of nigrostriatal dopamine dynamics bridges the gap between purely phenomenological models, which lack biological realism, and more complex models, which take into account a high degree of biological detail and are computationally expensive, thereby providing a solution for incorporating the effect of DA on corticostriatal plasticity in large-scale spiking neural networks. In particular, our model may be of utility for simulations of dopaminergic reinforcement learning, such as in n-choice tasks, and simulations of DA-related pathologies which require explicit consideration of postsynaptic receptor occupation and extrasynaptic DA levels.



Figure 1. 4-compartment model of dopamine (DA) at the synaptic site. DAter: presynaptic terminal, DAsyn: synaptic cleft, DAocc: occupied postsynaptic receptors, DAext: extrasynaptic space. Pointed arrows indicate the transfer of DA from one compartment to another, with constants ki indicating the rate of transfer. Dotted arrows denote the modulatory effect of synthesis and release modulating autoreceptors.
Acknowledgements
References
1. https://doi.org/10.1016/0376-8716(94)01066-t
2. https://doi.org/10.1126/science.275.5306.1593
3. https://doi.org/10.1523/jneurosci.1951-19.2019
4. https://doi.org/10.1016/j.cnsns.2019.105048
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P225: Time-to-first-spike encoding in layered networks evokes label-specific synfire chain activity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P225 Time-to-first-spike encoding in layered networks evokes label-specific synfire chain activity

Jonas Oberste-Frielinghaus1,2, Anno C. Kurth1, Julian Göltz3,4, Laura Kriener5,4, Junji Ito*1, Mihai A. Petrovici4, Sonja Grün1,6,7


1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
4Department of Physiology, University of Bern, Bern, Switzerland
5Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
6JARA Brain Institute I (INM-10), Jülich Research Centre, Jülich, Germany
7Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany


*Email: j.ito@fz-juelich.de
Introduction

While artificial neural networks (ANNs) have achieved remarkable success in various tasks, they lack two major characteristic features of biological neural networks: spiking activity and operation in continuous time. This makes it difficult to leverage knowledge about ANNs to gain insights into the computational principles of real brains. However, training methods for spiking neural networks (SNNs) have recently been developed to create functional SNN models [1]. In this study we analyze the activity of a multilayer feedforward SNN trained for image classification and uncover the structures in both connectivity and dynamics that underlie its functional performance.

Methods
Our network is composed of an input layer (784 neurons), 4 hidden layers (300 excitatory and 100 inhibitory neurons in each layer), and an output layer (10 neurons). We trained it with backpropagation to classify the MNIST dataset, based on time-to-first-spike coding: each neuron encodes information in the timing of its first spike; the first neuron to spike in the output layer defines the inferred input image class [1]. The MNIST input is also provided as spike timing: dark pixels spike early, lighter pixels later. Based on the connection weights after training, neurons that have strong excitatory effects on each of the output neurons are identified in each layer. Note that one neuron can have strong effects on multiple output neurons.
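The input encoding described here maps pixel darkness to spike latency; a minimal sketch of such a mapping (illustrative parameters, not the trained network's actual time constants from [1]) is:

```python
import numpy as np

def ttfs_encode(image, t_min=0.0, t_max=20.0):
    """Time-to-first-spike encoding: dark pixels spike early, light pixels late.

    `image` holds luminance values in [0, 1]; returns one spike time per pixel (ms).
    """
    luminance = np.asarray(image, dtype=float).ravel()  # 784 values for MNIST
    return t_min + luminance * (t_max - t_min)

spike_times = ttfs_encode(np.random.rand(28, 28))  # toy 28x28 "image"
```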
Results
In response to a sample, the input layer generates a volley of spikes, identified as a pulse packet (PP) [2], which propagates through the hidden layers (Fig. 1). In deeper layers, spikes in a PP get more synchronized and the neurons providing spikes to the PP become more specific to the sample label. This leads to a characteristic sparse representation of the sample label in deep layers. The analysis of connection weights reveals that a correct classification is achieved by propagating spikes through a specific pathway across layers, composed of neurons with strong excitatory effects on the correct output neuron. Pathways for different output neurons become more separate in deeper layers, with less overlap of neurons between pathways.
Discussion
The revealed connectivity structure and the propagation of spikes as a PP agree with the notion of the synfire chain (SFC) [3,4]. To our knowledge, this is the first example of SFC formation by training of a functional network. In our network, multiple parallel SFCs emerge through the training for MNIST classification, representing each input label by activation of one particular SFC. Such a representation naturally leads to sparser encoding of the input label in deeper layers, and also increases the linear separability of layer-wise activity. Thus, the use of SFCs for information representation can have multiple advantages for achieving efficient computation, besides the stable transmission of information through the network.




Figure 1. Network activity in response to six different samples. Dots represent spike times of individual neurons, with colors indicating the luminance of the corresponding pixels in the sample (“input” layer), or spikes of excitatory (red) and inhibitory (blue) neurons (layers 1-4). The first neurons to spike in the “output” layer are indicated by numbers next to the spikes.
Acknowledgements
This research was funded by the European Union’s Horizon 2020 Framework programme for Research and Innovation under Specific Grant Agreements No. 785907 (HBP SGA2), No. 945539 (HBP SGA3) and No. 101147319 (EBRAINS 2.0), the NRW-network 'iBehave' (NW21-049), the Helmholtz Joint Lab SMHB, and the Manfred Stärk Foundation.

References
● Göltz et al. (2021). Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence, 3(9), 823–835. https://doi.org/10.1038/s42256-021-00388-x
● Diesmann, Gewaltig, & Aertsen (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402(6761), 529–533. https://doi.org/10.1038/990101
● Abeles (1982). Local Cortical Circuits: An Electrophysiological Study. Springer-Verlag.
● Abeles (1991). Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press.


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P226: Astrocyte modulation of neural oscillations: mechanisms underlying slow wave activity in cortical networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P226 Astrocyte modulation of neural oscillations: mechanisms underlying slow wave activity in cortical networks

Thiago Ohno Bezerra*1, Antonio C. Roque1

1Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, São Paulo, Brazil.

*Email: thiagotakechi@usp.br

Introduction
Oscillatory activity plays a pivotal role in neural networks. Astrocytes have recently been shown to modulate neural activity through the release of glutamate and ATP, the latter acting as an inhibitory neuromodulator, and have been implicated in the regulation of slow cortical oscillations, specifically the up and down states observed in these networks. However, the mechanisms by which astrocytes influence neural oscillations and shape network activity remain poorly understood.


Methods
We extended the INEXA model [1] to incorporate the adaptation of neural activity. Neurons (N = 250) and astrocytes (N = 75, 30% of neurons) are randomly distributed in a 3D volume (750 × 750 × 10 µm³). Each neuron is modeled as a stochastic unit, whose spiking probability depends on neural excitatory and inhibitory inputs, ATP-mediated inhibition from astrocytes, an adaptive variable (u), and background noise (c = 0.03). The variable u increases after each neuronal spike and decays over time. Presynaptic neuron activity enhances the IP3 concentration in astrocytes, which elevates local Ca2+ levels. Astrocyte activity is modeled as a stochastic process, driven by local Ca2+ responses and the activation of neighboring astrocytes. Glutamate release from astrocytes promotes synaptic facilitation, influencing neuron-to-neuron communication. Connectivity between neurons and astrocytes is governed by a probabilistic rule based on spatial proximity.


Results
The model predicts that without astrocytes, neural networks oscillate at frequencies that vary according to the increment and decay rates of the variable u. These oscillations show no slow-wave patterns. In contrast, when astrocytes are included, the network exhibits three distinct activity modes: (1) high-frequency asynchronous spiking, (2) alternating between high-frequency spiking and silent states, and (3) regular synchronous spiking. The second mode, characterized by alternating states, is particularly reminiscent of cortical up and down states associated with slow oscillations. The specific mode of activity is influenced by the dynamics of the adaptive variable u, which modulates the frequency and pattern of oscillations. Astrocytic synaptic potentiation, ATP-mediated inhibition, and astrocyte activation duration also regulate the slow oscillation frequency.


Discussion
Our results suggest that astrocytes play an integral role in modulating the activity patterns of neural networks. Through the release of glutamate and ATP, astrocytes influence both excitatory and inhibitory processes, thereby altering network dynamics. These findings support the hypothesis that astrocytes are essential for the generation and regulation of slow oscillations in cortical networks, specifically in the context of up and down states. The modulation of these oscillations by astrocytic activity may provide a mechanism through which astrocytes influence cognitive processes associated with neural synchrony.



Acknowledgements
This work was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0). TOB is supported by a FAPESP PhD scholarship (grant 2021/12832-7, BEPE: 2024/14422-9). ACR is partially supported by a CNPq fellowship (grant 303359/2022-6).
References
[1] Lenk, K., Satuvuori, E., Lallouette, J., Ladrón-de-Guevara, A., Berry, H., & Hyttinen, J. A. (2020). A computational model of interactions between neuronal and astrocytic networks: The role of astrocytes in the stability of the neuronal firing rate. Frontiers in Computational Neuroscience, 13, 92. https://doi.org/10.3389/fncom.2019.00092
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P227: Only a matter of time: developmental heterochronicity captures network properties of the human connectome
Tuesday July 8, 2025 17:00 - 19:00 CEST
P227 Only a matter of time: developmental heterochronicity captures network properties of the human connectome

Stuart Oldham*1, Francesco Poli2, Duncan Astle2, Gareth Ball1


1 Murdoch Children’s Research Institute, Melbourne, Australia
2 Cambridge University, Cambridge, UK


*Email: stuart.oldham@mcri.edu.au


Brain network organization is shaped by a trade-off between connection costs and functional benefits [1]. Computational generative network models have found that this trade-off explains many, but not all, network properties [2,3]. During gestation, brain development proceeds according to spatiotemporal patterns defined by morphogen gradients [4]. Cortical areas display heterochronicity, the differential timing of key developmental events, which induces spatial patterns that persist in later life as smoothly varying gradients in cytoarchitecture, neuronal connectivity, and functional activation [4,5]. We therefore developed a new computational model to assess how heterochronicity may constrain the formation of cortical connectivity.
Developmental timing was modeled along a unimodal gradient, originating from one node per hemisphere (Fig. 1A). Nodes were sequentially 'activated' over model timesteps based on their geodesic distance from the origin, with the timing/heterochronicity of activation controlled by the parameter τ and the connection probability between active nodes governed by their wiring cost η (Fig. 1B-C). The summed probabilities across timesteps were used to generate a density-matched network (Fig. 1C). The model was run for each origin, optimizing parameters to maximize model fit, defined as the degree correlation ρ with the empirical network (a group-consensus structural connectivity brain network [2]), a feature generative models struggle to capture [2,3].
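A schematic of this generative scheme is sketched below; the sigmoidal activation profile and the power-law wiring-cost term are our illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def heterochronous_network(geo_dist, eucl_dist, tau, eta, n_edges, n_steps=20, seed=0):
    """Grow a network along a heterochronous gradient.

    geo_dist: per-node geodesic distance from the origin node (length n).
    eucl_dist: pairwise Euclidean distances between nodes (n x n).
    """
    rng = np.random.default_rng(seed)
    n = len(geo_dist)
    gradient = geo_dist / geo_dist.max()
    prob_sum = np.zeros((n, n))
    for t in np.linspace(0, 1, n_steps):
        # Nodes switch on in order of distance from the origin; tau sets the
        # sharpness of the activation front (parameterization is illustrative).
        active = 1.0 / (1.0 + np.exp((gradient - t) / max(tau, 1e-9)))
        with np.errstate(divide="ignore"):
            cost = np.where(eucl_dist > 0, eucl_dist ** eta, 0.0)  # eta < 0 penalizes length
        prob_sum += np.outer(active, active) * cost
    # Sample a density-matched network from the summed connection probabilities.
    iu = np.triu_indices(n, k=1)
    p = prob_sum[iu] / prob_sum[iu].sum()
    chosen = rng.choice(p.size, size=n_edges, replace=False, p=p)
    adj = np.zeros((n, n), dtype=int)
    adj[iu[0][chosen], iu[1][chosen]] = 1
    return adj + adj.T
```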
Spatial gradients modeling heterochronicity from the frontal cortex yielded the highest degree correlations (max ρ = 0.39). The networks with the best degree correlations also captured key empirical topological features, including clustering, connection length, and the primary connectivity gradient. However, they did not fully replicate modularity or connection overlap (Fig. 1D). Models from the best origins (ρ > 0.25) outperformed previous leading approaches [2,3] (Fig. 1E) and achieved the best fits with strong heterochronicity (τ > 0.5) and minimal distance penalties (η ≈ 0; Fig. 1F). Models using only the heterochronicity term produced similar degree correlations (Fig. 1G), suggesting it alone can drive brain-like connectivity patterns.
Here we demonstrate that constraining network connections to form along an anterior-posterior gradient is sufficient to capture topographical and topological connectomic features of empirical brain networks. These models also outperform past approaches [2,3]. The best-performing models imposed a heterochronous gradient that aligned with the rostral-caudal axis, a known major neurodevelopmental gradient [4,5], suggesting that early spatiotemporal patterning along this axis is key to shaping cortical connectivity. While our study examined single unimodal gradients, future studies could integrate multiple biologically informed gradients to better model network complexity. Our framework offers a flexible foundation for such extended work.




Figure 1. (A) Geodesic distances from example origin (B) Heterochronicity/wiring-cost calculation (C) Model connection probability and network generation (D) Similarity to the empirical data on network features for each origin's best-fitting model (E) Comparison to previous models [2] (F) τ and η for each origin's best-fitting model (G) Best degree correlations for heterochronous-only models
Acknowledgements
S.O is supported by the Brain and Behavior Research Foundation (ID: 31471). G.B. was supported by the National Health and Medical Research Council (ID: 1194497). Research was supported by the Murdoch Children’s Research Institute, the Royal Children’s Hospital, Department of Paediatrics, The University of Melbourne and the Victorian Government’s Operational Infrastructure Support Program.
References
1. https://doi.org/10.1038/nrn3214
2. https://doi.org/10.1101/2024.11.18.624192
3. https://doi.org/10.1126/sciadv.abm6127
4. https://doi.org/10.1016/j.neuron.2007.10.010
5. https://doi.org/10.1016/j.tics.2017.11.002
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P228: Multi-scale Spiking Network Model of Human Cerebral Cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P228 Multi-scale Spiking Network Model of Human Cerebral Cortex

Renan O. Shimoura*1, Jari Pronold1,2, Alexander van Meegen1,3, Mario Senden4,5, Claus C. Hilgetag6, Rembrandt Bakker1,7, Sacha J. van Albada1,3



1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Institute of Zoology, University of Cologne, Cologne, Germany
4Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
5Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Centre, Maastricht University, Maastricht, The Netherlands
6Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg, Germany
7Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands

*Email: r.shimoura@fz-juelich.de
Introduction

Data-driven models at cellular resolution have been built for various brain regions, yet few exist for the human cortex. We present a comprehensive point-neuron network model of a human cortical hemisphere integrating diverse experimental data into a unified framework bridging cellular and network scales [1]. Our approach builds on a large-scale spiking network model of macaque cortex [2,3] and investigates how resting-state activity emerges in cortical networks.

Methods
We constructed a spiking network model representing one hemisphere using the Desikan-Killiany parcellation (34 areas), with each area implemented as a 1 mm² microcircuit distinguishing the cortical layers. The model aggregates data across multiple modalities, including electron microscopy for synapse density, cytoarchitecture from the von Economo atlas [4], DTI-based connectivity [5], and local connection probabilities from the Potjans-Diesmann microcircuit [6]. Human neuron morphologies [7] inform the layer-specific inter-area connectivity. The full-density model, based on leaky integrate-and-fire neurons, comprises 3.47 million neurons with 42.8 billion synapses and was simulated using the NEST simulator on the JURECA-DC supercomputer.
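For readers unfamiliar with the tooling, the flavor of such a simulation in NEST looks roughly like the toy sketch below (two populations of a single layer at toy scale; the actual model has 3.47 million neurons across 34 areas with layer-specific parameters):

```python
import nest

nest.ResetKernel()

# One cortical layer at toy scale: excitatory and inhibitory LIF populations.
exc = nest.Create("iaf_psc_exp", 400)
inh = nest.Create("iaf_psc_exp", 100)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})

# Random pairwise-Bernoulli connectivity; weights are PSC amplitudes in pA,
# loosely following the Potjans-Diesmann microcircuit convention (g = -4).
conn = {"rule": "pairwise_bernoulli", "p": 0.1}
nest.Connect(exc, exc, conn, {"weight": 87.8, "delay": 1.5})
nest.Connect(exc, inh, conn, {"weight": 87.8, "delay": 1.5})
nest.Connect(inh, exc, conn, {"weight": -351.2, "delay": 0.75})
nest.Connect(inh, inh, conn, {"weight": -351.2, "delay": 0.75})
nest.Connect(noise, exc + inh, syn_spec={"weight": 87.8})

recorder = nest.Create("spike_recorder")
nest.Connect(exc, recorder)
nest.Simulate(1000.0)
print(nest.GetStatus(recorder, "n_events"))
```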

Results
When local and inter-area synapses have the same strength, model simulations show asynchronous irregular activity deviating from experiments in terms of spiking activity and inter-area functional connectivity. When inter-areal connections are strengthened relative to local synapses, the model reproduces both microscopic spiking statistics from human medial frontal cortex and macroscopic resting-state fMRI correlations [8]. Analysis reveals that single-spike perturbations influence network-wide activity within 50-75 ms. The ongoing activity flows primarily from parietal through occipital and temporal to frontal areas, consistent with empirical findings during visual imagery [9].

Discussion
This open-source model integrates human data across scales to investigate cortical organization and dynamics. By preserving neuron and synapse densities, it accounts for the majority of the inputs to the modeled neurons, enhancing the self-consistency compared to downscaled models. The model allows systematic study of structure-dynamics relationships and forms a platform for investigating theories of cortical function. Future work may leverage the Julich-Brain Atlas to refine the parcellation and incorporate detailed cytoarchitectural and receptor distribution data [10]. The model code is publicly available at https://github.com/INM-6/human-multi-area-model.




Acknowledgements
This work was supported by the German Research Foundation (DFG) Priority Program "Computational Connectomics" (SPP 2041; Project 347572269), the EU Grant 945539 (HBP), the EBRAINS 2.0 Project (101147319), the Joint Lab SMHB, and HiRSE_PS. The use of the JURECA-DC supercomputer in Jülich was made possible through VSR computation grant JINB33. Open access publication funded by DFG Grant 491111487.
References
[1] https://doi.org/10.1093/CERCOR/BHAE409.
[2] https://doi.org/10.1007/s00429-017-1554-4.
[3] https://doi.org/10.1371/journal.pcbi.1006359.
[4] https://doi.org/10.1159/isbn.978-3-8055-9062-4.
[5] https://doi.org/10.1016/J.NEUROIMAGE.2013.05.041.
[6] https://doi.org/10.1093/cercor/bhs358.
[7] https://doi.org/10.1093/CERCOR/BHV188.
[8] https://doi.org/10.1126/science.aba3313.
[9] https://doi.org/10.1016/J.NEUROIMAGE.2014.05.081.
[10] https://doi.org/10.3389/fnana.2017.00078.

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P229: Exploring Electroencephalographic (EEG) Models of Brain Activity using Automated Modelling Techniques
Tuesday July 8, 2025 17:00 - 19:00 CEST
P229 Exploring Electroencephalographic (EEG) Models of Brain Activity using Automated Modelling Techniques

Nina Omejc*1, 2, Sabin Roman1, Ljupčo Todorovski1,3, Sašo Džeroski1

1Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
2Jozef Stefan International Postgraduate School, Ljubljana, Slovenia
3Department of Mathematics, Faculty of Mathematics and Physics, Ljubljana, Slovenia

*Email: nina.omejc@ijs.si
Introduction

Electroencephalography (EEG) is a clinical, non-invasive, high-temporal resolution technique for measuring whole-brain activity. However, the underlying mechanisms that give rise to the observed high-level rhythmic activity remain incompletely understood. Various neural population and network models attempt to explain these dynamics [1], but, to our knowledge, they have not been systematically explored or evaluated.
Methods
To explore the space of proposed and potential models, we represent brain networks as graphs, where nodes correspond to brain sources obtained via EEG source analysis, in our case the dipole-fitted independent components (Figure 1). Each node's dynamics are further decomposed into three subdynamics: synapto-dendritic dynamics (input transformation), intrinsic dynamics, and the firing response (output transformation). These subdynamics are defined by a bounded set of functions derived from the literature [1], or generated by an unbounded probabilistic context-free grammar [2]. Such a modular and unbounded specification allows for flexible and physiologically valid construction of the network.
Results
We are currently using our Julia-based framework and are in the model evaluation phase. The dataset consists of 64-channel EEG recordings from 50 participants performing a visual flickering task designed to induce steady-state visual evoked potentials [3]. We repeatedly sample candidate EEG models using Markov chain Monte Carlo and optimize the model parameters using the CMA-ES algorithm. By the time of the conference, we aim to determine which established and previously unexamined whole-brain activity models can reproduce the observed oscillations and, more importantly, which can also accurately capture the harmonics of the flickering stimulation frequency, a robust and interesting feature observed in this dataset.
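As an illustration of the parameter-optimization step, the widely used Python `cma` package wraps CMA-ES behind a two-call API (the objective below is a toy spectral-matching loss, not the authors' Julia pipeline):

```python
import numpy as np
import cma

t = np.linspace(0, 1, 256)
target_spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * 10 * t)))  # toy 10 Hz target

def spectral_loss(params):
    """Mismatch between a candidate model's spectrum and the target spectrum."""
    freq, amp = params
    candidate = amp * np.sin(2 * np.pi * freq * t)
    return float(np.sum((np.abs(np.fft.rfft(candidate)) - target_spectrum) ** 2))

es = cma.CMAEvolutionStrategy([5.0, 0.5], 2.0)  # initial guess and step size
es.optimize(spectral_loss)
print(es.result.xbest)  # should approach (freq, amp) = (10, 1)
```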
Discussion
The presence of these harmonic components is a well-documented but not yet fully understood phenomenon in EEG research [4]. By systematically exploring different model configurations, we aim to assess which types of nonlinear models and which features (for example, recurrent connectivity, nonlinear synaptic integration, parallel computations, delays) play a crucial role in shaping these spectral patterns. Exploring the set of valid models to understand these mechanisms could have broader implications for theories of whole-brain neural activity and improve our understanding of EEG measurements.



Figure 1. Figure 1: A data-driven framework for exploring whole-brain network EEG models.
Acknowledgements
We would like to thank our department's SHED group for equation discovery for the fruitful discussions regarding our work.
References

[1] https://doi.org/10.1007/978-3-030-89439-9_13
[2] https://doi.org/10.1007/s10994-024-06522-1
[3] https://doi.org/10.1093/gigascience/giz002
[4] https://doi.org/10.1016/j.neuroimage.2012.05.054


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P230: Driver Nodes for Efficient Activity Propagation Between Clusters in Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P230 Driver Nodes for Efficient Activity Propagation Between Clusters in Spiking Neural Networks

Bulat Batuev1+, Arsenii Onuchin2,3+, Sergey Sukhov1


1Kotelnikov Institute of Radioengineering and Electronics of Russian Academy of Sciences, Moscow, Russia
2Skolkovo Institute of Science and Technology, Moscow, Russia
3Laboratory of Complex Networks, Center for Neurophysics and Neuromorphic Technologies, Moscow, Russia


+ These authors contributed equally

Email: arseniyonuchin04.09.97@gmail.com

Introduction

Synchronous neural activity is critical for brain function, yet the connectome's role in enabling synchronization remains unclear. We explore strategies to achieve widespread synchronization in spiking stochastic block model (SBM) networks with minimal control inputs. This work builds on research into neural network control [1], focusing on identifying driver nodes that influence dynamics. By evaluating centrality measures (betweenness, degree, eigenvector, closeness, harmonic, percolation), we pinpoint topological features predicting effective driver nodes. Furthermore, we analyze connectivity patterns to understand pairwise activity relationships and uncover mechanisms of network-wide coordination.

Methods
The spiking neural network consisted of 500 leaky integrate-and-fire neurons (80% excitatory, 20% inhibitory) divided into two clusters of equal size, with an intra-cluster edge probability of 0.15 and an inter-cluster probability varying from 0.06 to 0.13. To simulate background activity, all neurons received independent Poisson-distributed inputs. Within the first cluster, a subpopulation of neurons (10–20%) was designated as driver neurons and subjected to an additional external current stimulus (10 Hz, 1000 pA). Driver neurons were selected either randomly or by centrality metrics (betweenness, degree, eigenvector, closeness, harmonic, percolation) [2]. Neural dynamics were simulated for 5 seconds to achieve steady-state activity using the Brian 2 simulator [3].
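As a rough illustration of this setup, the sketch below builds a two-cluster stochastic block model, selects driver neurons by betweenness centrality, and runs a simplified Brian 2 simulation. The excitatory/inhibitory split is omitted, the periodic 10 Hz stimulus is reduced to a constant drive, and all weights are illustrative assumptions rather than the authors' parameters.

import numpy as np
import networkx as nx
from brian2 import NeuronGroup, Synapses, PoissonInput, run, ms, Hz

# Two-cluster stochastic block model with the abstract's sizes and densities.
G = nx.stochastic_block_model([250, 250], [[0.15, 0.06], [0.06, 0.15]],
                              directed=True, seed=1)

# Driver neurons: top 20% of cluster-1 nodes by betweenness centrality.
bc = nx.betweenness_centrality(G)
drivers = sorted(range(250), key=lambda n: bc[n], reverse=True)[:50]

# Dimensionless LIF neurons; drivers get a constant extra drive standing in
# for the 10 Hz, 1000 pA stimulus of the abstract.
neurons = NeuronGroup(500, "dv/dt = (-v + I_ext)/(10*ms) : 1\nI_ext : 1",
                      threshold="v > 1", reset="v = 0", method="euler")
I_drive = np.zeros(500)
I_drive[drivers] = 1.2
neurons.I_ext = I_drive

pre, post = zip(*G.edges())
syn = Synapses(neurons, neurons, on_pre="v_post += 0.1")
syn.connect(i=list(pre), j=list(post))

noise = PoissonInput(neurons, "v", N=100, rate=5*Hz, weight=0.05)  # background
run(5000*ms)   # 5 s of activity, as in the abstract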
Results
The population activity in the non-stimulated cluster was analyzed as a function of the number of driver neurons and the inter-cluster connectivity. When driver neurons were selected using closeness and betweenness centrality metrics, spike rates in the second cluster increased approximately 10-fold compared to random selection, accompanied by synchronization with the first cluster at nearly 10 Hz. In contrast, selecting driver neurons based on degree and percolation centrality metrics resulted in only a 5-fold increase compared to random selection (Fig. 1).

Discussion
Synchronization between two weakly coupled clusters can be achieved by selectively stimulating specific neurons within the first cluster. However, it remains unclear why closeness and betweenness centrality outperform other centralities in promoting synchronization. Future research could focus on extending our method to multicluster heterogeneous systems. While the two-cluster model offers a controlled setting, expanding it could provide deeper insights into real brain connectomes. In conclusion, this study elucidates how topology and driver node selection shape neural synchronization, with potential applications in neuromodulation and brain-inspired systems.




Figure 1. The average population activity within the second cluster, calculated over a 1-second time window, is depicted for driver nodes selected based on various centrality measures (degree, betweenness, eigenvector centrality, PageRank, and percolation centrality) for the upper surface, and for nodes chosen at random for the lower surface.
Acknowledgements

This work was funded by the Russian Science Foundation (project number 24-21-00470).
References
1. Bayati, M., Valizadeh, A., Abbassian, A., & Cheng, S. (2015). Self-organization of synchronous activity propagation in neuronal networks driven by local excitation. Frontiers in Computational Neuroscience, 9, 69. https://doi.org/10.3389/fncom.2015.00069
2. Saxena, A., & Iyengar, S. (2020). Centrality measures in complex networks: A survey. arXiv preprint arXiv:2011.07190.
3. Stimberg, M., Brette, R., & Goodman, D. F. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8, e47314. https://doi.org/10.7554/eLife.47314
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P231: Emotional network modelling: whole brain simulations of fear conditioning in humans
Tuesday July 8, 2025 17:00 - 19:00 CEST
P231 Emotional network modelling: whole brain simulations of fear conditioning in humans


Dianela A Osorio-Becerra1, Andrea Fusari1, Ashika Roy2, Danilo Benozzo1, Andreas Frick2, Egidio D’Angelo1, Fulvia Palesi1, Claudia Casellato1
1Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
2Department of Medical Sciences, Uppsala University, Uppsala, Sweden

*Email: claudia.casellato@unipv.it
Introduction

Emotion in mammals involves complex brain networks [1,2], for which it is critical to identify the specific regional connectivity and microcircuit properties. We couple in-silico whole-brain simulations, performed with The Virtual Brain (TVB) [3], with experimental fear-conditioning data in humans. These include MRI (DWI, resting-state and task-dependent fMRI) and fear-related behavioural measurements (skin conductance responses (SCR), a biomarker of emotional arousal [4]). This work represents a preliminary exploration of how data-driven, subject-specific models of brain dynamics could predict emotional behaviour.
Methods
Data come from 17 healthy subjects. The fMRI is acquired at TR 3 s. The mean SCR is extracted for CS+ (conditioned stimulus paired with the unconditioned stimulus, US) and CS− (neutral stimulus), both during acquisition and extinction (acq_csp, acq_csm, ext_csp, ext_csm), and for US (electric shock) during acquisition (acq_us). The SCR usually increases with the paired CS/US pattern presentation, and it decreases when CS is no longer followed by US (extinction).
In the model, a fear-TVB network was defined selecting the regions involved, with each node represented by a reduced Wong-Wang model [5] and using the subject-specific structural connectivity from DWI. Then, the fear-TVB network was optimized in terms of global and local connection parameters, by maximizing the match between the subject-specific experimental functional connectivity matrices (static and dynamic – expFC and expFCD), obtained from resting-state fMRI, and the simulated ones (simFC and simFCD). Finally, these parameters were correlated with the subject-specific SCRs.
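A minimal sketch of the optimization target described here: static FC similarity (Pearson correlation of the upper-triangular FC entries) combined with a Kolmogorov-Smirnov distance between FCD distributions. The window lengths, the cost weighting, and the surrogate data are illustrative assumptions:

import numpy as np
from scipy.stats import pearsonr, ks_2samp

def fc(ts):
    """Static functional connectivity: region-by-region Pearson matrix."""
    return np.corrcoef(ts)

def fcd_values(ts, win=60, step=20):
    """Upper-triangular entries of the FCD matrix (correlations between
    windowed FC patterns), pooled into one distribution."""
    n_regions, n_t = ts.shape
    fcs = [fc(ts[:, s:s + win])[np.triu_indices(n_regions, k=1)]
           for s in range(0, n_t - win, step)]
    fcd = np.corrcoef(np.array(fcs))
    return fcd[np.triu_indices(len(fcs), k=1)]

def fit_cost(emp_ts, sim_ts):
    """Cost mixing FC similarity (Pearson) and FCD distance (KS statistic);
    the 0.5 weighting is an arbitrary illustrative choice."""
    iu = np.triu_indices(emp_ts.shape[0], k=1)
    r, _ = pearsonr(fc(emp_ts)[iu], fc(sim_ts)[iu])
    ks, _ = ks_2samp(fcd_values(emp_ts), fcd_values(sim_ts))
    return (1 - r) + 0.5 * ks

# Surrogate example: 88 regions, 400 time points per signal.
rng = np.random.default_rng(0)
print(fit_cost(rng.standard_normal((88, 400)), rng.standard_normal((88, 400))))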
Results
The fear-TVB network was reconstructed using 88 nodes, including the amygdala, cerebellum, periaqueductal gray, and parts of the limbic system. The TVB parameters, i.e. the global coupling G and three synaptic parameters (excitatory NMDA strength J_NMDA, inhibitory GABA strength J_i, recurrent excitation w+), were extracted during the optimization process (see Fig. 1). By correlating TVB parameters with SCR measures, a positive correlation between G and acq_us (ρ = 0.37) emerged. However, higher correlations were found when considering each sex separately, reinforcing the existing literature in this field.

Discussion
These findings suggest that individual differences in resting-state neural dynamics influence fear acquisition, with distinct mechanisms supporting US processing and conditioned fear discrimination. Although this is the first time a correlation between network dynamics and fear responses has been revealed, the relationship between global connectivity strength and fear responses is still weak; more data and a closer understanding of the underlying network are needed. The next step is to use the fMRI data along fear conditioning trials by defining a time-dependent, subject-specific TVB parameter space, which may correlate with the corresponding time-dependent fear responses.



Figure 1. a) Fear network. b) Anterior and posterior views, 88 nodes (frontal, prefrontal, limbic, parietal, temporal, occipital, deep ganglia, brainstem and cerebellum). c) Violin plots of fear measures and TVB parameters. d) Exp and sim FC matrices, mean across subjects; each element is the Pearson correlation coefficient. e) Exp vs sim: PCC for FC and the Kolmogorov–Smirnov distance for FCD, one point per subject.
Acknowledgements
European Union's Horizon 2020 research under the Marie Sklodowska-Curie grant agreement No. 956414 for "Cerebellum and Emotional Networks", and #NEXTGENERATIONEU, by the Ministry of University and Research, National Recovery and Resilience Plan, project MNESYS (PE0000006)-A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022).
References
[1]https://doi.org/10.3389/fnsys.2023.1185752
[2]https://doi.org/10.1146/annurev.neuro.23.1.155
[3]https://doi.org/10.3389/fninf.2013.00010
[4]https://doi.org/10.1177/1094428116681073
[5]https://doi.org/10.1523/JNEUROSCI.5068-13.201


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P232: Centralized brain networks underlie grooming body part coordination
Tuesday July 8, 2025 17:00 - 19:00 CEST
P232 Centralized brain networks underlie grooming body part coordination

Pembe Gizem Ozdil*1,2, Clara Scherrer1, Jonathan Arreguit2, Auke Ijspeert2, Pavan Ramdya1

1Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
2Biorobotics Laboratory, Institute of Bioengineering, EPFL, Lausanne, Switzerland

*Email: pembe.ozdil@epfl.ch
Introduction

Animals must coordinate multiple body parts to perform essential tasks such as locomotion and grooming. While locomotor coordination has been extensively studied [1,2], less is known about how the nervous system synchronizes movements across distant body parts, such as the head and legs, to execute complex behaviors. Antennal grooming in Drosophila melanogaster provides a powerful model to study such coordination, as flies exhibit a rich repertoire of precisely controlled limb movements. With a compact yet fully mapped nervous system, Drosophila enables circuit-level insights into how neural networks integrate motor commands for efficient multi-limb control.

Methods

Here, we combined behavioral analyses, biomechanical modeling, and connectome-based neural circuit simulations to investigate how flies coordinate head, antennae, and forelegs during grooming. We tracked detailed movement kinematics in freely behaving flies using 3D pose estimation [3]. To understand the functional role of coordination, recorded movements were replayed in a biomechanical simulation (NeuroMechFly [4]) to measure contact forces. To test proprioceptive contributions, we performed limb amputations and head immobilizations. Lastly, we analyzed the antennal grooming network using graph-based and computational neural network simulations derived from the brain connectome [5].

Results

Flies exhibit two main grooming strategies, unilateral and bilateral antennal grooming, each requiring precise coordination of head, antennae, and forelegs. Biomechanical simulations revealed that this coordination enhances grooming efficiency by avoiding obstructions and enabling forceful limb-antennal interactions. Manipulations showed proprioceptive feedback is not necessary for body-part synchronization, implying feedforward neural control. Connectome network analyses and simulations identified centralized interneurons forming recurrent excitatory and broad inhibitory circuit motifs that robustly synchronize motor modules. We further validated some model predictions through optogenetic experiments.
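A toy firing-rate sketch of the two circuit motifs identified here: recurrent excitation sustains the commanded motor module while a shared inhibitory node broadly suppresses the competing one. All weights and time constants are illustrative assumptions, not connectome-derived values:

import numpy as np

dt, tau, T = 0.001, 0.05, 2.0
t = np.arange(0, T, dt)
r = np.zeros((len(t), 3))                  # [module 1, module 2, inhibition]
w_rec, w_inh, w_exc = 1.2, 1.5, 0.8        # recurrent excitation, broad inhibition

drive = np.zeros((len(t), 2))
drive[(t > 0.5) & (t < 1.5), 0] = 1.0      # grooming command to module 1 only

for k in range(len(t) - 1):
    m1, m2, inh = r[k]
    # Recurrent excitation sustains the commanded module; the shared
    # inhibitory node reads out both modules and suppresses the competitor.
    dm1 = -m1 + max(0.0, w_rec * m1 - w_inh * inh + drive[k, 0])
    dm2 = -m2 + max(0.0, w_rec * m2 - w_inh * inh + drive[k, 1])
    dinh = -inh + w_exc * (m1 + m2)
    r[k + 1] = r[k] + (dt / tau) * np.array([dm1, dm2, dinh])

print("module 1 peak:", r[:, 0].max().round(2),
      "| module 2 peak (suppressed):", r[:, 1].max().round(2))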


Discussion

We identified centralized neural circuits underlying multi-body-part coordination during antennal grooming in flies. Unlike locomotion, where coordination often depends on sensory feedback, grooming synchronization is centrally driven, likely reducing sensory processing demands. We uncovered two neural circuit motifs—recurrent excitation promoting targeted movements, and broadcast inhibition suppressing competing actions—that enable precise yet flexible coordination. This centralized circuit architecture may represent a general neural strategy conserved across behaviors and species, simplifying motor control and facilitating the evolution of complex behaviors through modular coordination.



Acknowledgements
PR acknowledges support from an SNSF Project Grant (175667) and an SNSF Eccellenza Grant (181239). JA acknowledges support from a European Research Council Synergy grant (951477). PGO acknowledges support from a Swiss Government Excellence Scholarship for Doctoral Studies and a Google PhD Fellowship.


References
[1] https://doi.org/10.1016/S0959-4388(98)80114-1
[2] https://doi.org/10.1152/jn.00658.2017
[3] https://doi.org/10.1016/j.celrep.2021.109730
[4] https://doi.org/10.1038/s41592-022-01466-7
[5] https://doi.org/10.1038/s41586-024-07558-y



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P233: Extracellular K+ hotspots regulate synaptic integration in the dendrites of pyramidal neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P233 Extracellular K+ hotspots regulate synaptic integration in the dendrites of pyramidal neurons


Malthe S. Nordentoft1, Naoya Takahashi2, Mathias S. Heltberg1, Mogens H. Jensen1, Rune N. Rasmussen3, Athanasia Papoutsi*4
1 Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
2 Interdisciplinary Institute for Neuroscience, University of Bordeaux, Bordeaux, France
3 Center for Translational Neuromedicine, University of Copenhagen, Copenhagen, Denmark
4 Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology—Hellas, Crete, Greece

*Email: papoutsi@imbb.forth.gr



Introduction
Throughout the nervous system, neuronal activity and ionic changes in the extracellular environment are bidirectionally linked. Changes in the concentration of extracellular K+ ions ([K+]o) are particularly intriguing due to their pivotal role in shaping neuronal excitability and their activity- and state-dependent fluctuations [1]. At the synaptic level, [K+]o changes arise mainly from the activation of NMDA receptors and are highly localized [2]. Despite this experimental evidence, local, activity-dependent [K+]o changes have not been considered an integral part of neuronal signaling. In this work [3], we hypothesize that [K+]o changes form “K+ hotspots” that locally regulate the active dendritic properties and shape sensory processing.
Methods
We focus on the organization of orientation-tuned synapses on dendrites of visual cortex pyramidal neurons [4], as we have previously shown that visual cortex responses are dynamically regulated by [K+]o and brain states [1]. We first analytically investigate the spatial diffusion of K+ ions to evaluate the creation of “K+ hotspots”. Then, by treating orientation-tuned inputs to dendritic segments as statistical ensembles, we infer the expected changes in Δ[K+]o and the corresponding EK+ shifts. Finally, using biophysically realistic models of a point dendrite and a morphologically detailed neuron, we evaluate the effect of the different EK+ shifts on the dendritic spike properties and the neuronal output.
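The EK+ shifts discussed here follow from the Nernst relation; a small worked example (the intracellular concentration and the baseline and hotspot [K+]o values are assumed, typical textbook numbers):

import numpy as np

R, T, F = 8.314, 310.0, 96485.0       # gas constant, body temperature (K), Faraday
K_in = 140.0                          # assumed intracellular [K+] (mM)

def e_k(K_out):
    """Nernst potential of K+ in mV: E_K = 1000 * (R*T/F) * ln([K+]o / [K+]i)."""
    return 1000.0 * (R * T / F) * np.log(K_out / K_in)

# A hotspot raising [K+]o from ~3 mM by a few mM reproduces shifts of roughly
# the 6-18 mV magnitude reported in the Results below.
for K_out in (3.0, 4.0, 6.0):
    print(f"[K+]o = {K_out:4.1f} mM -> E_K = {e_k(K_out):7.1f} mV "
          f"(shift {e_k(K_out) - e_k(3.0):+5.1f} mV)")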
Results
Our statistical approach identified the expected EK+ shifts under different extracellular space sizes, intracellular K+ concentration changes, and presented stimuli. Importantly, dendritic segments receiving similarly-tuned inputs attain substantially higher [K+]o and EK+ shifts, with the EK+ shifts lying within the 6-18 mV range. In the point dendrite model, this range of EK+ shifts broadens dendritic spikes and increases dendritic spike probability. Finally, in the morphologically detailed neuron models, we show that the local activity-dependent [K+]o increase and EK+ shifts in dendrites enhance the effectiveness of distal synaptic inputs in causing feature-tuned firing of neurons, without compromising feature selectivity.
Discussion
In this work [3] we show that dendrites receiving similarly-tuned inputs support activity-dependent, local changes in [K+]o, forming “K+ hotspots”. These hotspots depolarize EK+ and increase the reliability and duration of dendritic spikes. These effects act as a volume knob for dendritic input, promoting gain amplification of neuronal output without affecting feature selectivity. Overall, compared to long-term plasticity mechanisms, “K+ hotspots” are transient, closely follow the overall dendritic activity levels, and selectively boost integration of synaptic inputs with minimal usage of resources. Our results therefore suggest a prominent and previously overlooked role of [K+]o changes.



Acknowledgements
We thank Akihiro Matsumoto, Alessandra Lucchetti, Eva Maria Meier Carlsen, Ioannis Matthaiakakis, and Stamatios Aliprantis for discussions and comments on this work.
References

1. https://doi.org/10.1016/j.celrep.2019.06.082
2. https://doi.org/10.1016/j.celrep.2013.10.026
3. https://doi.org/10.1371/journal.pbio.3002935
4. https://doi.org/10.1038/s41467-019-13029-0


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P234: Complexity of an astrocyte-neuron network model in random and hub-driven connectivity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P234 Complexity of an astrocyte-neuron network model in random and hub-driven connectivity

Paolo Paradisi*1,2, Giulia Salzano3, Marco Cafiso1,4, Enrico Cataldo4



1ISTI-CNR-Institute of Information Science and Technologies “A. Faedo”, Pisa, Italy


2BCAM-Basque Center for Applied Mathematics, Bilbao, Basque Country, Spain


3Department of Neuroscience, International School for Advanced Studies, Trieste, Italy


4Department of Physics, University of Pisa, Pisa, Italy



Introduction



The role of glial cells, particularly astrocytes, in brain neural networks has historically been overlooked due to a neuron-centric perspective. Recent research highlights astrocytes’ involvement in synaptic modulation, memory formation, and neural synchronization, leading to their inclusion in mathematical brain models. Concurrently, network topology plays a critical role in neural function, with models such as random and scale-free networks offering insights into connectivity patterns. In this work we investigate a recently published astrocyte-neuron network model [1,2], hereafter named the SGBD model, consisting of excitatory and inhibitory leaky integrate-and-fire (LIF) neuron models endowed with astrocytes, which are activated by synaptic transmission and in turn modulate it.
Methods
Firstly, a modified version of the model is proposed in order to overcome the limitations of the SGBD model by incorporating biologically plausible features more compatible with experimental results, in particular with regard to the spatial distribution of inhibitory neurons, astrocyte dynamics that trigger more realistic calcium oscillations, and neuron-astrocyte connections more intuitively linked to their spatial positioning. Then, the role of neuron-neuron connectivity is investigated by comparing random and hub-driven connectivities in both incoming and outgoing connections. Simulations are implemented using the Brian2 simulator, allowing for a comparative analysis of neural network activity with and without astrocytes.
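A minimal sketch of the two connectivity regimes compared here, using off-the-shelf NetworkX generators; note that the abstract's hub-driven topology is explicitly not strictly scale-free, so the preferential-attachment generator below is only an illustrative stand-in, and the size and edge count are arbitrary:

import networkx as nx

N, E = 500, 5000    # illustrative network size and edge count

# Random connectivity: edges placed uniformly at random (Erdos-Renyi G(n, m)).
g_rand = nx.gnm_random_graph(N, E, directed=True, seed=1)

# Hub-driven connectivity: preferential attachment concentrates incoming and
# outgoing links on a few hub nodes.
g_hub = nx.scale_free_graph(N, seed=1)

for name, g in [("random", g_rand), ("hub-driven", g_hub)]:
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    print(f"{name:10s} edges = {g.number_of_edges():5d}, "
          f"five largest degrees = {degrees[:5]}")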




Results
The proposed modifications lead to a more biologically realistic representation, influencing firing rates and inter-spike interval distributions. Comparisons between random and hub-driven connectivity highlight differences in network efficiency; in particular, firing activity is much larger for hub-driven connectivity even though the number of links is much lower than in the random connectivity. The temporal complexity of avalanches is investigated through intermittency-driven complexity tools [3,4], and significant differences are found when comparing both random vs. hub-driven connectivity and networks with vs. without astrocytes.

Discussion
This study reinforces the importance of astrocytes in neural network modeling and demonstrates how connectivity patterns impact the temporal complexity of firing patterns. The hub-driven degree distribution is not strictly scale-free, i.e., it does not display a power-law decay; despite this, the hub-driven topology triggers the emergence of power-law behavior in the inter-spike time distributions that does not emerge under random connectivity. Similar findings are seen in the temporal complexity of neural avalanches, where different regimes of power-law scaling behavior are found.







Acknowledgements
This work was supported by the Next-Generation-EU programme under the PNRR-PE-AI funding scheme (M4C2, investment 1.3, line on AI), FAIR “Future Artificial Intelligence Research”, grant id PE00000013, Spoke-8: Pervasive AI.
References
[1] M. Stimberg et al. (2019), Brian 2, an intuitive and efficient neural simulator, eLife 8, e47314. doi:10.7554/eLife.47314
[2] M. Stimberg et al. (2019), Modeling Neuron–Glia Interactions with the Brian 2 Simulator, Springer, Cham, 471–505. doi:10.1007/978-3-030-00817-8_18
[3] P. Paradisi, P. Allegrini (2017), Intermittency-driven complexity in signal processing, Springer, Cham, 161–195. doi:10.1007/978-3-319-58709-7_6
[4] P. Paradisi et al. (2015), The emergence of self-organization in complex systems - Preface, Chaos Sol. Fract. 81b, 407-411. doi:10.1016/j.chaos.2015.09.017
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P235: Beyond the Response: How Post-Response EEG Signals Improve Lie Detection
Tuesday July 8, 2025 17:00 - 19:00 CEST
P235 Beyond the Response: How Post-Response EEG Signals Improve Lie Detection

Hanbeot Park¹, Hoon-hee Kim*²

¹ Department of Data Engineering, Pukyong National University, Busan, Korea
²Department of Computer Engineering and Artificial Intelligence, Pukyong National University, Busan, Korea


*Email: h2kim@pknu.ac.kr





Introduction

In modern society, lies, intentional or not, are widespread and impose cognitive burdens and neurophysiological changes. Lying produces psychological tension, extra cognitive processing, and emotional strain, which are reflected in distinct neural activity patterns. While earlier lie detection studies focused on EEG signals recorded during the response itself, recent research suggests that post-response activity, capturing further evaluation and lingering tension, provides critical information for distinguishing deception from truth [1]. EEG from 12 subjects was recorded during responses and for 15 seconds post-response. Extracted features were classified using a sliding-window machine learning approach, with post-response features enhancing classification performance.


Methods
Using a uniform 64-channel EEG system, this study investigated deception by recording EEG from 12 subjects who answered six questions under lie or truth conditions. Data were recorded during the response period and for 15 seconds post-response. Preprocessing steps included bandpass filtering, notch filtering, artifact removal, average referencing, and downsampling. To capture both local and long-term patterns, a multi-layer model (Fig. 1) was built by combining the SSM-based Mamba [2] with the MoE [3] technique. Statistical and neural features were extracted. EEG data were segmented into 0.5-second windows with a 0.025-second overlap, and question-level cross-validation identified the most informative time interval for lie detection.
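A minimal sketch of the windowing and feature-extraction step on a surrogate channel; the sampling rate, the reading of the 0.025 s figure as a stride between windows, and the brute-force sample entropy are assumptions for illustration:

import numpy as np
from scipy.stats import skew, kurtosis

def sample_entropy(x, m=2, r=0.2):
    """Brute-force sample entropy with tolerance r scaled by the signal std."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def pair_count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return (np.sum(d <= tol) - len(emb)) / 2      # exclude self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def windows(sig, fs, win=0.5, step=0.025):
    """0.5 s windows advanced every 0.025 s."""
    w, s = int(win * fs), int(step * fs)
    for start in range(0, len(sig) - w + 1, s):
        yield sig[start:start + w]

fs = 250                                  # assumed post-downsampling rate
eeg = np.random.default_rng(0).standard_normal(fs * 15)   # surrogate channel
feats = [(skew(w), kurtosis(w), int(np.sum(np.diff(np.sign(w)) != 0)),
          sample_entropy(w)) for w in windows(eeg, fs)]
print(len(feats), "windows; first feature vector:", feats[0])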

Results
The classification model evaluation confirmed that EEG features from various time intervals significantly differentiate lies from truth, as shown by question-level cross-validation. Features from the post-response interval significantly outperformed those from the pre-response interval (P < 0.005), with the effective features achieving a performance improvement of 0.150 ± 0.007. Moreover, intervals covering the entire post-response period yielded the best results. Notably, skewness, kurtosis, zero crossing, and sample entropy effectively capture the non-linear, dynamic EEG changes associated with additional cognitive processing after answering, underscoring their potential as key neurophysiological indicators for lie detection.

Discussion
Using question-level CV, this study confirmed that several statistical and neurophysiological EEG features from the post-response interval significantly enhanced lie detection performance compared to those from the pre-response interval (P < 0.005). These findings suggest that subjects sustain tension and engage in extra cognitive processing after responding, producing distinct neural patterns of deception. Although the small sample size and use of question-level CV may limit generalizability, post-response EEG data provided more stable and reliable neural patterns. Future studies should use subject-level CV and further explore the optimal duration of the post-response interval.




Figure 1. Overall Structure of Lie Detection. This architecture employs data processing and feature extraction, followed by a multi-layer model that leverages Mamba and MoE.
Acknowledgements
This study was supported by the National Police Agency and the Ministry of Science, ICT & Future Planning (2024-SCPO-B-0130), the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00242528), and the National Program for Excellence in SW, supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation) in 2025 (2024-0-00018).
References
[1] J. Gao et al., “Brain Fingerprinting and Lie Detection: A Study of Dynamic Functional Connectivity Patterns of Deception Using EEG Phase Synchrony Analysis,” IEEE J Biomed Health Inform, vol. 26, no. 2, pp. 600–613, Feb. 2022, doi: 10.1109/JBHI.2021.3095415.
[2] A. Gu and T. Dao, “Mamba: Linear-Time Sequence Modeling with Selective State Spaces,” Dec. 2023, [Online]. Available: http://arxiv.org/abs/2312.00752
[3] N. Shazeer et al., “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer,” Jan. 2017, [Online]. Available: http://arxiv.org/abs/1701.06538
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P236: Arousal-driven parametric fluctuations augment computational models of dynamic functional connectivity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P236 Arousal-driven parametric fluctuations augment computational models of dynamic functional connectivity

Anagh Pathak1*, Demian Battaglia1

1Laboratoire de Neurosciences Cognitives et Adaptives, University of Strasbourg, France

*Email: a.pathak@unistra.fr


Introduction

Functional Connectivity (FC) quantifies statistical dependencies between brain regions but traditionally assumes stationarity. Dynamic Functional Connectivity (DFC) captures temporal fluctuations, offering insights into cognition and brain disorders [1]. However, DFC’s interpretation is debated, with concerns about neural vs. non-neural origins [2]. Arousal fluctuations, driven by neuromodulation, likely shape DFC. This study extends whole-brain models by incorporating time-varying neuromodulatory inputs, improving the replication of empirical DFC patterns. Findings suggest arousal plays a crucial role in DFC dynamics, refining our understanding of brain network organization.
Methods
The study analyzes resting-state fMRI data from 100 individuals in the Human Connectome Project [3]. Whole-brain models were built using structural connectivity data, employing two autonomous models: an oscillatory Stuart-Landau model [4] and a multistable Wong-Wang model [5]. A time-dependent modification, modeled as an Ornstein-Uhlenbeck process (tMFM), was introduced in the global excitability term. Dynamic Functional Connectivity (DFC), measured using a sliding-window approach, and DFC speeds served as the model-fitting targets. A Genetic Algorithm optimized model parameters by fitting simulated data to empirical observations, using statistical metrics (AIC/BIC) to compare model performance.
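A minimal sketch of the kind of mean-reverting excitability modulation described here, an Euler-Maruyama simulation of an Ornstein-Uhlenbeck process; the rate and noise constants are illustrative assumptions, with the step size matching the HCP repetition time of 0.72 s:

import numpy as np

def ornstein_uhlenbeck(n_steps, dt=0.72, theta=0.05, mu=0.0, sigma=0.02, seed=0):
    """Euler-Maruyama simulation of dx = theta*(mu - x)*dt + sigma*dW, a
    mean-reverting stand-in for the arousal-linked excitability term."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for k in range(n_steps - 1):
        x[k + 1] = (x[k] + theta * (mu - x[k]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Fluctuations to be added onto the global excitability parameter.
modulation = ornstein_uhlenbeck(1200)
print("std of excitability modulation:", round(float(modulation.std()), 4))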
Results
Dynamic Functional Connectivity (DFC) was analyzed in resting-state fMRI using a sliding window approach, revealing two distinct phenotypes: drift and pulsatile. The drift phenotype showed a gradual slowing of dynamics, while the pulsatile phenotype exhibited brief, well-defined epochs of slow events. Two modeling approaches were explored: the eMFM (noise-driven bistability) and the MOM model (metastable oscillatory dynamics). Both generated transient DFC but failed to fully capture empirical patterns. Introducing arousal-linked modulations in excitability (tMFM) significantly improved model fit, with linear drift capturing drift phenotypes and mean-reverting dynamics modeling pulsatile phenotypes.

Discussion
This study explores how incorporating time-varying parameters, specifically arousal-linked fluctuations, improves dynamic functional connectivity (DFC) modeling. Traditional models assume time-invariant dynamics, but evidence suggests cortical excitability varies with arousal. By integrating stochastic arousal terms into the eMFM framework (tMFM), we show that DFC is better captured as a time-dependent process. Compared to the oscillatory MOM model, tMFM more accurately reproduces empirical DFC patterns, though future work could extend MOM to include neuromodulatory influences. Additionally, linking DFC with pupillometry—an arousal proxy—could further refine models, offering deeper insights into neuromodulation, brain states, and cognition.



Acknowledgements
The authors acknowledge support from PEPR BHT, Fondation Vaincre Alzheimers, CNRS and the University of Strasbourg
References
1. https://doi.org/10.1016/j.neuroimage.2013.05.079
2. https://doi.org/10.1162/imag_a_00366
3. https://doi.org/10.1016/j.neuroimage.2016.05.062
4. https://doi.org/10.1038/s42005-022-00950-y
5. https://doi.org/10.1016/j.neuroimage.2014.11.001









Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P237: Modelling of ensemble of signals in single axons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P237 Modelling of ensemble of signals in single axons

Tanel Peets*1, Kert Tamm1, Jüri Engelbrecht1,2


1Department of Cybernetics, Tallinn University of Technology, Tallinn, Estonia
2Estonian Academy of Sciences, Tallinn, Estonia


*Email: tanel.peets@taltech.ee

Introduction
Since Hodgkin and Huxley’s classical works, it has become clear that nerve function is a richer phenomenon than just electrical action potentials (AP). Experimental observations demonstrate that electrical signals in nerve fibres are accompanied by mechanical and thermal effects [1,2,3]. These include the pressure wave (PW) in the axoplasm, the longitudinal wave (LW) in the biomembrane, the transverse displacement (TW) of the biomembrane, and temperature changes (θ). The whole nerve signal is, therefore, an ensemble of primary waves accompanied by the secondary components. The primary components (AP, LW, PW) are characterised by corresponding velocities, and the secondary components (TW, θ) are derived from the primary components and have no independent velocities of their own.
Methods
We present a coupled mathematical model [2] which unites the governing equations for the action potential, the pressure wave in the axoplasm and the longitudinal and the transverse waves in the surrounding biomembrane and corresponding temperature change into one system of equations. The electrical AP is the carrier of information and triggers all other processes. The main attention is on modelling effects accompanying the AP, therefore the AP itself is modelled by the simple FitzHugh-Nagumo model. Coupling effects are modelled by contact forces. The system of nonlinear partial differential equations is solved numerically making use of the pseudospectral method.
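The pseudospectral method mentioned above computes spatial derivatives in Fourier space; a minimal sketch of the core operation with a sanity check (the domain and test function are arbitrary choices):

import numpy as np

def spectral_derivative(u, L, order=1):
    """Pseudospectral spatial derivative on a periodic domain of length L:
    multiply by (i*k)**order in Fourier space and transform back."""
    k = 2 * np.pi * np.fft.fftfreq(len(u), d=L / len(u))
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(u)))

# Sanity check on u = sin(x): the first derivative should equal cos(x).
L, n = 2 * np.pi, 128
x = np.linspace(0, L, n, endpoint=False)
err = np.max(np.abs(spectral_derivative(np.sin(x), L) - np.cos(x)))
print("max error:", err)   # near machine precision: spectral accuracy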
Results
As a proof of concept, a simple dimensionless model based on the description of physical effects is described involving all the components of the signal. The results obtained by the numerical simulation match qualitatively well with experimentally measured ones.
Discussion

The model described in this contribution is an attempt to couple all the measurable effects of signal propagation in nerves (axons) into one system. The attention is not on a detailed description of the AP but on the possible accompanying mechanical and thermal effects and their coupling with each other. The governing equations for the elements of the ensemble stem from the laws of physics and form a consistent system. This is an interdisciplinary approach at the interface of physiology, physics, and mathematics [2].



Acknowledgements
This research was supported by the Estonian Research Council (PRG 1227). Jüri Engelbrecht acknowledges the support from the Estonian Academy of Sciences.
References
[1]https://doi.org/10.1016/S0006-3495(89)82902-9
[2]https://doi.org/10.1007/978-3-030-75039-8
[3]https://doi.org/10.1073/pnas.192003911
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P238: Identifying cortical learning algorithms using brain-machine interfaces
Tuesday July 8, 2025 17:00 - 19:00 CEST
P238 Identifying cortical learning algorithms using brain-machine interfaces

Sofia Pereira da Silva1,2, Denis Alevi1, Friedrich Schuessler*1,3, Henning Sprekeler*1,2,3


1 Modelling of Cognitive Processes, Technische Universität Berlin, Berlin, Germany
2 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
3 Science of Intelligence, Technische Universität Berlin, Berlin, Germany

Email: sofia@bccn-berlin.de
Introduction

By causally mapping neural activity to behavior [1], brain-machine interfaces (BMI) offer a means to study the dynamics of sensorimotor learning. Here, we investigate the neural learning algorithm monkeys use to adapt to a changed output mapping in a center-out reaching task [2]. We exploit the fact that the mapping from neural space (ca. 100 dimensions) to the 2D cursor position poses an underconstrained credit assignment problem [3], because changes along a large number of output-null dimensions do not influence the behavioral output. We hypothesized that different, but equally performing, learning algorithms can be distinguished by the changes they generate in output-null dimensions.

Methods
We combine computational modeling and data analysis to study the neural algorithms underlying learning in the BMI center-out task. We implement networks with three different learning rules (gradient descent, model-based feedback alignment, and reinforcement learning) and three distinct learning strategies (direct, re-aiming [4], remodeling [5]) in feedforward and recurrent architectures. The models' initial conditions are constrained using publicly available data from BMI experiments [6, 7, 8]. We train the models in cursor space and use linear regression to compare the resulting changes in neural space to the data.
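To illustrate how two of the named learning rules differ only in the feedback pathway, here is a sketch of a small feedforward network learning a hypothetical linear cursor mapping with vanilla feedback alignment, which replaces the transposed weights of gradient descent by a fixed random matrix; all sizes, the target map, and the learning rate are illustrative assumptions, not the authors' models:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 100, 50, 2                  # neural, hidden, cursor dims
W1 = rng.standard_normal((n_hid, n_in)) * 0.1
W2 = rng.standard_normal((n_out, n_hid)) * 0.1
B = rng.standard_normal((n_hid, n_out))          # fixed random feedback matrix
M = rng.standard_normal((n_out, n_in)) * 0.1     # hypothetical target BMI map

lr = 0.01
for step in range(2000):
    x = rng.standard_normal(n_in)                # a neural activity pattern
    h = np.tanh(W1 @ x)
    err = W2 @ h - M @ x                         # cursor-space error
    # Gradient descent would back-propagate err through W2.T; feedback
    # alignment replaces W2.T with the fixed random matrix B.
    delta_h = (B @ err) * (1 - h ** 2)
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(delta_h, x)

print("final cursor error norm:", round(float(np.linalg.norm(err)), 4))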


Results
We first verify that all implemented algorithms can learn the task in cursor space. In terms of neural activity, we find that various combinations of rules and architectures lead to changes in different low–dimensional subspaces. For instance, re-aiming is, by definition, constrained to a lower-dimensional subspace, so the neural activity changes across algorithms within this strategy are more similar than those in other strategies. Comparing the changes in neural activity and their subspaces with available data from BMI experiments points to learning as a combination of different algorithms. However, not all variance is explained by the algorithms, indicating additional changes outside the modeled subspaces.

Discussion
Bridging BMI experiments and population dynamics analyses creates a framework to study how learning unfolds in the brain. Our results suggest that monkeys employ a combination of previously suggested strategies to learn BMI tasks, involving both model-based and model-free learning. Future work should explore models with recurrent architectures further to better capture biological dynamics. Moreover, applying methods that describe the learning manifolds and trial-to-trial variability could offer interesting insights for comparing the models and data. Finally, comparing our findings with longitudinal datasets that monitor the learning process over time would be valuable for understanding how the learning dynamics progress.





Acknowledgements

References
1.https://doi.org/10.1016/j.conb.2015.12.005
2.https://proceedings.neurips.cc/paper_files/paper/2022/hash/a6d94c38506f16fb50894a5b555f2c9a-Abstract-Conference.html
3.https://doi.org/10.1371/journal.pcbi.1008621
4.https://doi.org/10.1101/2024.04.18.589952
5.https://doi.org/10.7554/eLife.10015
6.https://doi.org/10.1038/s41593-018-0095-3
7.https://doi.org/10.1038/s41593-021-00822-8
8.https://doi.org/10.7554/eLife.36774
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P239: Striatal endocannabinoid long-term potentiation mediates one-shot learning
Tuesday July 8, 2025 17:00 - 19:00 CEST
P239 Striatal endocannabinoid long-term potentiation mediates one-shot learning

Charlotte PIETTE1, Arnaud HUBERT2,3, Sylvie PEREZ1, Hugues BERRY2,3, Jonathan TOUBOUL4#, Laurent VENANCE1#


1Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology, Collège de France, CNRS, INSERM, Université PSL, 75005 Paris, France
2INRIA, Villeurbanne, France
3University of Lyon, LIRIS UMR5205, Villeurbanne, France
4Brandeis University, MA Waltham, USA

#: co-senior authors


Correspondence: laurent.venance@college-de-france.fr, jtouboul@brandeis.edu



Introduction

One-shot learning - the behavioral and neuronal mechanisms underlying the acquisition of a long-term memory after a unique and brief experience - is a crucial mechanism for developing adaptive responses, yet its neural correlates remain elusive (see for review: Piette et al., 2020). Here, we aimed to elucidate how changes in cortico-striatal dynamics contribute to one-shot learning. Considering that a brief exposure to a stimulus involves only a few spikes, and based on our earlier work uncovering a new form of endocannabinoid-dependent synaptic potentiation (eCB-LTP) induced by a very low number of temporally coupled cortical and striatal spikes (Cui et al., 2015 & 2016; Xu et al., 2018), we hypothesized that the endocannabinoid system could underlie striatal one-shot learning.




Methods

We first developed a one-shot learning test in which mice learn to avoid contact with an adhesive tape after a single exposure. We then used in vivo and ex vivo electrophysiological recordings in the striatum of behaving mice to probe cortico-striatal plasticity and the specific contribution of the endocannabinoid system. In addition, based on Neuropixels recordings of cortical and striatal neurons in vivo, we developed a mathematical model to test the induction of eCB-LTP. Finally, we tested the performance of transgenic mouse strains in which eCB-LTP is altered, and of mice in which local striatal infusion of drugs prevents either NMDA- or eCB-mediated plasticity.

Results
The “sticky tape avoidance test” proved an efficient one-shot learning test: following a single, short (< 20 seconds) uncomfortable contact with an adhesive tape, mice avoided further contact. We found that a cortico-striatal long-term synaptic potentiation emerged 24 h after short contacts with the tape. Furthermore, the detailed computational model of the cortico-striatal synapse predicted an increased occurrence of eCB-LTP induction events during contact. Indeed, ex vivo whole-cell patch-clamp recordings revealed an occlusion of eCB-LTP in mice shortly exposed to the sticky tape. In addition, we showed that eCB-LTP knock-out mice and AM251-infused mice exhibited impaired one-shot learning, while no significant difference was observed between D-AP5- and saline-infused mice.


Discussion
These multiple approaches demonstrate that eCBs underlie one-shot learning. Overall, these findings revisit the recently challenged view that the dorsolateral striatum is involved mostly in habit formation. For the first time, they outline the temporal and activity-dependent boundaries delineating the expression of a synaptic plasticity pathway within a learning paradigm. Such insights into the nature and roles of eCB-based plasticity will also offer keys to interpreting the wide array of functions of the eCB system.






Acknowledgements
We thank S. R. Datta and the Venance lab members for helpful suggestions and critical comments on the manuscript, Camille Chataing and Emma Idzikowkski for their help with the behavioral experiments at the one-month retrieval interval, and Yves Dupraz (CIRB micromechanics workshop) for building the arenas, cross-maze, and electrophysiology micromechanics.
References
1. Piette, C., Touboul, J., Venance, L. (2020). Engrams of fast learning. Front. Cell. Neurosci., 14. 10.3389/fncel.2020.575915
2. Cui, Y., et al. (2015). Endocannabinoids mediate bidirectional striatal spike-timing-dependent plasticity. J. Physiol., 593, 2833–2849. 10.1113/JP270324
3. Cui, Y., et al. (2016). Endocannabinoid dynamics gate spike-timing dependent depression and potentiation. eLife, 5:e13185. 10.7554/eLife.13185
4. Xu, H., et al. (2018). Dopamine-endocannabinoid interactions mediate spike-timing-dependent potentiation in the striatum. Nat. Commun., 9:4118. 10.1038/s41467-018-06409-5

Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P240: Deep brain stimulation restores information processing in parkinsonian cortical networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P240 Deep brain stimulation restores information processing in parkinsonian cortical networks

Charlotte Piette1,2, Sophie Ng Wing Tin3,4, Astrid De Liège5, Coralie Bloch-Queyrat6, Bertrand Degos1,5#, Laurent Venance1#, Jonathan Touboul2#

1Dynamics and Pathophysiology of Neuronal Networks Team, Center for Interdisciplinary Research in Biology, Collège de France, CNRS, INSERM, PSL University, 75005 Paris, France
2Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, MA Waltham, USA
3Service de Physiologie, Explorations Fonctionnelles et Médecine du Sport, Assistance Publique-Hôpitaux de Paris (AP-HP), Avicenne University Hospital, Sorbonne Paris Nord University, 93009 Bobigny, France
4Inserm UMR 1272, Sorbonne Paris Nord University, 93009 Bobigny, France
5Department of Neurology, Avicenne University Hospital, Sorbonne Paris Nord University, 93009 Bobigny, France
6Department of Clinical Research, Avicenne University Hospital, Assistance Publique-Hôpitaux de Paris (AP-HP), 93009, Bobigny, France


Corresponding authors: jtouboul@brandeis.edu ; charlotte_piette@hms.harvard.edu
Introduction

Parkinson’s disease (PD) is characterized by alterations of neural activity and information processing in the basal ganglia and cerebral cortex, including changes in excitability (Lindenbach and Bishop, 2013; Valverde et al., 2020) and abnormal synchronization (Goldberg et al., 2002) in the motor cortex of PD patients and PD animal models. Deep Brain Stimulation (DBS) provides an effective symptomatic treatment in PD but its mechanisms of action, enabling the restoration of efficient information transmission through cortico-basal ganglia circuits, remain elusive. Here, we developed a computational framework to test DBS impact on cortical network dynamics and information encoding depending on the network’s initial levels of excitability and synchronization.


Methods
We extended a computational model initially developed in our previous work (Valverde et al., 2020) to analyze the responses of a spectrum of cortical pathological networks, characterized by their level of activity and synchronization, to various input patterns. This way, we could compare their capacity to encode and transmit information before and after DBS stimulation. To further test the hypothesis that DBS positively impacts cortical information transmission in the clinic, we investigated whether PD treatment could improve the ability to predict movement from electroencephalograms of human parkinsonian patients (collected in the Neurology Department of Avicenne Hospital, Bobigny).
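A schematic sketch of the movement-decoding step on surrogate features; the feature construction, classifier choice, and cross-validation scheme are illustrative assumptions, not the study's pipeline:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Surrogate stand-in for the EEG decoding analysis: trials x features
# (e.g. band power per channel), labels = movement identity.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 64))
y = rng.integers(0, 2, 120)

# Cross-validated decoding accuracy; comparing DBS-on vs DBS-off recordings
# amounts to running this separately on each condition's trials.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")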



Results
We observed that DBS efficiently reduces the firing rate in a large spectrum of parkinsonian networks, and in doing so can decrease abnormal synchronization levels. In addition, DBS-mediated improvements of information processing were most pronounced in synchronized regimes. Interestingly, DBS efficiency was modulated by the configuration of the cortical circuit, such that optimal DBS parameters varied depending on the pathological cortical activity and connectivity profile. We further validated our hypothesis in the clinic and found that the accuracy of decoding movement identity from cortical dynamics was worse when DBS was turned off and correlated with the extent of drug treatment.



Discussion
Overall, this work highlights how DBS improves information encoding by resetting cortical networks into highly responsive states. Cortical networks therefore stand as a privileged target for alternative therapies and adaptive DBS. Our final experiments on human electrophysiology open new perspectives for adaptively tuning DBS parameters, based on clinically accessible measures of cortical information processing capacity.





Acknowledgements
We thank J.E. Rubin, P. Miller, and the members of the LV and JT laboratories for their helpful suggestions and critical comments. We thank the Service de Physiologie, Explorations Fonctionnelles et Médecine du Sport, Avicenne University Hospital, and the Clinical Research Unit of Avicenne University Hospital for making the EEG recordings possible.
References
1. Lindenbach, D., & Bishop, C. (2013). Critical involvement of the motor cortex in the pathophysiology and treatment of Parkinson's disease. Neurosci. & Biobehavioral Rev., 37(10), 2737–2750.
2. Valverde, S., et al. (2020). Deep brain stimulation-guided optogenetic rescue of parkinsonian symptoms. Nat. Comm., 11(1), 2388.
3. Goldberg, J. A., et al. (2002). Enhanced synchrony among primary motor cortex neurons in the MPTP primate model of Parkinson's disease. J. Neurosci., 22(11), 4639–4653.
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P241: Parameter Estimation in Differentiable Whole Brain Networks: Methodological Explorations and Practical Limitations
Tuesday July 8, 2025 17:00 - 19:00 CEST
P241 Parameter Estimation in Differentiable Whole Brain Networks: Methodological Explorations and Practical Limitations

Marius Pille* ¹ ², Emilius Richter¹ ², Leon Martin¹ ², Dionysios Perdikis¹ ², Michael Schirner¹ ² ³ ⁴ ⁵, Petra Ritter¹ ² ³ ⁴ ⁵

¹ Berlin Institute of Health (BIH) at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
² Department of Neurology with Experimental Neurology, Charité, Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Charitéplatz 1, 10117, Berlin, Germany
³ Bernstein Focus State Dependencies of Learning and Bernstein Center for Computational Neuroscience, 10115, Berlin, Germany
⁴ Einstein Center for Neuroscience Berlin, Charitéplatz 1, 10117, Berlin, Germany
⁵ Einstein Center Digital Future, Wilhelmstraße 67, 10117, Berlin, Germany

*Email: marius.pille@bih-charite.de
Introduction

Connectome-based brain network modelling, facilitated by platforms like The Virtual Brain (TVB), has significantly advanced computational neuroscience by providing a framework to decipher the intricate dynamics of the brain. However, existing techniques for inferring physiological parameters from neuroimaging data, such as functional magnetic resonance imaging, magnetoencephalography and electroencephalography, are often constrained by computational costs, developer effort, and limited available data, blocking translation [1].

Methods

Differentiable Models [2] address these limitations by enabling the application of state-of-the-art parameter estimation techniques from machine learning, particularly the family of stochastic gradient descent optimizers. We reformulated brain network models using highly optimized differentiable libraries, creating generalized, composable building blocks for complex modeling problems. This approach was tested across different types of neural mass models with various neuroimaging data types, to demonstrate advantages and limitations.
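As a toy illustration of the approach, the sketch below fits the global coupling of a two-node rate model to a target functional connectivity by stochastic gradient descent through the unrolled simulation (here with PyTorch autograd; the model, target, and optimizer settings are illustrative assumptions, not the authors' framework):

import torch

SC = torch.tensor([[0.0, 1.0], [1.0, 0.0]])          # toy structural connectivity
g = torch.tensor(0.1, requires_grad=True)            # global coupling to fit
opt = torch.optim.Adam([g], lr=0.05)
target_fc = torch.tensor([[1.0, 0.6], [0.6, 1.0]])   # assumed empirical FC

for step in range(200):
    torch.manual_seed(0)                 # frozen noise keeps the loss smooth
    x = torch.zeros(2)
    xs = []
    for _ in range(300):                 # Euler integration, dt = 0.1
        x = x + 0.1 * (-x + g * SC @ torch.tanh(x)) + 0.1 * torch.randn(2)
        xs.append(x)
    fc = torch.corrcoef(torch.stack(xs).T)
    loss = ((fc - target_fc) ** 2).sum() # fit simulated FC to the target
    opt.zero_grad()
    loss.backward()
    opt.step()

print("fitted coupling g:", g.item())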

Results

Our differentiable framework demonstrates performance improvements of one to two orders of magnitude compared to classical TVB implementations, with the added benefit of easy parallelization across devices like GPUs. By leveraging a computational knowledge base for brain simulation [3], our approach preserves flexibility while accommodating diverse neural mass models. We established documented workflows for the most common modeling problems, building from low to high complexity, to enhance accessibility. Limitations of differentiable models, where the proximity to bifurcation points can lead to unstable gradients, are explored and potential solutions are proposed, drawing from the field of classical neural networks [4].

Discussion

This work aims to contribute to the translation of brain network models from foundational research to clinical applications by addressing existing roadblocks [1]. By creating reusable, composable components rather than specific solutions, we provide a versatile framework that can adapt to diverse research questions. The significant performance improvements enable more complex hypotheses to be tested and potentially bring computational neuroscience tools closer to practical clinical implementation.





Acknowledgements
I would like to express my sincere gratitude to my supervisors for their continuous feedback and valuable advice on this work. Special thanks to Petra Ritter for her guidance and for providing all the necessary resources that made this research possible.
References
[1] Fekonja, L. S. et al. (2025). Translational network neuroscience: Nine roadblocks and possible solutions. Network Neuroscience, 1–19. doi.org/10.1162/netn_a_00435
[2] Sapienza, F. et al. (2024). Differentiable Programming for Differential Equations: A Review. arXiv. arxiv.org/abs/2406.09699
[3] Martin, L. et al. (in preparation). The Virtual Brain Ontology: A computational knowledge space generating reproducible models of brain network dynamics.
[4] Pascanu, R. et al. (2013). On the difficulty of training Recurrent Neural Networks. arXiv. doi.org/10.48550/arXiv.1211.5063
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P242: Cytoelectric coupling: How electric fields tune Hebb’s cell assemblies
Tuesday July 8, 2025 17:00 - 19:00 CEST
P242 Cytoelectric coupling: How electric fields tune Hebb’s cell assemblies

Dimitris A. Pinotsis1,2, Earl K. Miller2


1Department of Psychology, City St George's, University of London, London EC1V 0HB, United Kingdom
2 The Picower Institute for Learning & Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA


*Email: pinotsis@mit.edu

Introduction

Hebb introduced cell assemblies in his seminal work about 70 years ago. Today, cell assemblies are thought to describe groups of neurons coactivated when a certain memory, thought or percept is stored or processed. Here, we consider electric fields generated by cell assemblies.

Methods
We analyzed local field potentials (LFPs) recorded during a working memory task. These were obtained using high-resolution, multi-electrode arrays and allow one to capture details of neural activity at the microscopic level. During the task, the animals were shown a dot in one of six positions on the edge of a screen, which would then go blank. After the delay period, the animals saccaded to the position they had just seen marked. Using deep neural networks and biophysical modeling, we obtained the latent space associated with each memory. This allowed us to reconstruct the effective connectivity between different neuronal populations within the patch. Using a dipole model from electromagnetism, we predicted the electric field.
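The dipole forward model referred to here evaluates the potential of an equivalent current dipole in a homogeneous conducting medium; a minimal sketch, with the dipole moment and conductivity chosen as illustrative, physiologically plausible values:

import numpy as np

def dipole_potential(p, r_dipole, r_obs, sigma=0.3):
    """Potential of a current dipole in a homogeneous conductive medium:
    V = p . r_hat / (4*pi*sigma*r**2), with sigma ~0.3 S/m for grey matter."""
    d = np.asarray(r_obs, float) - np.asarray(r_dipole, float)
    r = np.linalg.norm(d)
    return np.dot(p, d / r) / (4 * np.pi * sigma * r ** 2)

# A population summarized by one equivalent 1 nA*m dipole, sampled 1 mm away:
p = np.array([0.0, 0.0, 1e-9])           # illustrative dipole moment (A*m)
v = dipole_potential(p, [0, 0, 0], [0, 0, 1e-3])
print(f"potential at 1 mm: {v * 1e3:.3f} mV")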
Results
We show that the electric fields generated by cell assemblies are more stable and reliable than neural activity. Fields appear to contain more information and to vary less across trials where the same memory was maintained. We suggest that the stability underlying memory maintenance is achieved at the level of the electric field. This is ‘above’ the brain, but still ‘of’ the brain. The field could direct the activity of participating neurons.
Discussion
Our analyses suggest that electric fields generated by neurons are causal down to the level of the cytoskeleton. Ephaptic coupling organizes neural activity, forming neural ensembles and low dimensional representations at the macroscale level. We suggest that this can go all the way down to the molecular level to stabilize and tune the cytoskeleton for efficient information processing. We call this the Cytoelectric Coupling hypothesis.



Acknowledgements
This work is supported by UKRI (ES/T01279X/1), Office of Naval Research (N00014-22-1-2453), The JPB Foundation, and The Picower Institute for Learning and Memory.
References
Pinotsis, D. A., & Miller, E. K. (2022). Beyond dimension reduction: Stable electric fields emerge from and allow representational drift. NeuroImage, 253, 119058.


Pinotsis, D. A., & Miller, E. K. (2023). In vivo ephaptic coupling allows memory network formation. Cerebral Cortex, 33(17), 9877-9895.


Pinotsis, D. A., Fridman, G., & Miller, E. K. (2023). Cytoelectric coupling: Electric fields sculpt neural activity and “tune” the brain’s infrastructure. Progress in Neurobiology, 226, 102465.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P243: Hierarchical fluctuation scales in whole-brain resting activity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P243 Hierarchical fluctuation scales in whole-brain resting activity

Adrián Ponce-Alvarez1,2,3*

1Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain.
2Institut de Matemàtiques de la UPC - Barcelona Tech (IMTech), Barcelona, Spain.
3Centre de Recerca Matemàtica, Barcelona, Spain.


*Email: adrian.ponce@upc.edu
Introduction

Brain activity fluctuates at different timescales across regions, with higher-order areas exhibiting slower dynamics than sensory regions [1]. Connectivity and local properties shape this hierarchy: spine density and synaptic gene expression gradients correlate with timescales [2–4], while strongly connected regions exhibit slower dynamics [5].
Beyond temporal features, signal variability has been linked to aging [6], brain states [7], disorders [8], and tasks [9]. However, whether spontaneous activity variance is hierarchically organized remains unknown.
This work analyses the relation between timescales, variances, and connectivity using human f/dMRI data, while exploring the mechanisms through connectome-based whole-brain models.
Methods
Publicly available data from the Human Connectome Project was used, consisting of connectome matrices and resting-state (rs) fMRI signals from 100 subjects across 3 parcellations. For each ROI, the average variance of the rs-fMRI signal, the node’s strength of the connectome, and the autocorrelation function (ACF) were calculated.

To model the variance and temporal scales of resting-state fluctuations, two commonly used whole-brain models were studied here, namely the Hopf and the Wilson-Cowan models. These models use the brain's connectome to couple local nodes displaying noise-driven oscillations, with intrinsic dynamics either homogeneous or constrained by the T1w/T2w macroscopic gradient.
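A minimal sketch of the autocorrelation-based timescale estimate on AR(1) surrogates (the 1/e criterion and the fMRI-like sampling step are illustrative choices); note that in this univariate toy both variance and timescale grow with the self-coupling, whereas the network results below dissociate the two:

import numpy as np

def acf_timescale(x, dt=0.72, max_lag=50):
    """Crude intrinsic-timescale estimate: first lag where the autocorrelation
    function drops below 1/e, converted to seconds."""
    x = x - x.mean()
    acf = [np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, max_lag)]
    below = [k for k, a in enumerate(acf, start=1) if a < 1 / np.e]
    return below[0] * dt if below else np.inf

# AR(1) surrogates: a stronger self-coupling phi yields a slower timescale.
rng = np.random.default_rng(0)
for phi in (0.5, 0.9):
    x = np.zeros(2000)
    for k in range(1999):
        x[k + 1] = phi * x[k] + rng.standard_normal()
    print(f"phi = {phi}: timescale ~ {acf_timescale(x):.2f} s, "
          f"variance = {x.var():.2f}")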
Results
Results show that while more connected brain regions have longer timescales, their activity fluctuations exhibit lower variance. Using the Hopf and Wilson-Cowan models, we found that variance and timescales can relate oppositely to connectivity within specific regions of the models' parameter space, even when all nodes have the same intrinsic dynamics, but also when intrinsic dynamics are constrained by the myelination-related macroscopic gradient. These findings suggest that connectivity and network state alone can explain regional differences in fluctuation scales. Ultimately, timescale and variance hierarchies reflect a balance between stability and responsivity, with faster, greater responsiveness at the periphery and robustness at the core.
Discussion
This study shows that the variance of fluctuations is hierarchically organized but, in contrast to timescales, decreases with structural connectivity. Whole-brain models show that the hierarchies of timescales and variances jointly emerge within specific parameter regions, indicating a state-dependence that could serve as a biomarker for different behavioral, vigilance, or conscious states, and for neuropsychiatric disorders. Finally, in line with previous works on principles of core-periphery network structures [10–12], these hierarchies link to the responsivity of different network parts, with greater and faster responsiveness at the network periphery and more stable dynamics at the core, achieving a balance between stability and responsiveness.



Acknowledgements
A.P-A. is supported by the Ramón y Cajal Grant RYC2020-029117-I funded by MICIU/AEI/10.13039/501100011033 and "ESF Investing in your future". This work is supported by the Spanish State Research Agency, through the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M).
References
1.https://doi.org/10.1038/nn.3862
2.https://doi.org/10.1093/cercor/bhg093
3.https://doi.org/10.1016/j.neuron.2015.09.008
4.https://doi.org/10.1038/s41593-018-0195-0
5.https://doi.org/10.1162/netn_a_00151
6.https://doi.org/10.1523/JNEUROSCI.5641-10.2011
7.https://doi.org/10.1098/rsif.2013.0048
8.https://doi.org/10.1371/journal.pcbi.1012692
9.https://doi.org/10.1523/JNEUROSCI.2922-12.2013
10.https://doi.org/10.1038/nrg1471
11.https://doi.org/10.1093/comnet/cnt016
12.https://doi.org/10.1098/rstb.2014.0165
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P244: Multi-network Modeling of Parkinson’s Disease: Bridging Dopaminergic Modulation and Vibrotactile Coordinated Reset Therapy
Tuesday July 8, 2025 17:00 - 19:00 CEST
P244 Multi-network Modeling of Parkinson’s Disease: Bridging Dopaminergic Modulation and Vibrotactile Coordinated Reset Therapy

Mariia Popova*1, Fatemeh Sadeghi2, Simone Zittel2, Claus C Hilgetag1,3

1Institute of Computational Neuroscience, Hamburg Center of Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg University, Hamburg, Germany
2Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
3Center for Biomedical AI, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

*Email: m.popova@uke.de
Introduction

Computational models of Parkinson’s disease (PD) play an important role in understanding the complex neural mechanisms underlying motor symptoms, such as tremor, and in assessing novel treatment interventions. According to the “finger-dimmer-switch (FDS)” theory, tremor originates within the basal ganglia–thalamo-cortical (BTC) network, but subsequently spreads to the cerebello–thalamo-cortical (CTC) network through excessive inter-network synchronization [1]. One approach to manage severe PD tremor is to use deep brain stimulation (DBS). Recently, a new non-invasive approach of vibrotactile coordinated reset (vCR) stimulation was proposed as an alternative to DBS [2]. Here, we aimed to explore how vCR affects tremor in a computational model.
Methods
Building on the FDS, we developed a multi-network FDS model encompassing 700 neurons across 11 regions within the BTC, CTC, and thalamic networks. By modulating dopaminergic synaptic connections, we simulated the transition from a healthy state to a Parkinsonian state. Further adjustments of self-inhibition in thalamic nuclei drove tremor onset and offset.
Results
As hypothesized, dopaminergic restoration significantly reduced tremor amplitude and reinforced the thalamus as a pivotal hub for stabilizing neuronal activity. Next, we incorporated a variant of the model featuring spike-timing-dependent plasticity (STDP) to investigate vCR stimulation, a noninvasive therapy that applies patterned tactile pulses to disrupt pathological network synchronization. In line with previous theoretical findings, our simulations showed that vCR not only attenuated excessive beta-band oscillations but also unlearned maladaptive plasticity via STDP, suggesting a broader corrective effect on dysfunctional motor circuitry than dopaminergic interventions alone.
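As a rough illustration of the stimulation pattern referenced above, the snippet below generates a simplified coordinated-reset schedule in which each cycle delivers pulse bursts to the stimulation sites in a freshly shuffled order. All parameter values (site count, CR frequency, burst structure) are placeholders; the clinical vCR parameters in [2] differ.

    import numpy as np

    def vcr_schedule(n_sites=4, cr_freq_hz=1.5, pulses_per_burst=3,
                     pulse_rate_hz=100.0, duration_s=10.0, rng=None):
        """Simplified coordinated-reset schedule: each CR cycle stimulates the
        sites in a freshly shuffled order, one pulse burst per site."""
        rng = rng or np.random.default_rng()
        cycle = 1.0 / cr_freq_hz
        slot = cycle / n_sites                  # time slot allotted to each site
        events = []                             # (time in s, site index)
        t0 = 0.0
        while t0 < duration_s:
            for k, site in enumerate(rng.permutation(n_sites)):
                for p in range(pulses_per_burst):
                    events.append((t0 + k * slot + p / pulse_rate_hz, int(site)))
            t0 += cycle
        return events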
Discussion
These findings highlight the capacity of in silico models to guide therapeutic strategies, demonstrating that vCR may be of use in managing PD symptoms. Consequently, the parameter specifications of vCR should be investigated further in theoretical and clinical studies, as it may reduce patients’ reliance on pharmacological and surgical treatments.



Acknowledgements
This study was funded by the EU project euSNN (MSCAITN-ETN H2020-860563) and Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—SFB 936—Project-ID 178316478-A1/Z3.
References
[1]https://doi.org/10.1016/j.nbd.2015.10.009
[2]https://doi.org/10.4103/1673-5374.329001


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P245: Computational modeling of neural signal disruptions to predict multiple sclerosis progression
Tuesday July 8, 2025 17:00 - 19:00 CEST
P245 Computational modeling of neural signal disruptions to predict multiple sclerosis progression

Vishnu Prathapan*1, Peter Eipert1, Markus Kipp2, Revathi Appali3,4, Oliver Schmitt1,2
1Medical School Hamburg University of Applied Sciences and Medical University, Am Kaiserkai 1, 20457, Hamburg, Germany
2Department of Anatomy, University of Rostock Gertrudenstr 9, 18057, Rostock, Germany
3Institute of General Electrical Engineering, University of Rostock, Albert-Einstein-Straße 2, 18059, Rostock, Germany

4Department of Aging of Individuals and Society, Interdisciplinary Faculty, University of Rostock, Universitätsplatz 1, 18055, Rostock, Germany
*Email: vishnupratapan@gmail.com

Introduction
A computational approach is proposed to overcome the limitations of existing methods in predicting Multiple Sclerosis (MS) progression. MS is marked by myelin sheath disruption, impairing neuronal signal transmission and leading to neurodegeneration and functional decline. Predicting MS progression is challenging due to disease heterogeneity, limited longitudinal data, small sample sizes, and data inconsistencies. Current models rely on static biomarkers, failing to capture dynamic interactions between immune responses, neurodegeneration, and remyelination. Furthermore, the absence of personalized models and challenges in integrating multimodal data hinder early intervention and treatment optimization [1].
Methods
This study analyzes dynamic network changes in response to localized disturbances, offering deeper insights into MS disease progression. The Izhikevich neuron model [2] is used for its computational efficiency, scalability, and ability to simulate diverse neuronal firing patterns relevant to specific brain regions. A myelin-based delay quotient, adapted from prior research [3,4], models the demyelination and remyelination effects observed in MS. The model is validated using varied conduction values, connection weights, and nodal lengths in a three-node configuration before extending to complex networks. Finally, interconnected neuronal modules representing distinct brain regions are simulated to replicate MS conditions.
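A minimal sketch of the two ingredients named above: the standard Izhikevich (2003) update [2], plus a hypothetical helper showing how a myelin-based quotient could scale conduction delay. The delay rule here is illustrative only; the abstract's actual quotient follows [3,4].

    import numpy as np

    def izhikevich_spikes(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
        """Spike times (ms) of an Izhikevich regular-spiking neuron driven by I(t)."""
        v, u = -65.0, b * -65.0
        spikes = []
        for step, I_t in enumerate(I):
            v += dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I_t)
            u += dt * a * (b * v - u)
            if v >= 30.0:                       # spike: reset v, bump recovery u
                spikes.append(step * dt)
                v, u = c, u + d
        return spikes

    def conduction_delay(base_delay_ms, myelin_quotient):
        """Hypothetical scaling: delay grows as the myelin quotient falls
        (1 = intact sheath, values toward 0 = progressive demyelination)."""
        return base_delay_ms / max(myelin_quotient, 1e-3)

For example, izhikevich_spikes(10.0 * np.ones(2000)) yields sustained regular spiking.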
Results
Signal propagation patterns are analyzed by altering myelin-based conduction delay parameters at specific nodes, with results compared against a control model. As expected, conduction deficits significantly impact network dynamics, illustrating how neuronal signaling adapts to disease-induced disruptions.
Discussion
This model could provide insights into MS progression by capturing evolving network disruptions when applied to a connectome. This computational approach holds promise as a foundation for predictive clinical tools, supporting early diagnosis and treatment strategies. This study offers a novel perspective on MS progression and potential therapeutic interventions by integrating dynamic network modelling with biological mechanisms.




Acknowledgements
The authors thank the University of Rostock, and the Medical School Hamburg University of Applied Sciences and Medical University for institutional support.
References
1. Prathapan, V., Eipert, P., Wigger, N., Kipp, M., Appali, R., & Schmitt, O. (2024). Modeling and simulation for prediction of multiple sclerosis progression: A review and perspective. Computers in Biology and Medicine, 108416.
2. Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on neural networks, 14(6), 1569-1572.
3. Kannan, V., Kiani, N. A., Piehl, F., & Tegner, J. (2017). A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis. Mathematical Biosciences, 289, 1-8.
4. https://doi.org/10.1371/journal.pcbi.1010507
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P246: A Computational Pipeline for Simulating Mouse Visual Cortex Microcircuits with Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P246 A Computational Pipeline for Simulating Mouse Visual Cortex Microcircuits with Spiking Neural Networks

Margherita Premi*1, Carlo Andrea Sartori1, Giancarlo Ferrigno1, Alessandra Pedrocchi1, Fiorenzo Artoni1, Alberto Antonietti1
1NeuroEngineering and Medical Robotics Laboratory - Department of Electronics, Information and Bioengineering - Politecnico di Milano, Milan, Italy
*E-mail: margherita.premi@polimi.it

Introduction
To integrate in vitro methodologies with in silico techniques for investigating brain development and neural circuit interactions, we are preparing a computational pipeline that recreates Brain-on-Chip (BoC) [1] systems with spiking neural networks. We leveraged the MICrONs dataset [2], which provides detailed reconstructions of neurons and astrocytes, with their connections, in a cubic millimeter of mouse visual cortex. The dataset presents significant challenges for computational modeling, particularly regarding the quality and quantity of the automatically identified synapses. In this work, we establish a pipeline for transforming raw data into functional spiking neural networks that accurately represent cortical microcircuits.

Methods
The MICrONs dataset showed critical limitations: insufficient synapses and incorrect morphological attributions.
Two solutions were implemented:

● Synapse enhancement through cloning, generating a cluster of synapses placed in a sphere centered on the original synapse (see the sketch after this list). The new synapses are validated through layer density analyses [3].
● Synapse attribution improvement, using proofread astrocytes to establish connectivity patterns for non-proofread cells. For neurons, templates from proofread synapses serve as models for non-proofread neurons.
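The cloning step reduces to sampling points inside a sphere around each original synapse (the Results below report 10 clones within a 10 μm radius). A minimal sketch; the uniform-in-volume sampling scheme is an assumption:

    import numpy as np

    def clone_synapses(center, n_clones=10, radius_um=10.0, rng=None):
        """Clone one synapse into a cluster sampled uniformly from a sphere
        centered on the original location (coordinates in um)."""
        rng = rng or np.random.default_rng()
        v = rng.normal(size=(n_clones, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)     # random unit directions
        r = radius_um * rng.random(n_clones) ** (1 / 3)   # uniform-in-volume radii
        return np.asarray(center) + v * r[:, None]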


The framework incorporates layer-specific connectivity with bidirectional astrocyte-neuron interactions. Comparisons were made with networks having the same neurons but different connectivity [4].

Results
Our synapse enhancement method generated clusters of 10 synapses placed in spheres with a 10 μm radius, centered on the original synapses. This successfully increased the overall synapse count while maintaining layer-specific patterns. A geometric approach was developed that defines minimum ellipsoidal domains containing all synapses belonging to each proofread astrocyte [5]. These ellipsoid representations served as spatial patterns for non-proofread astrocytes. For neurons, template-based attribution from proofread synapses increased the accuracy of connection identification.
Layer-specific connectivity analysis demonstrated that our reconstructed network successfully preserved the characteristic connection patterns across cortical layers (Fig1).
Discussion
This work addresses the identified limitations in using the MICrONs dataset. The developed methods correct connectivity data, enabling more accurate modeling of cortical microcircuits. The approach preserves the connections and layer-specific organization unique to the MICrONs dataset. This network is then imported and simulated as a spiking neural model to generate biologically realistic activity. The framework also allows testing alternative network architectures (e.g., random, small-world) against the accurate structural connectivity. Future work will refine astrocyte-neuron interaction models. These methodologies could then be applied to BoC experimental data, further validating the computational approaches.




Figure 1. A. Enhanced synaptic density distribution across cortical depth. B. Astrocyte influence zones represented as ellipsoidal regions, each containing associated synapses. C. Functional connectivity diagram of the reconstructed microcircuit showing layer-specific connections and bidirectional signaling with astrocytes.
Acknowledgements
This work is part of the Extended Partnership "A multiscale integrated approach to the nervous system in health and disease" (MNESYS), funded by the European Union - Next Generation EU under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment 1.4, Project PE00000006, CUP E63C22002170007, Spoke 3 "Neuronal Homeostasis and brain-environment interaction".
References
1. https://doi.org/10.1063/5.0121476
2. https://doi.org/10.1101/2021.07.28.454025
3. https://doi.org/10.1523/JNEUROSCI.0090-23.2023
4. https://doi.org/10.1101/2024.11.18.624135
5. https://github.com/rmsandu/Ellipsoid-Fit
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P247: Basic organization of spinal locomotor network derived from hindlimb design and locomotor demands
Tuesday July 8, 2025 17:00 - 19:00 CEST
P247 Basic organization of spinal locomotor network derived from hindlimb design and locomotor demands

Boris I. Prilutsky*1, S. Mohammadali Rahmati#1, Sergey N. Markin2, Natalia A. Shevtsova2, Alain Frigon3, Ilya A. Rybak2, Alexander N. Klishko#1

1School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
2Department of Neurobiology and Anatomy, Drexel University, Philadelphia, PA, USA
3Department of Pharmacology-Physiology, University de Sherbrooke, Sherbrooke, QC, Canada

*Email: boris.priutsky@ap.gatech.edu
#These authors contributed equally to this work


Introduction

One of the core principles of sensorimotor physiology is that the musculoskeletal system and its neural control have coevolved to satisfy behavioural demands. Therefore, it may be possible to derive the organization of the neural control of a motor behaviour (e.g., locomotion) from its mechanical demands and properties of the musculoskeletal system. The goals of this study were to (1) determine activity patterns of cat hindlimb muscles from locomotor demands of walking, (2) determine muscle synergies from the predicted and recorded muscle activity patterns and (3) propose a spinal locomotor network organization based on the derived muscle synergies.

Methods
We defined locomotor demands as patterns of resultant moments of force at hindlimb joints generating walking kinematics. To determine the locomotor demands, we computed the resultant muscle moments (using motion capture and methods of inverse dynamics) and muscle activations producing the moments and minimizing muscle fatigue using optimization. We then derived muscle synergies using the non-negative matrix factorization from the computed and recorded activities. We constructed a rhythm generation and pattern formation network of a spinal central pattern generator (CPG) from the derived muscle synergies and incorporated it into our neuromechanical model of spinal hindlimb locomotion.
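The synergy-extraction step is standard non-negative matrix factorization; a minimal scikit-learn sketch (the five-component default mirrors the two flexor plus three extensor synergies reported below; the study's actual factorization settings are not specified):

    import numpy as np
    from sklearn.decomposition import NMF

    def muscle_synergies(emg, n_synergies=5):
        """Factor an EMG envelope matrix (muscles x time) into synergy weights W
        (muscles x synergies) and activation patterns H (synergies x time)."""
        model = NMF(n_components=n_synergies, init='nndsvd', max_iter=2000)
        W = model.fit_transform(np.maximum(emg, 0.0))   # NMF needs non-negative input
        H = model.components_
        return W, H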

Results
Locomotor activity patterns of hindlimb muscles obtained from hindlimb musculoskeletal properties and locomotor demands demonstrated a close agreement with the recorded activity patterns. Muscle synergies and their activation patterns derived from the predicted and measured hindlimb muscle activations were similar and consisted of two flexor and three extensor synergies. We used the revealed muscle synergies to construct a spinal CPG and incorporated it into a neuromechanical model of cat hindlimb locomotion. Computer simulations of locomotion demonstrated realistic locomotor mechanics and activity patterns.

Discussion
We demonstrated that hindlimb musculoskeletal properties and locomotor demands (desired resultant joint moments and minimization of muscle fatigue) can predict hindlimb muscle activation patterns, muscle synergies and a general organization of the CPG. The predicted and recorded muscle activations had the following features: (i) reciprocal activation of antagonists, (ii) concurrent activation of agonists and (iii) dependence of activity of two-joint muscles on functional demands. These muscle activation features are typical for many motor reflexes, automatic and highly skilled motor behaviours and suggest that all these behaviours minimize muscle fatigue and have a common organization of spinal circuitries.




Acknowledgements
This work was supported by US National Institutes of Health grants HD032571 and NS110550.
References
No references
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P248: The Neuron as a Self-Supervised Rectified Spectral Unit (ReSU)
Tuesday July 8, 2025 17:00 - 19:00 CEST
P248 The Neuron as a Self-Supervised Rectified Spectral Unit (ReSU)

Shanshan Qin*1, Joshua Pughe-Sanford1, Alex Genkin1, Pembe Gizem Özdil2, Philip Greengard1, Dmitri B. Chklovskii*1,3

1Center for Computational Neuroscience, Flatiron Institute, New York City, United States
2EDRS Doctoral Program, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
3Neuroscience Institute, New York University Grossman School of Medicine, New York City, United States

*Email: qinss.pku@gmail.com; mitya@flatironinstitute.org


Introduction
Advances in synapse-level connectomics [1, 2, 3] and neuronal population activity imaging [4] necessitate neuronal models capable of integrating effectively with these rich datasets. Ideally, these models should offer greater biological realism and interpretability than rectified linear unit (ReLU) networks trained via error backpropagation [5], while avoiding the complexity and parameter intensity of detailed biophysical models [6]. Here, we propose a self-supervised multi-layer neuronal network employing identical learning rules across layers, progressively capturing more complex and abstract features, similar to the Drosophila visual system, for which both neuronal responses and connectomics data are available.


Methods
We introduce the Rectified Spectral Unit (ReSU), a neuron model that rectifies projections of its input onto a singular vector of the whitened covariance matrix between past and future inputs. Representing singular vectors corresponding to the largest singular values in each layer effectively maximizes predictive information [7, 8]. We construct a two-layer ReSU network trained self-supervisedly on translating natural scenes. Inspired by the Drosophila visual system, each first-layer neuron receives input exclusively from one pixel [1], while second-layer neurons integrate inputs potentially from the entire first-layer population. Post-training, we compare the network’s neuronal responses and synaptic weights with empirical results.
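A schematic of the construction just described: whiten past and future input windows, take singular vectors of their cross-covariance, and rectify the projection of the past window onto a chosen vector. The windowing and whitening details below are assumptions for illustration, not the authors' exact procedure.

    import numpy as np

    def resu_filters(X, lag=10, k=2):
        """Top-k past-space singular vectors of the whitened past-future
        cross-covariance of a scalar signal X; used as temporal filters."""
        T = len(X) - 2 * lag
        past = np.stack([X[t:t + lag] for t in range(T)])
        future = np.stack([X[t + lag:t + 2 * lag] for t in range(T)])

        def whiten(A):
            A = A - A.mean(axis=0)
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return np.sqrt(len(A)) * (U @ Vt)     # unit-covariance version of A

        C = whiten(past).T @ whiten(future) / T
        U, _, _ = np.linalg.svd(C)
        return U[:, :k]

    def resu_response(past_window, filt):
        """A ReSU's output: the rectified projection of its input onto a filter."""
        return np.maximum(0.0, past_window @ filt)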
Results
First-layer ReSU neurons learned temporal filters closely matching responses observed in Drosophila visual neurons: specifically, the first singular vector matched the linear filter of the L3 neuron, while the second singular vector corresponded to linear filters of L1 (ON) and L2 (OFF) neurons [9]. Additionally, these learned filters adapted their shapes according to signal-to-noise ratios, consistent with experimental findings [10]. Second-layer ReSUs aligned with the second singular vector developed motion-selective responses analogous to Drosophila T4 cells [11], and the synaptic weights learned by these neurons closely resembled those documented in T4 connectomic data [2] (Fig. 1).
Discussion
ReSU networks exhibit significant advantages, including simplicity, robustness, interpretability, and biological plausibility. Rectification within ReSUs functions as a form of dynamic clustering, enabling transitions between distinct linear dynamical regimes. Our findings indicate that self-supervised multi-layer ReSU networks trained on natural scenes faithfully reproduce critical aspects of biological sensory processing. Consequently, our model provides a promising foundation for large-scale, interpretable simulations of hierarchical sensory processing in biological brains.



Figure 1. (a) Neurons learn to predict future input and output the (rectified) latent variable. (b) The fly ON motion detection pathway. (c) Responses of neurons to stepped luminance stimulus. (d) T4 response to a moving grating. (e) Temporal filter adaptation to the input SNR. (f) Spatial filter obtained by SVD of L1-L3 output approximates the weights of synapses impinging onto T4a in Drosophila (b).
Acknowledgements
We thank Charles Epstein, Anirvan M. Sengupta and Jason Moore for helpful discussion.
References
[1]https://doi.org/10.1016/j.cub.2013.12.012
[2]https://doi.org/10.7554/eLife.24394
[3]https://doi.org/10.1016/j.cub.2023.09.021
[4]https://doi.org/10.7554/eLife.38173
[5]https://doi.org/10.1038/s41586-024-07939-3
[6]https://doi.org/10.1371/journal.pcbi.1006240
[7]https://doi.org/10.1109/ISIT.2006.261867
[8] Chechik, G., Globerson, A., Tishby, N., & Weiss, Y. (2005). Information Bottleneck for Gaussian Variables. Journal of Machine Learning Research, 6, 165–188
[9]https://doi.org/10.7554/eLife.74937
[10]https://doi.org/10.1098/rspb.1982.0085
[11]https://doi.org/10.1146/annurev-neuro-080422-111929
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P249: Brain-wide calcium imaging in zebrafish and generative network modelling reveal cell-level functional network properties of seizure susceptibility
Tuesday July 8, 2025 17:00 - 19:00 CEST
P249 Brain-wide calcium imaging in zebrafish and generative network modelling reveal cell-level functional network properties of seizure susceptibility

Wei Qin*1, Jessica Beevis2, Maya Wilde2, Sarah Stednitz1, Josh Arnold2, Itia Favre-Bulle2, Ellen Hoffman3, Ethan K. Scott1 


1 Department of Anatomy and Physiology, University of Melbourne, VIC, Australia
2 Queensland Brain Institute, University of Queensland, QLD, Australia
3 Department of Neuroscience, Yale School of Medicine, Yale University, New Haven, CT, USA


*Email: wei.qin@unimelb.edu.au

Introduction

Epilepsy causes recurrent seizures, but the exact mechanisms are still unclear. Traditional methods using data from primates or rodents struggle to resolve individual cell activity while tracking whole-network dynamics. Capturing the interactions of individual neurons within brain-wide networks could greatly enhance our understanding. Zebrafish, which share genetic and physiological similarities with humans, can exhibit seizure-like behaviors when exposed to drugs like pentylenetetrazol (PTZ), which blocks inhibitory GABAergic signaling and induces hyperexcitability [2]. Zebrafish and calcium imaging enable simultaneous in-vivo recording of neuronal activity across the brain at cellular resolution, offering a valuable approach to studying epilepsy [1].

Methods
In-vivo light-sheet calcium imaging was used to capture brain-wide cellular-resolution calcium fluorescence data from wildtype and scn1lab (a gene implicated in Dravet Syndrome) mutant zebrafish larvae [3]. We conducted this under both baseline and PTZ conditions. Through network analyses, we statistically quantified differences in network topology and dynamics between the two genotypes. We focused on the network of active neuronal cells involved in ictogenesis at microscopic and macroscopic scales. Additionally, we developed a Generative Network Model [4] (GNM, Fig. 1A, Eq. 1) to explain the wiring principles governing both genotypes and the impact of the scn1lab mutation on the brain-wide functional network.
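For orientation, one common GNM formulation (a Betzel-style generative rule; the specific wiring rule and parameters below are assumptions, not necessarily those of the abstract's Eq. 1) grows a network edge by edge with probability proportional to a distance penalty times a topological value term, then scores the result with Kolmogorov-Smirnov (KS) statistics against the data:

    import numpy as np
    from scipy.stats import ks_2samp

    def gnm_grow(D, n_edges, eta=-2.0, gamma=0.3, rng=None):
        """Grow a binary graph: P(edge ij) ~ D[i, j]**eta * (k_i * k_j)**gamma,
        where D holds pairwise distances (> 0 off-diagonal) and k is the
        current degree sequence."""
        rng = rng or np.random.default_rng()
        n = D.shape[0]
        A = np.zeros((n, n))
        iu = np.triu_indices(n, 1)
        for _ in range(n_edges):
            k = A.sum(axis=0) + 1e-6            # degrees (epsilon avoids all-zero P)
            value = np.outer(k, k)
            P = (D[iu] ** eta) * (value[iu] ** gamma)
            P[A[iu] > 0] = 0.0                  # never duplicate an edge
            e = rng.choice(len(P), p=P / P.sum())
            i, j = iu[0][e], iu[1][e]
            A[i, j] = A[j, i] = 1.0
        return A

    # Fit can then be scored, e.g., by the KS distance between degree sequences:
    # ks_2samp(A_model.sum(axis=0), A_data.sum(axis=0)).statistic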
Results
Our study reveals significant changes in brain network connectivity, showing that scn1lab mutations impact brain structure and function. The GNM at the cellular level explains the wiring principles governing the development of both genotypes (Fig. 1B) and the effects of PTZ on the brain-wide network. The model predicts genotypes and seizure severities for each fish before any seizure activity. This novel model also highlights brain regions associated with genotype differences (Fig. 1C, D), seizure severity, and overall network excitability. Combining experimental data and mathematical modeling, our approach offers a novel perspective on epileptogenesis mechanisms at a depth and resolution that traditional studies cannot achieve.
Discussion
Our study shows that scn1lab-/- zebrafish larvae have significant brain morphology changes and increased PTZ-induced seizure susceptibility. Their network architecture mirrors the wiring principles of PTZ-treated networks. Brain-wide, cellular-resolution activity data revealed notable alterations in baseline functional wiring, and PTZ administration affected network properties differently in scn1lab-/- and WT larvae, highlighting divergent neural responses. The GNM pointed to specific brain regions affected in Dravet Syndrome, including the habenula, pallium, and cerebellum, with the habenula influencing seizure initiation and the cerebellum regulating excitatory-inhibitory balance.




Figure 1. A. Generative network modelling (GNM) simulates wiring principles, evaluated by KS similarity. B. The model accurately classifies and predicts genotypes without relying on phenotypes. C. It assesses the contribution of each region to correct classification at each PTZ stage. D. The pallium and habenula are identified as the main contributors to the classification.
Acknowledgements
The authors would like to thank the UQBR aquatics team for maintenance of fish stocks. This project is supported by NHMRC, ARC, Simons Foundation and NIH (US).
References
1. https://doi.org/10.1007/978-94-007-2888-2_40
2. https://doi.org/10.1371/journal.pone.0054166
3. https://doi.org/10.1093/braincomms/fcae135
4. Hills, T. T. (2024). Generative Network Models and Network Evolution. In: Behavioral Network Science: Language, Mind, and Society (pp. 46-60). Cambridge University Press.
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P250: A computational study of the influence of circadian adaptive mechanisms on signal processing in retinal circuits
Tuesday July 8, 2025 17:00 - 19:00 CEST
P250 A computational study of the influence of circadian adaptive mechanisms on signal processing in retinal circuits

Laetitia Raison-Aubry*1, Nange Jin2, Christophe P. Ribelayga2, Laure Buhry1

1Université de Lorraine, CNRS, LORIA, F-54000 Nancy, France
2Vision Sciences, University of Houston, Houston, Texas, United States

*Email: laetitia.raison-aubry@loria.fr

Introduction

Rod-mediated signals reach retinal ganglion cells (RGCs) via three major pathways with distinct sensitivities and operating ranges [1,2,3]. These pathways interact with the cone pathway to ensure seamless processing over >9 log units of light intensity [1]. Gap junctions (GJs) between rod and cone terminals, the entry point of the secondary rod pathway (SRP), exhibit circadian plasticity--stronger at night--directly modulating rod signal flow into cones, and thereby SRP influence on retinal output [4,5]. However, experimentally isolating this effect is challenging due to the non-specificity of pharmacological interventions. Biophysical modeling provides a precise and reversible alternative to selectively manipulate rod/cone coupling while preserving other synaptic conductances. Using a recent mathematical model of a retinal microcircuit [6], we investigate how circadian modulation shapes rod and cone signal integration.
Methods
Our simulated network consists of ~40,000 retinal cells presynaptic to a single transient OFF alpha (tOFF a) RGC [6], arranged on a circular grid approximating the RGC’s receptive field [7] and interconnected with >100,000 synaptic connections, including chemical and electrical synapses. Each retinal cell type is implemented using conductance-based models that follow the Hodgkin-Huxley formalism. Light-induced photocurrent waveforms, whose amplitude and kinetics vary nonlinearly with stimulus intensity [8], serve as input stimuli [6].
Measurements of transjunctional conductance between adjacent mouse rod/cone pairs reveal dynamic changes, ranging over 1000 pS [4,9]. To simulate circadian modulation of rod/cone coupling, we define three states for the GJ channel conductance: uncoupled (0 pS), resting/dark-adapted (300 pS), and maximally coupled (1,200 pS), in line with experimental data [4,9]. Simulations are conducted using Brian 2 [10].
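A minimal Brian 2 fragment showing how the three coupling states can be wired as an electrical synapse between two conductance-based cells; the leak parameters and resting potentials are placeholders, and the actual model uses full rod/cone photocurrent dynamics [6,8].

    from brian2 import *

    # Three circadian coupling states for the rod/cone gap junction, per [4,9]
    GJ_STATES = {'uncoupled': 0 * psiemens,
                 'dark_adapted': 300 * psiemens,
                 'max_coupled': 1200 * psiemens}

    eqs = '''
    dv/dt = (g_leak * (E_leak - v) + I_gap) / C_m : volt
    I_gap : amp
    '''
    cells = NeuronGroup(2, eqs, method='euler',
                        namespace=dict(g_leak=1 * nsiemens, E_leak=-40 * mV,
                                       C_m=20 * pF))
    cells.v = [-55, -40] * mV          # placeholder rod and cone potentials

    gj = Synapses(cells, cells, '''w : siemens
                                   I_gap_post = w * (v_pre - v_post) : amp (summed)''')
    gj.connect(i=[0, 1], j=[1, 0])     # bidirectional electrical coupling
    gj.w = GJ_STATES['dark_adapted']

    run(100 * ms)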
Results
To evaluate the impact of circadian adaptation on retinal signal processing and RGC light responses, we compare normalized intensity-response profiles of the tOFF a RGC across rod/cone coupling states. Stimulus intensity spans the activation threshold of the primary (0.01 R*/rod/s) to the tertiary (60 R*/rod/s) rod pathways [3]. We find that, relative to the SRP resting dark-adapted range (1-60 R*/rod/s) [3], inhibiting rod/cone coupling lowers the sensitivity threshold by ~0.5 log unit, while increasing coupling shifts the tOFF a RGC activation threshold ~1 log unit to the right.
Discussion
Our results support a circadian shift in the threshold and relative contribution of the SRP to the retinal output. This computational approach circumvents experimental limitations, allowing precise investigation of rod/cone coupling modulation. By clarifying mechanistic links between circadian modulation and retinal sensitivity, we demonstrate that our model can be used as a theoretical framework to reconcile previous experimental inconsistencies [5].



Acknowledgements
Research in the Ribelayga lab is supported by National Institutes of Health Grants EY032508, EY029408, and MH127343, National Institutes of Health Vision Core Grant P30EY007551, and The Foundation for Education and Research in Vision (FERV).
References
1. https://doi.org/10.1016/S1350-9462(00)00031-8
2. https://doi.org/10.1146/annurev.physiol.67.031103.151256
3. https://doi.org/10.1126/sciadv.aba7232
4. https://doi.org/10.1126/sciadv.abm4491
5. https://doi.org/10.1016/j.preteyeres.2022.101119
6. https://doi.org/10.1109/NER52421.2023.10123863
7. https://doi.org/10.1371/journal.pone.0180091
8. https://doi.org/10.1113/jphysiol.2014.284919
9. https://doi.org/10.7554/eLife.73039
10. https://doi.org/10.7554/eLife.47314


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P251: Second-order neural field models capture the power spectrum and other nonlinear features of EEG signals in an interval timing task
Tuesday July 8, 2025 17:00 - 19:00 CEST
P251 Second-order neural field models capture the power spectrum and other nonlinear features of EEG signals in an interval timing task

Ian D. Ramsey1 and Rodica Curtu1,2,*

1Department of Mathematics, University of Iowa, Iowa City, USA
2Iowa Neuroscience Institute, University of Iowa, Iowa City, USA


*Email: rodica-curtu@uiowa.edu


Introduction
Neural field models offer a powerful framework for understanding large-scale neuronal dynamics by encoding the underlying spatiotemporal processes as a system of integrodifferential equations. While early approaches modeled mean membrane potentials with a single quantity, modern methods [1] distinguish between postsynaptic and somatic potentials to provide a more nuanced description of synaptic interactions and their temporal dynamics. For this work, we consider the second-order neural field model (2ndNFM) introduced by Liley et al. [2] and investigate how model parameters, governing both local activity and long-range connections, affect the theta-band and alpha-band power of multi-lead EEG signals as reported by [3].

Methods
We propose a novel method for parameter estimation, utilizing recent developments in the characterization of nonlinear stochastic oscillators [4]. We implement the method to study ~4 Hz rhythms (2-5 Hz band) of EEG recordings that were found to correlate with cognition in Parkinson’s disease (PD) [3]. We extract relevant features (e.g., the Q-function; see [4]) from the EEG data of PD patients and of healthy subjects performing an interval timing task [3], according to the algorithm proposed by [5]. We analyze these nonlinear dynamical features for significant differences between the groups, then perform parameter estimation and extended Kalman filter analysis in the 2ndNFM to obtain a model that captures their characteristics.
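The full Q-function and extended-Kalman-filter machinery of [4,5] is beyond a short sketch, but the basic spectral feature involved, band power in the 2-5 Hz range, reduces to an integrated Welch periodogram (a minimal SciPy sketch):

    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs, lo=2.0, hi=5.0):
        """Integrated Welch PSD of one EEG channel over [lo, hi] Hz
        (defaults match the 2-5 Hz band analyzed here)."""
        f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))
        band = (f >= lo) & (f <= hi)
        return np.trapz(pxx[band], f[band])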

Results
We extended the results in [3] by analyzing the EEG signals recorded at the central leads C1 to C6. We found relevant changes in the 2-5 Hz frequency band activity for control and PD groups, like previous reports at Cz. Next, we parametrized the 2ndNFM to capture the attenuated 2-5 Hz rhythms seen in PD patients, focusing on the functional coupling between a pair of leads placed on the left (C3) and right (C4) brain hemispheres. We projected the dynamics of each 10-dimensional system of differential equations per EEG channel in the 2ndNFM onto a single variable via Q-function analysis [4, 5]. These projections were used for the model parameter estimation. The resulting 2ndNFM accurately fitted the power spectrum of the EEG signals at C3 and C4.
Discussion
To test the validity of 2ndNFMs for EEGs in an interval timing task, we performed parameter estimation using recordings at two central leads, C3 and C4. We also measured the performance of other methods [6] that assume linearization of 2ndNFMs and found that they fail to accurately fit the power spectrum of EEG signals due to nonlinear distortions. From our Kalman filter analysis, we detected anomalies in the subcortical and long-range inputs to the linear model that are inconsistent with previous assumptions of statistical independence. The nonlinear 2ndNFM parameterized on data-driven features guarantees an accurate fit for the power spectrum of EEG signals and could generate theoretical predictions.




Acknowledgements
This work was funded by The Stanley-UI Foundation Support Organization (R.Curtu) and the Erwin and Peggy Kleinfeld Endowment (I.Ramsey).
References
1. Cook, B. et al. (2022). Neural field models: a mathematical overview and unifying framework. Math. Neuro. and Appl., 2(2):1-67.
2. Liley, D. et al. (2002). A spatially continuous mean field theory of electrocortical activity. Network: Computation in Neural Syst., 13:67-113.
3. Singh, A. et al. (2021) https://doi.org/10.1038/s41531-021-00158-x
4. Perez-Cervera, A. et al. (2023) https://doi.org/10.1073/pnas.2303222120
5. Melland, P., & Curtu, R. (2023) https://doi.org/10.1523/JNEUROSCI.1531-22.2023
6. Hartoyo, A., et al. (2019) https://doi.org/10.1371/journal.pcbi.1006694
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P252: Developments around MOOSE for modeling in Systems Biology and Neuroscience
Tuesday July 8, 2025 17:00 - 19:00 CEST
P252 Developments around MOOSE for modeling in Systems Biology and Neuroscience

Subhasis Ray*1,2, G.V. Harsharani3,4, Anal Kumar3, Ashish Anshuman3, Parita Mehta3, Jayesh Poojari3, Deepa SM3, and Upinder S. Bhalla3,4


1CHINTA, TCG CREST, Kolkata, India
2IAI, TCG CREST, Kolkata, India
3NCBS-TIFR, Bangalore, India
4Centre for Brain and Mind, NCBS-TIFR, Bangalore, India
*Email: ray.subhasis@gmail.com


Introduction

Public databases for neuroscience, including those for connectomes, cell morphologies, and electrophysiological recordings, are accelerating data-driven neuroscience. Tools supporting such databases and standard formats for model and data exchange are critical for maximizing the utility of these resources. MOOSE, the Multiscale Object Oriented Simulation Environment [1], is a stable software platform for computational modeling and simulation in Systems Biology and Neuroscience. It emphasizes models that span molecular and electrical signaling from synapses to networks. As MOOSE development emphasized standards and interoperability early on, it is well placed to facilitate the development of biological neural models utilising public model and data repositories.
Methods
The core of MOOSE is written in C++ for speed, while its Python API allows integration with the Python ecosystem. Extensive documentation is supplemented with a wide range of tutorials using Python graphics and browser-based 3-D graphics. We use existing Python modules for various model and data description formats to support them in MOOSE, and web frameworks to utilize public APIs of the neuroscience databases. We actively conduct outreach activities and user-research to enhance the user experience and documentation of MOOSE, and workshops for training students and researchers on modeling and simulation in Systems Biology and Computational Neuroscience.
Results
MOOSE covers multiple scales of modeling, from chemical reactions and signaling pathways to large biological neural networks. Currently it supports standard formats like SBML and NeuroML for model description, SWC for morphology, and NSDF for simulated data. It includes Python tools to easily create multiscale models from a library of model components. We are also developing clients for accessing public repositories of model and data, enabling users to seamlessly integrate model components from such sources into their composite models.
Discussion
A major goal of MOOSE is to make biological modeling accessible to students and researchers from diverse backgrounds. Users can seamlessly incorporate published and curated models in their simulation experiments using software tools developed around MOOSE. The new developments in the MOOSE ecosystem will help accelerate data-driven research in Systems Biology and Neuroscience.



Acknowledgements
We thank the Kavli Foundation, and DBT and DST of the Govt. of India, for supporting MOOSE development. NCBS/TIFR receives support from the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4006.


References
1. https://doi.org/10.3389/neuro.11.006.2008


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P253: Higher-Threshold Neurons Boost Information Encoding in Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P253 Higher-Threshold Neurons Boost Information Encoding in Spiking Neural Networks

Farhad Razi*1, Fleur Zeldenrust1

1Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

*Email: farhad.razi@donders.ru.nl


Introduction
The brain exhibits remarkable neural heterogeneity. Studies suggest this boosts performance on sequential tasks [1], efficient coding [2], and working memory [3] in artificial neural networks. Specifically, heterogeneity in spike thresholds has been shown to improve information encoding by reducing trial-to-trial variability in network responses [4]. However, the mechanisms behind this reduced variability remain unclear. We propose that spike threshold heterogeneity introduces variability in neuronal firing sensitivity, with higher-threshold neurons significantly contributing to enhanced information encoding and reduced variability. Our findings advance understanding of the brain's computational capacities.
Methods
A recurrent spiking network with leaky integrate-and-fire neurons was used (Fig. 1A). Heterogeneity was introduced by varying the width of uniform spike-threshold distributions. The distribution of firing rates was assessed. The dimensionality of network activity was quantified using the participation ratio. An input was applied to a subset of the network. A linear decoder was trained to decode the input from spiking responses of the stimulated subset and the whole network. Information encoding was quantified by comparing root mean square error (RMSE) between the decoded and original input. The decoder, trained on the original input, was used to decode a novel input from network responses, evaluating its generalization to unfamiliar inputs.
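A minimal sketch of two ingredients of this design, with all distribution and regularization parameters assumed: drawing spike thresholds from a uniform distribution of controllable width, and a ridge-regularized linear readout for scoring decoding performance.

    import numpy as np

    def heterogeneous_thresholds(n, mean_mv=-50.0, width_mv=10.0, rng=None):
        """Uniform spike-threshold distribution whose width sets heterogeneity."""
        rng = rng or np.random.default_rng()
        return rng.uniform(mean_mv - width_mv / 2, mean_mv + width_mv / 2, n)

    def ridge_decoder(rates, target, lam=1e-3):
        """Fit a linear readout on network responses (time x neurons) and
        return its weights plus the RMSE against the original input."""
        R = np.c_[rates, np.ones(len(rates))]   # append a bias column
        w = np.linalg.solve(R.T @ R + lam * np.eye(R.shape[1]), R.T @ target)
        rmse = np.sqrt(np.mean((R @ w - target) ** 2))
        return w, rmse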
Results
Increasing spike threshold heterogeneity enhances network information encoding. Heterogeneity increases firing rate variability and participation ratio, indicating a higher dimensionality of network activity (Fig. 1B), consistent with previous studies [4]. Decoding performance improves with heterogeneity, particularly when using the whole network (Fig. 1C). This enhanced decoding performance is largely carried by neurons that have higher spike thresholds (Fig. 1D). Notably, decoders trained on heterogeneous networks show superior generalization performance on a novel input (Fig. 1E). These results support our hypothesis that heterogeneity yields more robust network-wide information encoding capacities via higher-threshold neurons.
Discussion
Our results highlight heterogeneity's crucial role in the brain's capacity for information encoding. However, heterogeneity may not always be beneficial. Improved encoding could potentially consume neural resources, possibly hindering certain task performances. Future work will investigate how heterogeneity impacts networks trained for specific prediction and decoding tasks to reveal the trade-off between information encoding and processing, identifying task-dependent optimal ranges for neural heterogeneity. Our findings offer insights into brain function and can guide the development of efficient, task-adaptive neuromorphic systems, potentially bridging the gap between biological and artificial neural networks.





Figure 1. A, Computational experimental design. B, Network characteristics and heterogeneity. C, RMSE between decoded and original input decreases with heterogeneity. D, Neurons with higher spike thresholds possess larger decoder weights, indicating their heightened role in encoding. E, Decoder generalization on a novel input improves with increasing heterogeneity.
Acknowledgements
This work was supported by Dutch Research Council, NWO Vidi grant VI.Vidi.213. 137.
References
[1]https://doi.org/10.1038/s41467-021-26022-3
[2]https://doi.org/10.1371/journal.pcbi.1008673
[3]https://doi.org/10.1523/JNEUROSCI.1641-13.2013
[4]https://doi.org/10.1073/pnas.2311885121
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P254: Synaptic Population Shapes Form Fingerprints of Brain Areas, Organize Along a Rostro-Caudal Axis
Tuesday July 8, 2025 17:00 - 19:00 CEST
P254 Synaptic Population Shapes Form Fingerprints of Brain Areas, Organize Along a Rostro-Caudal Axis

Martin Rehn*1, Erik Fransén1,2,3

1School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.
2Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden.
3Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden.


*Email: rehn@kth.se
Introduction
Synaptic creation, modification, and removal underpin learning in the brain, balanced against homeostasis. The resultant distributions of synaptic sizes may reflect these processes.
We suggest that the shapes of distributions have biological relevance. This is suggested both empirically, by the prevalence of skewed distributions [1,2], and functionally, as large synapses may have particular importance [3]. We find a low-dimensional descriptor for such shapes. We proceed to explore and contrast brain regions at various spatial scales, and across the lifespan, using our proposed descriptor.
Methods
We studied a measure of PSD95, a key postsynaptic protein [4–6] in parasagittal sections of mouse brains [7]. PSD95 correlates with EPSP amplitudes [8–10], spine volumes and synaptic face areas [11]. In contrast to previous work [7,12] we chose a scalar measure per synapse and considered synaptic populations.
We analyzed multiple anatomical levels, in ages ranging from one week to 18 months. Per 100 μm tiles, we computed a profile of the synaptic size distribution comprised of the arithmetic mean, normalized width, robust skewness, robust kurtosis, and synaptic density. Then we applied clustering methods and built a bi-linear model to compactly model variability.
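A sketch of the per-tile profile, with quantile-based stand-ins for the robust moments (the exact estimators used in the study are not specified here and are an assumption):

    import numpy as np

    def tile_profile(sizes, tile_area_um2=100.0 ** 2):
        """Five-component profile of one tile's synaptic size distribution:
        mean, normalized width, robust skewness, robust kurtosis, density."""
        q10, q25, q50, q75, q90 = np.percentile(sizes, [10, 25, 50, 75, 90])
        mean = sizes.mean()
        width = (q75 - q25) / q50                    # IQR normalized by median
        skew = (q90 + q10 - 2 * q50) / (q90 - q10)   # Kelley-type robust skewness
        kurt = (q90 - q10) / (q75 - q25)             # tail-to-core spread ratio
        density = len(sizes) / tile_area_um2
        return np.array([mean, width, skew, kurt, density])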
Results
Fig. 1 shows three parts of the profile descriptor. All five components differ between brain areas, and also by age. The upper tails of the distributions vary from relatively heavy-tailed (HT) regions, also more skewed, to less heavy-tailed (LT) ones. This is amplified in older animals. Regions in the hindbrain and midbrain tend to the HT-type; forebrain regions, in particular the cortex, and the hippocampus, to the LT-type. Mean intensity and spatial density follow the opposite trend. We thus found that the profiles seem to principally trace the anterior-posterior neuroaxis; our bi-linear and clustering models concur. The structure also parallels gene expression data [13].
Discussion
We propose to analyze local brain regions using a fingerprint, a “distronomical signature”, based solely on the collective properties of synaptic distributions. This correlates with known anatomy and gene expressions, but exhibits striking differences in local heterogeneity (Fig. 1), and a rather dramatic evolution over the lifespan. We argue that this reflects underlying processes central to brain function, and that it may serve as a novel tool to characterize regular and perhaps anomalous structure in the brain.
Figure 1. Global distributional structure. False color representation of three statistical moments, in a three month old individual. Tile size 25 µm x 25 µm. The tiles are color coded by arithmetic mean (red), normalized width (green) and robust kurtosis (blue), clipped at the 5th and 95th percentiles. Anatomical regions can be readily identified.
Acknowledgements
The Swedish Research Council grant no. 2022-01079.
References
1. doi:10.1371/journal.pbio.0030068
2. doi:10.1038/nrn3687
3. doi:10.1016/j.celrep.2022.111383
4. doi:10.1038/24790
5. doi:10.1523/JNEUROSCI.4457-06.2007
6. doi:10.1113/jphysiol.2008.163469
7. doi:10.1126/science.aba3163
8. doi:10.1016/S0092-8674(02)00683-9
9. doi:10.1073/pnas.0608492103
10. doi:10.1016/j.celrep.2021.109972
11. doi:10.1038/s41598-020-70859-5
12. doi:10.1016/j.neuron.2018.07.007
13. doi:10.1126/sciadv.abb3446
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P255: Quantifying Negative Feedback Inhibition in Epilepsy to Assess Excitability
Tuesday July 8, 2025 17:00 - 19:00 CEST
P255 Quantifying Negative Feedback Inhibition in Epilepsy to Assess Excitability

Thomas J Richner1, Nicholas Gregg1, Raunak Singh1, Keith Starnes1, Dora Hermes2, Jamie J Van Gompel3, Gregory A Worrell1, Brian N Lundstrom*1

1. Department of Neurology, Mayo Clinic, Rochester, MN, USA
2. Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
3. Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, USA

*Email: lundstrom.brian@mayo.edu

Introduction
Normal cortical function depends on precisely regulated excitability, which is controlled by a balance of excitation and negative feedback inhibition. Negative feedback inhibition mechanisms, such as spike frequency adaptation (SFA) and short-term synaptic depression (STD), act over multiple timescales to reduce excitability. However, negative feedback inhibition is often difficult to quantify and neglected in neuroscience experiments. We’ve developed a framework for quantifying multiple timescale negative feedback inhibition and are applying it to epilepsy patients undergoing invasive EEG epilepsy monitoring. We also modeled negative feedback inhibition to understand how SFA and STD affect EEG signals.

Methods
Novel electrical stimulation waveforms were delivered to epilepsy patients undergoing stereotactic EEG monitoring. Sinusoidally modulated pulse trains were delivered to cortical sites, varying the envelope period between 2 and 10 seconds (5 Hz carrier frequency). Cortico-cortical evoked potentials (CCEPs) were recorded from nearby electrodes. Negative feedback inhibition was assessed by analyzing the phase difference between the stimulus and the CCEP responses, analogous to our previous research with single units (1). We created a network model with SFA and STD by extending previous modeling (2,3). We investigated the interaction between SFA and STD using spectral analysis and their stabilizing properties by computing the largest Lyapunov exponent over a range of connectivities.
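The phase measurement described above reduces to estimating the response phase at the envelope frequency; a minimal quadrature-fit sketch (the envelope is assumed sinusoidal with zero phase at t = 0, which may differ from the actual analysis):

    import numpy as np

    def envelope_phase_deg(response, t, period_s):
        """Phase of a response relative to a sin(2*pi*t/period) stimulus envelope,
        via projection onto in-phase and quadrature components; positive values
        indicate a phase lead, consistent with adaptation."""
        w = 2 * np.pi / period_s
        quad = 2 * np.mean(response * np.cos(w * t))   # quadrature component
        inph = 2 * np.mean(response * np.sin(w * t))   # in-phase component
        return np.degrees(np.arctan2(quad, inph))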

Results
Across participants, the cortical evoked response showed phase advances of approximately 5–30 degrees across modulation frequencies, consistent with adaptation on multiple timescales. These phase leads appear to be more pronounced in the clinically identified seizure onset zone, suggesting that compensatory negative feedback inhibition is upregulated. A phase lead at a particular frequency is consistent with adaptation (or dampening) at that timescale (2). Our network models showed a nonlinear interaction between SFA and STD, similar to other models (3), which may help maintain a homeostatic level of activity. Further, we found SFA and STD stabilized a wide range of networks onto the edge of chaos.

Discussion

Results suggest that neural mechanisms of feedback inhibition may be assessed at the level of EEG using stimulation-based methods, like sine-modulated CCEPs, or passive methods, such as comparing changes in spectrograms. We find evidence of multiple timescale adaptation at the level of CCEPs, which may be one way the brain maintains stability. Our computational model suggests that SFA and STD can dynamically rebalance a wide range of networks and that these kinds of mechanisms may leave telltale signatures on spectrograms.



Acknowledgements
Work was supported by NINDS R01NS129622.
References
1. Lundstrom, B. N., Higgs, M. H., Spain, W. J., Fairhall, A. L. (2008). Fractional differentiation by neocortical pyramidal neurons. Nat Neurosci. https://doi.org/10.1038/nn.2212
2. Lundstrom, B. N. (2015). Modeling multiple time scale firing rate adaptation in a neural network of local field potentials. Journal of Comp Neurosci. https://doi.org/10.1007/s10827-014-0536-2
3. Lundstrom, B. N., Richner, T. J. (2023). Neural adaptation and fractional dynamics as a window to underlying neural excitability. PLOS Comp Bio. https://doi.org/10.1371/journal.pcbi.1010527

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P256: Structural and evolutionary insights of the neuropeptide connectome of Caenorhabditis species
Tuesday July 8, 2025 17:00 - 19:00 CEST
P256 Structural and evolutionary insights of the neuropeptide connectome of Caenorhabditis species

Lidia Ripoll-Sánchez*1,2, Itai A. Toker4, Oliver Hobert4, Isabel Beets3, Petra E. Vértes1,2,5, William R. Schafer1,3,5

1MRC Laboratory of Molecular Biology, Cambridge, UK
2Department of Psychiatry, Cambridge University, Cambridge, UK
3Department of Biology, KU Leuven, Leuven, Belgium
4Department of Biological Sciences, Howard Hughes Medical Institute, Columbia University, New York, NY, USA
5co-senior authors

*Email: lsanchez@mrc-lmb.cam.ac.uk


Introduction

Neuropeptides modulate synaptically wired neuronal circuits. This modulation is critical to nervous system function, yet little is known about the structure and function of extrasynaptic signalling networks at a whole-organism level, or how they are maintained over evolution.


Methods

To this end, we used single neuron gene expression [1] and deorphanisation data for neuropeptide-activated G-protein coupled receptors [2] to generate a connectome of 92 neuropeptide signalling networks in C. elegans [3]. This network defined a connection when the sending neuron expressed a neuropeptide, the receiving neuron expressed the cognate receptor, and both neurons extended overlapping processes. We then used graph theory and machine learning methods to characterise its structural features.
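The connection rule just stated maps directly onto boolean matrix algebra; a minimal sketch with assumed array layouts (neurons x peptides, neurons x receptors, and an n x n process-overlap mask):

    import numpy as np

    def peptide_connectome(expr_np, expr_rec, overlap, cognate_pairs):
        """Directed adjacency: sender expresses a neuropeptide, receiver expresses
        its cognate receptor, and the two neurons' processes overlap.
        expr_np: neurons x peptides (bool); expr_rec: neurons x receptors (bool);
        overlap: neurons x neurons (bool); cognate_pairs: (peptide, receptor) ids."""
        n = expr_np.shape[0]
        A = np.zeros((n, n), dtype=bool)
        for p, r in cognate_pairs:
            A |= np.outer(expr_np[:, p], expr_rec[:, r])
        return A & overlap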




Results

Our analysis of the connectivity pattern revealed a mesoscale structure in the core of the network, splitting it into three groups of neurons that act as highly controlled functional hubs. Notably, inside these hubs we identified a group of neurons that seem to be morphologically and biochemically adapted for neuropeptidergic communication. Furthermore, the co-expression pattern identified autocrine neuropeptidergic connections that may modulate locomotion control, and evolutionarily conserved intracellular neuropeptide signalling networks that could act as homeostatic regulators of the neuropeptidergic network. This network has a higher connection density than the synaptic and gap junction networks, connecting non-synaptically connected neurons [3].


Discussion

These findings challenge the idea that neuronal communication is primarily synaptic, revealing a dense, decentralised neuropeptide network with functional and structural roles. Additionally, conserved signalling patterns across Caenorhabditis species highlight the evolutionary significance of neuropeptide connectivity [4]. We expect that these newly mapped neuropeptide connectomes, their analysis, and the interactive website we developed to explore them (nemamod.org) will serve as a prototype for other animals and provide new insight into the structure of neuromodulatory networks in larger brains.





Acknowledgements
This work was funded by the Howard Hughes Medical Institute and the NIH grants RO1 NS039996 & NIH RO1 NS100547 (to OH); the Medical Research Council grant MC-A023-5PB91 (to WRS); a Medical Research Council PhD fellowship (to LRS); the MQ Transforming Mental Health grant MGF17_24 (to PEV); and a postdoctoral fellowship from the Evelyn Gruss Lipper charitable foundation (to IAT).
References
1.https://doi.org/10.1016/j.cell.2021.06.023
2.https://doi.org/10.1016/j.celrep.2023.113058
3.https://doi.org/10.1016/j.neuron.2023.09.043

4.https://doi.org/10.1101/2024.11.23.624988
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P257: Basket cell computational modeling predicts signal filtering of Purkinje cell responses
Tuesday July 8, 2025 17:00 - 19:00 CEST
P257 Basket cell computational modeling predicts signal filtering of Purkinje cell responses

Martina F. Rizza*1, Stefano Masoli1, Teresa Soda1, Francesca Prestori1, Egidio D’Angelo1,2

1Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
2Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: martinafrancesca.rizza@unipv.it


Introduction
Cerebellar basket cells (BC), located in the bottom 1/3 of the molecular layer (ML), play an important role in controlling the activity of Purkinje cells (PC) via inhibitory synaptic transmission. BCs receive excitatory synaptic inputs from parallel fibers (pf) and transmit inhibitory synaptic inputs to BCs and PCs. We reconstructed a multi-compartmental biophysically realistic BC model in Python-NEURON to investigate BC intrinsic and synaptic electrophysiological properties and their impact on PC model responses [1,2,3]. A stellate cell (SC) model [4] was included to reconstruct a ML microcircuit. Simulations predicted that BC and SC operate in tandem, setting the frequency band of PC transmission through the regulation of the PC frequency/response curve.


Methods
Starting from morphological reconstructions taken from cerebellar tissue and patch-clamp recordings, we implemented conductance-based multi-compartmental models of BC with Python 3 and NEURON 8.2 [5]. The model's maximum ionic conductances were tuned to match the firing pattern revealed by whole-cell patch-clamp recordings from mouse cerebellar slices. Mouse SC [4] and BC models were connected with a multi-compartmental mouse PC model [1,2,3] to test their impact when stimulated by excitatory synaptic inputs. Simulations were performed on a 64-core AMD Threadripper 7980X using a fixed time step (0.025 ms) and a temperature of 32°C.
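For orientation, a single-compartment NEURON (Python) stand-in for this simulation setup: fixed 0.025 ms time step and 32°C, as above, but with placeholder geometry and the built-in 'hh' mechanism instead of the BC model's tuned conductances.

    from neuron import h
    h.load_file('stdrun.hoc')

    soma = h.Section(name='soma')
    soma.L = soma.diam = 20            # um; placeholder geometry
    soma.insert('hh')                  # placeholder channels, not the tuned BC set

    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 100, 500, 0.01   # ms, ms, nA

    v = h.Vector().record(soma(0.5)._ref_v)
    t = h.Vector().record(h._ref_t)

    h.celsius = 32                     # temperature used in the simulations
    h.dt = 0.025                       # fixed time step (ms), as in the abstract
    h.finitialize(-65)
    h.continuerun(800)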

Results
Simulations reproduced whole-cell patch-clamp experimental results, showing autorhythmic activity, an almost linear I/O relationship for positive current injections, pauses generated after positive current injections, sag after negative current injections, and AMPA and NMDA receptor-mediated excitatory postsynaptic responses following pf inputs. SC and BC simulations showed the filtering properties on PC activity, highlighting that BCs modulate low-frequency PC discharges through somatic GABAergic synapses, while SCs act on high-frequency responses through dendritic GABAergic synapses.

Discussion
BC modeling reproduced the cellular intrinsic excitability and the synaptic activity, investigating the frequency-dependent short-term dynamics at pf-BC synapses and the frequency dependence of BC input–output gain functions. Simulations predicted BC and SC filtering of PC responses, showing that the intensity and bandwidth of ML filtering are modulated by the number of active synapses between pfs-SCs-PCs and pfs-BCs-PCs. SCs and BCs emerge as critical elements controlling cerebellar processing in the time and frequency domains. Tuning of transmission bandwidth and delay through specific membrane and synaptic mechanisms helps explain the role of SCs and BCs in motor learning and control.






Acknowledgements
This project/research received funding from the European Union’s Horizon Europe Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project) and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Framework Partnership Agreement No. 650003 (HBP FPA).
References
1.https://doi.org/10.3389/fncel.2015.00047
2.https://doi.org/10.1038/s42003-023-05689-y
3.https://doi.org/10.3389/fncel.2017.00278
4.https://doi.org/10.1038/s41598-021-83209-w
5. https://doi.org/10.3389/neuro.11.001.2009




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P258: Reconstruction and simulation of the mouse cerebellar declive: an atlas-based approach
Tuesday July 8, 2025 17:00 - 19:00 CEST
P258 Reconstruction and simulation of the mouse cerebellar declive: an atlas-based approach

Dimitri Rodarie1*, Dianela Osorio1, Egidio D’Angelo1,2, Claudia Casellato1



1Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
2Brain Connectivity Center IRCCS Mondino Foundation, Pavia, Italy
* Email: dimitri.rodarie@unipv.it


Introduction

We aim to reconstruct and simulate atlas-mapped mouse cerebellar regions, capturing the relationship between structure, dynamics, and function. Numerous experiments on rodents and humans show that the declive region (lobule VI) plays a relevant role in many functions, including motor, cognitive, emotional, and social tasks [1].
We present here a pipeline to reconstruct the mouse declive, based on the Blue Brain Cell Atlas (BBCA) model [2] and the Brain Scaffold Builder (BSB) tool [3]. With this pipeline, we could estimate the specific densities of each cell type. With the BSB we placed, oriented, and connected the neurons. The output of this pipeline is a circuit that can be simulated and validated against experimental findings.

Methods
We built a 3D model of the mouse declive (Fig. 1), based on the BBCA pipeline (Fig. 1DE), which we extended with the Purkinje layer at the boundary between granular and molecular layers (Fig. 1A). We placed cells based on the atlas and regional densities [4,5] and proposed a new strategy to place Purkinje layer cells based on linear density [6] (Fig. 1F). To connect the cells, we computed the orientations and depth [2] of each morphology (Fig. 1BC). These fields are used to bend the cells’ neurites following the declive curvature (Fig. 1G). We applied voxel intersection on these bent cells with synaptic in- and out-degree ratios [3]. Finally, we assigned point-neuron electrical parameters to each cell and connection [7].
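
As an illustration of the linear-density placement strategy for the Purkinje layer, here is a minimal NumPy sketch (not BSB code; the surface parametrization and density value are hypothetical):

import numpy as np

def place_purkinje(arc_points, linear_density, seed=0):
    """Place somata along a 1D parametrization of the Purkinje-layer surface.

    arc_points: (N, 3) polyline sampling the granular/molecular boundary.
    linear_density: somata per unit arc length (e.g., cells/um).
    """
    rng = np.random.default_rng(seed)
    seglen = np.linalg.norm(np.diff(arc_points, axis=0), axis=1)
    arc = np.r_[0.0, np.cumsum(seglen)]           # cumulative arc length
    n_cells = int(round(linear_density * arc[-1]))
    # uniform positions in arc length, then interpolate back to 3D
    targets = np.sort(rng.uniform(0.0, arc[-1], n_cells))
    return np.column_stack([np.interp(targets, arc, arc_points[:, k])
                            for k in range(3)])
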
Results
We combined the workflows of the BBCA and the BSB into a single pipeline. This includes tools to align experimental data to an atlas and to reconstruct and simulate cerebellar circuits. This allowed us to produce the most detailed model of the mouse declive to date.
We obtained new densities for each cell type of the cerebellum. Our model shows cell-composition differences between cerebellar regions. We also estimated the impact of the declive shape on its local connectivity by comparing different sub-parts of the region with a cubic canonical circuit.
Finally, we simulated our circuit using the BSB interfacing with the NEST simulator [8] in resting state and created a paradigm to reproduce fear conditioning experiments on mice.
Discussion
By combining the two pipelines to reconstruct our circuit, we are now able to leverage atlas data to estimate the spatial cellular composition in the cerebellum. The atlas registration will also facilitate the embedding of our model into larger brain circuits [9].
We also found that the cerebellum's highly parceled layers, its curved shape, and its position within the mouse atlas make our model very sensitive to artifacts in the data (Fig. 1DE). The model will be refined as more data become available.
We plan to reconstruct different subregions of the cerebellar cortex to compare their structure and function. Our future work will also involve mapping the different types of Purkinje neurons based on the “zebrin stripes” [10].



Figure 1. Fig 1: Reconstruction pipeline. A. Annotations shown in colors over the Nissl volume. B. Orientation field showing the local axons’ main axis. Colors represent the vectors’ norm. C. Distance to the outside border, following the orientation field. D. E. Neuron and inhibitory neuron density. F. Neuron positions displayed over annotations. G. Scaled and bent Purkinje morphologies over annotations.
Acknowledgements
Funding:

European Union's Horizon 2020 research and innovation program - Marie Sklodowska-Curie - grant 956414 Cerebellum and Emotional Networks

Virtual Brain Twin Project - European Union's Research and Innovation Program Horizon Europe - grant 101137289

National Centre for HPC, Big Data and Quantum Computing - CN00000013 PNRR MUR – M4C2 – Fund 1.4 - decree n. 3138, 16 December 2021




References
1.https://doi.org/10.3389/fnsys.2023.1185752
2.https://doi.org/10.1371/journal.pcbi.1010739
3.https://doi.org/10.1038/s42003-022-04213-y
4.https://doi.org/10.1007/s00429-013-0531-9
5.https://doi.org/10.1523/JNEUROSCI.20-05-01837.2000
6.https://doi.org/10.1038/s41593-022-01057-x
7.https://doi.org/10.3389/fncom.2019.00068
8.https://doi.org/10.4249/scholarpedia.1430
9. https://doi.org/10.1523/ENEURO.0111-17.2017
10. https://doi.org/10.1038/nrn269


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P259: Local multi-gridding for detailed morphology, spines and synapses
Tuesday July 8, 2025 17:00 - 19:00 CEST
P259 Local multi-gridding for detailed morphology, spines and synapses

Cecilia Romaro*1, William W. Lytton2,3,4,5, Robert A. McDougal1,6,7,8

1Department of Biostatistics, Yale School of Public Health, New Haven, CT, United States
2Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, New York, United States
3Department of Neurology, SUNY Downstate Health Sciences University, Brooklyn, New York, United States
4Department of Neurology, Kings County Hospital Center, Brooklyn, New York, United States
5The Robert F. Furchgott Center for Neural and Behavioral Science, Brooklyn, New York, United States
6Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, CT, United States
7Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT, United States
8Wu Tsai Institute, Yale University, New Haven, CT, United States

*Email: cecilia.romaro@yale.edu

Introduction

The NEURON simulator (https://nrn.readthedocs.io) is one of the most widely used tools for simulating biophysically detailed neurons and networks [1]. In addition to electrophysiology simulations, NEURON has long supported multi-scale models incorporating intra- and extracellular chemical reaction-diffusion [2], in both 1D and 3D. Accurately simulating whole cells in 3D requires capturing large regions like somas and small regions like spines. We demonstrate an algorithm in NEURON for achieving high-quality results at reasonable computational cost through local multi-gridding.


Methods
We extended NEURON's reaction-diffusion Region specification to support per-Section grid size specification. Sections with different grid sizes are independently discretized using NEURON's standard voxelization algorithm [3]. Small voxels are removed and/or added to produce a join with minimal voxel overlap. Neighboring voxels of different sizes are connected to allow molecules to diffuse between the grids. For ease of use, the model specification is in Python; for performance, coupling between grids and all simulation is done in C++.
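
For context, in released NEURON a single voxel size is set per reaction-diffusion region; the following sketch shows that baseline usage, while the per-Section grid sizes described above are this work's extension (the finer-grid Region call in the comment is hypothetical):

from neuron import h, rxd

h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
spine = h.Section(name="spine")
spine.connect(soma)

# Baseline usage: one 3D voxel size for the whole region
rxd.set_solve_type([soma, spine], dimension=3)
cyt = rxd.Region([soma, spine], nrn_region="i", dx=0.25)  # 0.25 um voxels
ca = rxd.Species(cyt, name="ca", charge=2, d=0.6)         # diffusible Ca2+

# The extension sketched in this abstract would allow per-Section grids,
# e.g. something like rxd.Region([spine], dx=0.05) joined to the coarse grid.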


Results
Multigrid-voxelization overhead due to the editing and alignment of the grids is small but measurable. Mass is conserved when diffusing across the grid-size boundary; however, subtle differences may arise in numerical results due to voxel-size-dependent estimates of volume and surface area, and we discuss the implications for assessing convergence. Accuracy and performance are assessed for both simplified morphologies and detailed cell morphologies from NeuroMorpho.Org. Initialization and simulation are necessarily slower than for the coarse grid alone (but faster than for the finest grid); the time cost and accuracy improvements are highly dependent on the problem.

Discussion
Using multiple grid sizes for 3D reaction-diffusion simulation allows increased accuracy in small parts of the morphology or in regions of interest with moderate compute overhead. This approach preserves the regular sampling and easy convergence testing of NEURON's finite-volume integration. This numerical simulation method pairs naturally with ongoing work to import high-resolution neuron spine morphologies into NEURON models, with the spine and the dendrites simulated using different grids. Carefully chosen grid sizes have the potential to enable high-fidelity simulations combining chemical, electrical, and network activity with modest compute resources.




Acknowledgements
This research was funded by the National Institute of Mental Health, National Institutes of Health, grant number R01 MH086638. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
1.https://doi.org/10.3389/fninf.2022.884046
2.https://doi.org/10.3389/fninf.2022.847108
3.https://doi.org/10.1016/j.jneumeth.2013.09.011


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P260: Oscillatory activity patterns in a detailed model of the prefrontal cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P260 Oscillatory activity patterns in a detailed model of the prefrontal cortex

Antonio C. Roque*1, Marcelo R. S. Rempel1

1Department of Physics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP, Brazil

*Email: antonior@usp.br

Introduction

The prefrontal cortex (PFC) is a crucial brain region involved in executive functions, behavioral control, and affective modulation. PFC neurons exhibit distinct activity states, including asynchronous irregular firing during wakefulness and slow oscillations with UP/DOWN state transitions during deep sleep and anesthesia [1]. Previous computational models have investigated the mechanisms underlying these states, but many focus on general cortical networks or sensory cortices. This study aims to replicate and extend a detailed PFC network model to explore the conditions leading to oscillatory activity and UP/DOWN transitions.

Methods
A previously published PFC model [2] was reimplemented in Brian2, preserving its original parameters to ensure replication accuracy. Simulations were conducted to compare the original model with three parameter-modified variants. Variant A increased recurrent excitation, inducing hyperactive network fluctuations. Variant B intensified synaptic excitation, resulting in epileptiform-like bursting. Variant C introduced adaptation currents and stochastic external inputs, leading to oscillatory UP/DOWN transitions. Network activity was analyzed through spike raster plots, local field potential (LFP) estimation, and membrane potential dynamics.
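
A minimal Brian2 sketch of the Variant C ingredients (spike-triggered adaptation plus stochastic external drive); all parameter values here are illustrative, not those of the Hass et al. model [2]:

from brian2 import *

N_e, N_i = 800, 200
tau, tau_w = 20*ms, 200*ms
v_rest, v_thr, v_reset = -70*mV, -50*mV, -60*mV
b = 2*mV   # spike-triggered adaptation increment (illustrative, voltage-based)

eqs = """
dv/dt = (v_rest - v - w)/tau + sigma*sqrt(2/tau)*xi : volt
dw/dt = -w/tau_w : volt
sigma : volt (constant)
"""
G = NeuronGroup(N_e + N_i, eqs, threshold="v > v_thr",
                reset="v = v_reset; w += b", method="euler")
G.v = v_rest
G.sigma = 4*mV   # stochastic external drive

exc = Synapses(G[:N_e], G, on_pre="v_post += 0.2*mV")
inh = Synapses(G[N_e:], G, on_pre="v_post -= 1.0*mV")
exc.connect(p=0.1)
inh.connect(p=0.1)

spikes = SpikeMonitor(G)
run(2*second)
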
Results
The original model exhibited asynchronous irregular firing, consistent with physiological observations of cortical activity under moderate external drive. Variants A and B disrupted excitation-inhibition balance, promoting excessive synchrony. Variant C successfully generated low-frequency oscillations (~8 Hz) with UP/DOWN transitions, influenced by adaptive currents and external noise, mirroring previous findings in cortical dynamics.
Discussion
The results align with established models of cortical bistability and highlight the interplay between adaptation and external drive in shaping oscillatory states.




Acknowledgements
This work was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0). ACR is partially supported by a CNPq fellowship (grant 303359/2022-6).
References
[1] Wang, X. J. (2010). Neurophysiological and computational principles of cortical rhythms in cognition. Physiological Reviews, 90(3), 1195-1268. https://doi.org/10.1152/physrev.00035.2008
[2] Hass, J., Hertäg, L., & Durstewitz, D. (2016). A detailed data-driven network model of prefrontal cortex reproduces key features of in vivo activity. PLoS Computational Biology, 12, e1004930. https://doi.org/10.1371/journal.pcbi.1004930
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P261: Properties of intermittent synchrony of gamma rhythm oscillations
Tuesday July 8, 2025 17:00 - 19:00 CEST
P261 Properties of intermittent synchrony of gamma rhythm oscillations

Leonid L Rubchinsky*1,2, Quynh-Anh Nguyen3

1Department of Mathematical Sciences, Indiana University Indianapolis, Indianapolis, IN, USA
2Stark Neurosciences Research Institute, Indiana University School of Medicine, Indianapolis, IN, USA
3Department of Mathematical Sciences, University of Indianapolis, Indianapolis, IN, USA


*Email: lrubchin@iu.edu
Introduction

Synchronization of neural activity oscillations is thought to be important for a variety of neural phenomena. Most studies consider time-averaged measures of synchrony, such as phase-locking strength. However, if two signals have some degree of phase locking, it is possible to explore synchrony properties beyond the average phase-locking strength and to determine whether the oscillations are close to the synchronous state at any given time (during any oscillatory cycle) [1]. Thus, it is possible to characterize the temporal patterning of neural synchrony (e.g., many short desynchronizations vs. a few long desynchronizations), which may vary independently of the average synchrony strength [2].


Methods
To study how the properties of the temporal variability of synchronized oscillations are affected by the network properties, we consider populations of model neurons exhibiting pyramidal-interneuron gamma rhythm and apply the same time-series analysis techniques for characterization of temporal synchrony patterning as the ones used in the earlier experimental studies [1,2].
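
The published analysis works on a cycle-by-cycle first-return map of the phase difference [1]; as a simplified continuous-time proxy, desynchronization episode durations can be extracted as in the sketch below (the threshold choice is an assumption):

import numpy as np
from scipy.signal import hilbert

def desync_durations(x, y, threshold=np.pi/2):
    """Durations (in samples) of episodes where the phase difference between
    two band-passed signals stays outside the synchronous range."""
    phi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    phi = np.angle(np.exp(1j*phi))          # wrap to (-pi, pi]
    out = np.abs(phi) > threshold           # desynchronized samples
    edges = np.diff(out.astype(int))        # run-length encode the episodes
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if out[0]:
        starts = np.r_[0, starts]
    if out[-1]:
        ends = np.r_[ends, out.size]
    return ends - starts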


Results
Variation of synaptic strength affects the strength of time-averaged phase-locking between the networks. However, it also affects the temporal patterning of the synchrony, altering the distribution of desynchronization durations (similar to earlier studies of minimal models [3,4]). While synaptic strength affects both the synchrony level and its temporal patterning, these effects can be independent of each other: the former can remain practically fixed while the latter varies. Furthermore, the impacts of long-range and local synapses tend to be opposite: shorter desynchronization durations tend to be achieved by weakening long-range synapses and strengthening local synapses.


Discussion
Changes in the temporal patterning of the synchronization of oscillations may potentially affect how the networks are processing the external signals [4,5]. Frequent vs. rare switching between synchronized and desynchronized dynamics may lead to functionally different outcomes even though the average synchrony level between the networks is the same. Synaptic strength changes have thus potential to affect the responses of the neural circuits not only via the average synchrony strength, but also via the more subtle changes, such as altering the temporal patterning of synchronized dynamics, pointing to the potential importance of studying these phenomena.




Acknowledgements
References
1. Ahn, S., & Rubchinsky, L. L. (2013). Chaos, 23, 013138. https://doi.org/10.1063/1.4794793

2. Ahn, S., Zauber, S. E., Witt, T., et al. (2018). Clinical Neurophysiology, 129, 842-844. https://doi.org/10.1016/j.clinph.2018.01.063

3. Ahn, S., & Rubchinsky, L. L. (2017). Frontiers in Computational Neuroscience, 11, 44. https://doi.org/10.3389/fncom.2017.00044

4. Nguyen, Q. A., & Rubchinsky, L. L. (2021). Chaos, 31, 043133. https://doi.org/10.1063/5.0042451

5. Nguyen, Q. A., & Rubchinsky, L. L. (2024). Cognitive Neurodynamics, 18, 3821-3837. https://doi.org/10.1007/s11571-024-10150-9
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P262: Mechanisms of bistability in spinal motoneurons and its regulation
Tuesday July 8, 2025 17:00 - 19:00 CEST
P262 Mechanisms of bistability in spinal motoneurons and its regulation

Ilya A. Rybak*1, Yaroslav I. Molkov2, Thomas Stell2, Florent Krust3, Frédéric Brocard3
1 Department of Neurobiology and Anatomy, Drexel University College of Medicine,
Philadelphia, PA, USA
2 Department of Mathematics and Statistics and Neuroscience Institute, Georgia State
University, Atlanta, GA, USA
3 Institut de Neurosciences de la Timone, Aix Marseille University, CNRS, Marseille, France


*Email: rybak@drexel.edu


Introduction

Spinal motoneurons are the output elements of spinal circuitry that activate skeletal muscles to produce motor behaviors. The firing behavior of many motoneurons is characterized by bistability, allowing them to maintain self-sustained spiking activity initiated by a brief excitation and stopped by a brief inhibition. Serotonin can induce or amplify bistability, influencing motor behaviors. The biophysical mechanisms of bistability involve nonlinear interactions of specific ionic currents, and experimental studies have identified the ionic currents linked to bistability [1,2]. Using computational modeling, we simulate motoneuronal bistability and analyze the roles of key ionic currents in its generation and regulation.

Methods
We have developed a conductance-based mathematical model of a spinal motoneuron to explore and analyze the role of different ionic currents and their interactions in the generation and control of motoneuronal bistability under different conditions. The single-compartment model includes the main spike-generating currents, fast sodium (INaF) and delayed-rectifier potassium (IKdr), as well as persistent sodium (INaP), slowly inactivating potassium (IKv1.2, aka potassium A current, IKA), high-voltage-activated calcium (ICaL), Ca2+-activated nonspecific cation (ICAN), and Ca2+-dependent potassium (IKCa, associated with SK channels) currents. Additionally, the model incorporates intracellular Ca2+ dynamics, including a calcium-induced calcium release (CICR) mechanism.
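
To make the Ca2+-coupled interplay concrete, here is a minimal sketch of how the ICaL -> [Ca2+] -> ICAN/IKCa loop enters the membrane equation; all kinetics and parameter values are illustrative, not the authors' fitted model, and the spike-generating currents and CICR are omitted for brevity.

import numpy as np

# Illustrative conductances/kinetics (not the fitted model values)
C = 1.0                               # uF/cm^2
g_nap, E_na = 0.25, 50.0              # persistent Na+
g_cal, E_ca = 0.1, 80.0               # high-voltage-activated Ca2+ (L-type)
g_can, E_can = 0.7, 0.0               # Ca2+-activated nonspecific cation
g_kca, E_k = 1.0, -90.0               # Ca2+-dependent K+ (SK)
g_l, E_l = 0.1, -65.0
k_ca, tau_ca = 0.01, 200.0            # Ca2+ influx scaling and removal (ms)

def derivatives(v, ca, I_inj):
    m_nap = 1.0/(1.0 + np.exp(-(v + 47.0)/3.0))   # INaP steady-state activation
    m_cal = 1.0/(1.0 + np.exp(-(v + 30.0)/5.0))   # ICaL steady-state activation
    a_can = ca/(ca + 0.2)                         # ICAN gating by [Ca2+]
    a_kca = ca**2/(ca**2 + 0.3**2)                # IKCa (SK) gating by [Ca2+]
    I_cal = g_cal*m_cal*(v - E_ca)
    I_ion = (g_nap*m_nap*(v - E_na) + I_cal
             + g_can*a_can*(v - E_can) + g_kca*a_kca*(v - E_k)
             + g_l*(v - E_l))
    dv = (I_inj - I_ion)/C
    dca = -k_ca*I_cal - ca/tau_ca     # Ca2+ entry via ICaL (inward: I_cal < 0)
    return dv, dca
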
Results
Our simulations show that bistability in motoneurons relies on ICAN, activated by intracellular Ca2+ accumulated via ICaL and the CICR mechanism. Two other currents play modulatory roles, with INaP augmenting bistability and IKCa attenuating or abolishing it. The interplay between ICAN and IKCa shapes the membrane potential dynamics, producing post-activation afterdepolarization (ADP) or afterhyperpolarization (AHP), with IKv1.2 further modulating the membrane potential dynamics. Under certain conditions (such as an elevated extracellular K+ concentration), INaP can sustain bistability independently of ICAN.
Discussion
Our findings clarify the ionic basis of motoneuron bistability, underscoring its reliance on current interactions and external conditions, and offer insights into motor function and potential therapeutic strategies for motor disorders. Our results suggest that serotonin can induce or increase motoneuron bistability by amplifying ICAN (e.g., via an increased intracellular Ca2+ concentration due to an increased ICaL, or via 5-HT3 receptors), by activating INaP, or by suppressing IKCa (the latter two through 5-HT2 receptors).




Acknowledgements
None.
References
1. Harris-Warrick, R.M., Pecchi, E., Drouillas, B., Brocard, F., & Bos, R. (2024). Effect of size on expression of bistability in mouse spinal motoneurons. Journal of Neurophysiology, 131(4), 577-588. https://doi.org/10.1152/jn.00320.2023
2. Bos, R., Drouillas, B., Bouhadfane, M., Pecchi, E., Trouplin, V., Korogod, S.M., & Brocard, F. (2021). Trpm5 channels encode bistability of spinal motoneurons and ensure motor control of hindlimbs in mice. Nature Communications, 12(1), 6815. https://doi.org/10.1038/s41467-021-27113-x



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P263: Enhancing Neuronal Modeling with a Modified Hodgkin-Huxley Approach for Ion Channel Dynamics
Tuesday July 8, 2025 17:00 - 19:00 CEST
P263 Enhancing Neuronal Modeling with a Modified Hodgkin-Huxley Approach for Ion Channel Dynamics

Batoul M. Saab*1, Jihad Fahs2, Arij Daou1

1Biomedical Engineering Program, American University of Beirut, Lebanon
2Department of Electrical and Computer Engineering, American University of Beirut


*Email: bms28@mail.aub.edu


Introduction
The development of precise physical models is imperative for comprehending and manipulating system behavior. Neuronal firing models serve as a pivotal exemplar of intricate biological modeling, crucial for unraveling neural functionality across both normal cognitive processes and pathological disease states. Achieving accurate dynamical modeling of neuronal firing necessitates the meticulous fitting of model parameters through data assimilation, utilizing experimentally gathered recordings. This endeavor poses significant theoretical challenges due to two primary factors: (a) neuronal action potentials are the aggregate result of active nonlinear dynamics interconnecting various neuronal compartments, parameterized by a multitude of unknown variables, and (b) the stochastic nature of the noisy environmental stimuli influencing neuronal activity.

Methods
In practice, the fitting of a substantial number of parameters is constrained by the scarcity of observable outputs (recording sites), the complexity of the underlying models, and the time-intensive and expensive nature of conducting experiments under controlled conditions [1]. While neurophysiologists are restricted to a limited range of feasible injection current waveforms, we propose herein to investigate the parameter estimation conundrum of model neurons using diverse quality metrics and processing techniques. Our approach involves optimizing a biophysically realistic model for these neurons [2] using intracellular data obtained via the whole-cell patch-clamp technique from basal-ganglia projecting cortical neurons in brain slices of zebra finches.
Results
We adopt a different approach from that of Hodgkin and Huxley [3] in their seminal work: we model the activation functions directly using Hill functions rather than fitting the opening and closing rate constants with exponential functions. Our approach provides additional flexibility and is biologically interpretable. Furthermore, using this modified model, we conduct exhaustive searches over a large subset of the model parameters and test different functional metrics to check which one(s) generate reliable and realistic fits to the biological data.
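
As an illustration of the modified approach, a steady-state activation curve can be fit directly with a Hill function after shifting voltage into a positive range; this is a sketch, and the shift, data points, and initial guesses below are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def hill(v, vmax, k, n, shift=90.0):
    """Hill function of shifted voltage: the half-activation k and slope n are
    directly interpretable, unlike exponential rate-constant fits."""
    x = v + shift                      # shift (mV) keeps the argument positive
    return vmax * x**n / (x**n + k**n)

# Hypothetical activation data (voltage in mV, normalized conductance)
v = np.array([-70., -60., -50., -40., -30., -20., -10.])
m_inf = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.92, 0.97])
params, _ = curve_fit(lambda v, vmax, k, n: hill(v, vmax, k, n),
                      v, m_inf, p0=[1.0, 50.0, 4.0])
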
Discussion
The long-term benefits of this approach include the capability to examine large-scale dynamic phenomena in insightful manners, enhancing model accuracy and streamlining experimentation time. By refining parameter estimation methods and employing biologically interpretable mathematical representations, we aim to improve our understanding of neuronal firing dynamics and provide a robust framework for future computational neuroscience research.






Acknowledgements
This work was supported by the University Research Board (URB) and the Medical Practice Plan (MPP) grants at the American University of Beirut.
References
1. https://doi.org/10.48550/arXiv.1609.00832
2. https://doi.org/10.1152/jn.00162.2013
3. https://doi.org/10.1113/jphysiol.1952.sp004764
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P264: Paths to depolarization block: modeling neuron dynamics during spreading depolarization events
Tuesday July 8, 2025 17:00 - 19:00 CEST
P264 Paths to depolarization block: modeling neuron dynamics during spreading depolarization events

Maria Luisa Saggio*1, Damien Depannemaecker1, Roustem Khazipov2,3, Daria Vinokurova2, Azat Nasretdinov2, Viktor Jirsa1, Christophe Bernard1


1 Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
2Laboratory of Neurobiology, Institute of Fundamental Medicine and Biology, Kazan Federal University, Kazan, 420008, Russia
3Aix-Marseille University, INMED, INSERM, Marseille, 13273, France

*Email: maria-luisa.saggio@univ-amu.fr


Introduction
Spreading depolarization (SD) is a pathological state of the brain involved in several brain diseases, including epilepsy and migraine. It consists of a slowly propagating wave of nearly complete depolarization of neurons, classically associated with a depression of cortical activity. Recent findings challenge this association [1]: during SD events, which only partially propagate from the cortical surface to depth, neuronal activity may be suppressed, unchanged, or elevated. In layers invaded by SD, neurons lose their ability to fire, entering depolarization block (DB), while far from the SD neurons maintain their membrane potential. Neurons in between, however, unexpectedly displayed patterns of prolonged sustained firing.

Methods
In the present work [2], we build a phenomenological model incorporating key features of DB observed in this dataset (current-clamp patch-clamp recordings from L5 pyramidal neurons in the rat somatosensory cortex during evoked SDs) that can predict the new patterns observed. We model the L5 neuron as an excitable system close to a SNIC bifurcation [3], using the normal form of the unfolding of the degenerate Takens-Bogdanov singularity for the fast dynamics [4], a minimal yet dynamically rich dynamical system. The fast subsystem is modulated by the dynamics of two slow variables, implementing homeostatic and non-homeostatic reactions to inputs.
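
For intuition about one path to DB, even a standard conductance-based point neuron enters depolarization block under a slow depolarizing ramp; the sketch below uses the classic Hodgkin-Huxley equations, not the authors' normal-form model, and the ramp range is illustrative.

import numpy as np

def alpha_n(v): return 0.01*(v + 55)/(1 - np.exp(-(v + 55)/10))
def beta_n(v):  return 0.125*np.exp(-(v + 65)/80)
def alpha_m(v): return 0.1*(v + 40)/(1 - np.exp(-(v + 40)/10))
def beta_m(v):  return 4.0*np.exp(-(v + 65)/18)
def alpha_h(v): return 0.07*np.exp(-(v + 65)/20)
def beta_h(v):  return 1.0/(1 + np.exp(-(v + 35)/10))

dt, T = 0.01, 3000.0                       # ms
t = np.arange(0.0, T, dt)
I = np.linspace(0.0, 200.0, t.size)        # slow current ramp (uA/cm^2)
v, m, h, n = -65.0, 0.05, 0.6, 0.32
vs = np.empty(t.size)
for i in range(t.size):
    I_ion = 120*m**3*h*(v - 50) + 36*n**4*(v + 77) + 0.3*(v + 54.4)
    v += dt*(I[i] - I_ion)                 # C = 1 uF/cm^2
    m += dt*(alpha_m(v)*(1 - m) - beta_m(v)*m)
    h += dt*(alpha_h(v)*(1 - h) - beta_h(v)*h)
    n += dt*(alpha_n(v)*(1 - n) - beta_n(v)*n)
    vs[i] = v
# Spiking gives way to a depolarized fixed point near the end of the ramp:
# one simple route to depolarization block.
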
Results
The model’s bifurcation diagram provides a map of neural activity that includes baseline behavior, sustained oscillations, and DB. We identify five qualitatively different scenarios for the transition from healthy activity to DB, through specific sequences of bifurcations. These scenarios encompass and expand on the mechanisms for DB present in the modeling literature, account for the novel patterns observed in our dataset, and allow us to understand them from a unified perspective. Time series in our dataset are consistent with the scenarios; however, the presence of bistability, which distinguishes some of the scenarios, cannot be inferred from our analysis. We further use the model to investigate mechanisms for the return to baseline.
Discussion
Understanding how brain circuits enter and exit SD is important to designing strategies aimed at preventing or stopping it. In this work, we use modeling to gain mechanistic insights into the ways a neuron can transition to DB or different patterns of sustained oscillatory activity during SD events, as observed in our dataset. While our work provides a unified perspective to understanding the modeling of DB, ambiguities remain in the data analysis. These ambiguities could be solved by scenario-dependent theoretical predictions, for example for the effect of stimulation, for further experimental testing.




Acknowledgements
Funded by the Russian Science Foundation grant № 24-75-10054 to AN (https://rscf.ru/en/project/24-75-10054/) and the European Union grant № 101147319 to MS, DD and VJ.
References
[1] Nasretdinov, A., Vinokurova, D., Lemale, C. L., Burkhanova-Zakirova, G., Chernova, K., Makarova, J., ... & Khazipov, R. (2023). Diversity of cortical activity changes beyond depression during spreading depolarizations. Nature Communications, 14(1), 7729.
[2] Saggio, M. L., et al. (in preparation).
[3] Izhikevich, E. M. (2007). Dynamical systems in neuroscience. MIT press.
[4] Dumortier, F., Roussarie, R., & Sotomayor, J. (1991). Generic 3-parameter families of planar vector-fields, unfoldings of saddle, focus and elliptic-singularities with nilpotent linear parts.

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P265: Bifurcations and bursting in the Epileptor
Tuesday July 8, 2025 17:00 - 19:00 CEST
P265 Bifurcations and bursting in the Epileptor

Maria Luisa Saggio*1, Viktor Jirsa1

1Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France

*Email: maria-luisa.saggio@univ-amu.fr

Introduction

Large-scale patient-specific models could provide clinicians with an additional tool for evaluating the best surgery strategy for drug-resistant epileptic patients, leveraging the possibility of revealing otherwise hidden complex network and dynamical effects, testing clinical hypotheses, or finding unbiased optimal surgery strategies. One of these frameworks, the Virtual Epileptic Patient (VEP), is currently undergoing validation in a prospective clinical trial involving over 300 patients. It employs the Epileptor [1], a phenomenological mesoscopic model for the most common seizure class, in which the transitions between interictal and ictal states are modeled as onset/offset bifurcations.


Methods
The Epileptor's onset/offset bursting class is known in the dynamical systems literature as square-wave bursting. In this study [2], we utilize insights from a more generic model of square-wave bursting, based on the unfolding of a high-codimension singularity, to guide the bifurcation analysis of the Epileptor and gain a deeper understanding of the model and the role played by its parameters. We use analytical methods, numerical continuation of bifurcation curves, and model simulations.
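
For readers unfamiliar with the class, the Hindmarsh-Rose model is the compact, standard example of a square-wave burster (saddle-node onset, homoclinic offset); the sketch below is illustrative of the class only and is not the Epileptor itself. Parameters are the commonly used textbook values.

import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, state, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.003, s=4.0, x_r=-1.6, I=2.5):
    x, y, z = state
    dx = y - a*x**3 + b*x**2 - z + I   # fast membrane variable
    dy = c - d*x**2 - y                # fast recovery variable
    dz = r*(s*(x - x_r) - z)           # slow adaptation (sets burst period)
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0, 2000), [-1.6, -10.0, 2.0], max_step=0.1)
# sol.y[0] alternates between quiescent and fast-spiking phases: square waves.
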
Results
We identify a key region in parameter space of topological equivalence between the two models and demonstrate how the Epileptor's parameters can be modified to produce activities of other seizure classes, as predicted by the generic model approach. Finally, we reveal how the interaction with an additional mechanism for spike-and-wave discharges present in the Epileptor alters the bifurcation structure of the main burster, pushing it across a sequence of supercritical Hopf bifurcations that modulate the oscillatory activity typical of the ictal state.
Discussion
Exploring the full potential of the Epileptor model in terms of bursting dynamics and understanding how to set the parameters to obtain different classes is an important step to (i) enhance our understanding of the model at the core of the VEP framework and (ii) explore the possibility of further personalizing the VEP model. In fact, patients may experience seizures compatible with classes other than square-wave [3]. While the impact of the class on the VEP outcome has not yet been investigated, we know that different classes may exhibit variations in synchronization and propagation properties, warranting further exploration.





Acknowledgements
This research has received funding from EU’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project).
References
[1] Jirsa, V. K., Stacey, W. C., Quilichini, P. P., Ivanov, A. I., & Bernard, C. (2014). On the nature of seizure dynamics. Brain, 137(8), 2210-2230.
[2] Saggio, M. L., & Jirsa, V. (2024). Bifurcations and bursting in the Epileptor. PLOS Computational Biology, 20(3), e1011903.
[3] Saggio, M. L., Crisp, D., Scott, J. M., Karoly, P., Kuhlmann, L., Nakatani, M., ... & Stacey, W. C. (2020). A taxonomy of seizure dynamotypes. eLife, 9, e55632.


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P266: Two distinct bursting mechanisms in cold-sensitive neurons of Drosophila Larva
Tuesday July 8, 2025 17:00 - 19:00 CEST
P266 Two distinct bursting mechanisms in cold-sensitive neurons of Drosophila Larva

Akira Sakurai, Natalia V. Maksymchuk, Sergiy M. Korogod, Daniel N. Cox, Gennady S. Cymbalyuk*
Neuroscience Institute, Georgia State University, Atlanta, GA 30302-5030, USA

*Email: gcymbalyuk@gmail.com


Introduction
In Drosophila larvae, noxious low temperatures are detected by CIII primary sensory neurons lining the inside of the body wall [1,2]. About half of these neurons respond to rapid temperature drops with transient bursts, producing a clear spike-rate peak that likely signals the rapid change. Previously, we developed a biophysical model that captured various extracellularly recorded cold-evoked CIII responses [2]. Here, having overcome the challenge posed by the small size of CIII neurons and obtained intracellular recordings, we used the burst waveforms to identify two distinct types of bursting generated by these neurons.
Methods
We used intracellular and extracellular electrophysiological recordings and modeling to investigate the mechanisms underlying pattern generation by CIII neurons. We upgraded the model [2] by including the dynamics of Cl-, Na+, and K+ concentrations, since a Ca2+-activated Cl- current (ICaCl) was implicated in CIII dynamics [3]. We investigated the patterns caused by injected current, a drop in extracellular Cl-, and a drop in temperature. We also considered a simplified model with an effective (e) leak current, in which Cl- currents are lumped together with Na+ and K+ leak currents. We map oscillatory and silent regimes under variation of EeLeak and geLeak and compare the model activity to the experimental data.
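
The lumping into an effective leak follows from matching the summed ohmic leak currents; a minimal sketch (the example concentrations are illustrative, not measured values):

import numpy as np

R_GAS, F = 8.314, 96485.0

def nernst(z, c_out, c_in, temp_c):
    """Nernst potential in mV for an ion of valence z."""
    t_k = temp_c + 273.15
    return 1000.0 * (R_GAS*t_k)/(z*F) * np.log(c_out/c_in)

def effective_leak(g_cl, e_cl, g_na, e_na, g_k, e_k):
    """Lump Cl-, Na+ and K+ leaks into one effective leak so that
    g_e*(V - E_e) = g_cl*(V - e_cl) + g_na*(V - e_na) + g_k*(V - e_k)."""
    g_e = g_cl + g_na + g_k
    e_e = (g_cl*e_cl + g_na*e_na + g_k*e_k)/g_e
    return g_e, e_e

# E.g., lowering extracellular Cl- depolarizes E_Cl and hence E_eLeak:
e_cl = nernst(-1, c_out=40.0, c_in=10.0, temp_c=24.0)   # mM values illustrative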
Results
At ambient temperatures, CIII neurons exhibited a stationary state around -40 mV and sporadic spikes at 1.0 ± 1.3 Hz (N = 20). In the activity of 90% of sporadically spiking neurons, elliptic bursts with an intra-burst spike frequency of 6.0 ± 1.7 Hz were detected. With a temperature drop from 24°C to 10°C, CIII neurons depolarized and spiked at 2.9 ± 1.5 Hz. In 45% of neurons, square-wave bursts with an intra-burst spike frequency of 38.2 ± 19.5 Hz were observed. Similar square-wave bursting and high-frequency spiking were induced by direct depolarizing injected currents. Low-Cl⁻ conditions induced transitions between patterns of activity dominated by spiking, fast bursting, or slow bursting.
The model reproduces the waveform properties of the experimentally recorded bursting under variation of the injected current, extracellular Cl-, and temperature. We found large parameter domains of silent and spiking regimes at low and high EeLeak, respectively, and a domain of square-wave bursting in an intermediate range of EeLeak. In a certain range of geLeak, as EeLeak grows, the model transitions from silence to elliptic bursting and then to spiking. These transitions qualitatively map onto the transitions observed in the experimental data.
Conclusion

We identified two distinct types of bursting patterns, elliptic and square-wave, in the responses of CIII neurons. These findings enhance our understanding of temperature sensing in insect peripheral sensory neurons, providing insights into how sensory systems respond to environmental stimuli.



Acknowledgements
NIH grant R01NS115209 to DNC and GSC.
References
1. Turner, H. N., et al. (2016). Current Biology, 26(23), 3116-3128. https://doi.org/10.1016/j.cub.2016.09.038
2. Maksymchuk, N., et al. (2022). Frontiers in Cellular Neuroscience, 16, 831803. https://doi.org/10.3389/fncel.2022.831803
3. Himmel, N. J., et al. (2023). eLife, 12, e76863. https://doi.org/10.7554/eLife.76863
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P267: Real-time closed-loop perturbation of electrically coupled neurons to characterize sequential dynamics in CPG circuits
Tuesday July 8, 2025 17:00 - 19:00 CEST
P267 Real-time closed-loop perturbation of electrically coupled neurons to characterize sequential dynamics in CPG circuits

Pablo Sanchez-Martin*¹, Alicia Garrido-Peña¹, Manuel Reyes-Sanchez, Irene Elices¹, Rafael Levi¹, Francisco B Rodriguez¹, Pablo Varona¹
1. Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politecnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
*Email: pablo.sanchezm@uam.es

Introduction
Dynamical invariants, in the form of robust cycle-by-cycle relationships between the intervals that build neural sequences, have recently been observed in central pattern generator (CPG) circuits [1]. In this study, we analyze the effect of different closed-loop perturbations on electrically coupled neurons that are part of a CPG to determine the associated modulation of sequence interval variability, synchronization, and dynamical invariants.


Methods
This research was performed in the pyloric CPG, involving both voltage recordings and current injection in the PD neurons, which are electrically coupled cells in this circuit. Additionally, we recorded extracellularly from the LP neuron to quantify the LP-PD delay, an interval that builds a dynamical invariant with the cycle-by-cycle period. We implemented an active electrode compensation procedure [2] in the RTXi real-time software, which prevents the recording artifact when using a single electrode. Three closed-loop perturbations were delivered to the PD neurons: (1) a Hindmarsh-Rose (HR) model neuron electrically coupled to a PD neuron, building a biohybrid circuit; (2) a square-pulse current injection during the PD burst; and (3) an additional artificial electrical synapse between the two PD neurons.
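
The third perturbation reduces, at each real-time step, to the following update rule (a Python sketch; the actual implementation runs as a compiled RTXi module, and the function name is hypothetical):

def electrical_synapse(v_pd1, v_pd2, g):
    """Artificial bidirectional electrical synapse: the current injected into
    each PD neuron is proportional to the instantaneous voltage difference.
    g > 0 adds coupling; g < 0 implements the negative coupling used here."""
    i_into_pd1 = g * (v_pd2 - v_pd1)
    i_into_pd2 = g * (v_pd1 - v_pd2)
    return i_into_pd1, i_into_pd2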



Results
The electrical coupling with a negative artificial bidirectional synapse did not change the existing invariant relation between the LP-PD delay and the period, but it increased the rhythm variability and the Victor-Purpura distance, i.e., it reduced the PD synchronization level. The square-pulse perturbation decreased the variability, and thus the linear relationship of the LP-PD delay was weakened. The level of synchronization between the two PDs was also reduced by the pulse perturbation with respect to the control. The biohybrid circuit, built by adding an additional electrical coupling to an artificial HR neuron, also reduced the variability but changed the intercept of the linear relationship, i.e., for the same LP-PD delays the PD period was shorter.


Discussion
In this study, we effectively disrupted the dynamics of two electrically coupled neurons with three different perturbations, injecting currents that modulated their synchronization level. This modified not only the dynamics of these neurons but also the variability of the whole circuit and the associated dynamical invariants. All protocols proved effective for studying the relationship between electrical coupling and sequential dynamics with the help of real-time closed-loop neurotechnologies.




Acknowledgements
Work funded by PID2024-155923NB-I00, CPP2023-010818, PID2023-149669NB-I00 and PID2021-122347NB-I00.
References
[1] I. Elices, R. Levi, D. Arroyo, F. B. Rodriguez, and P. Varona. Robust dynamical invariants in sequential neural activity. Scientific Reports, 9(1):9048, 2019.
[2] R. Brette, Z. Piwkowska, C. Monier, M. Rudolph-Lilith, J. Fournier, M. Levy, and A. Destexhe. High-resolution intracellular recordings using a real-time computational model of the electrode. Neuron, 59(3):379–391, 2008.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P268: Modelling Nitric Oxide Diffusion and Plasticity Modulation in Cerebellar Learning
Tuesday July 8, 2025 17:00 - 19:00 CEST
P268 Modelling Nitric Oxide Diffusion and Plasticity Modulation in Cerebellar Learning

Carlo Andrea Sartori1*, Alessandra Maria Trapani1, Benedetta Gambosi1, Alessandra Pedrocchi1, Alberto Antonietti1

1 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy

* Email:carloandrea.sartori@polimi.it
Introduction

Nitric oxide (NO) is an important molecule in processes such as synaptic plasticity and memory formation [1]. In the cerebellum, NO is produced by neuronal NO synthase, expressed in granule cells and molecular layer interneurons [2]. NO diffuses freely in tissue beyond synaptic connections, functioning as a volume neurotransmitter. At parallel fiber-Purkinje cell (pf-PC) synapses [4,5], NO is necessary but not sufficient for both long-term potentiation and long-term depression [6,7]. This study investigates NO's role in cerebellar learning mechanisms using a biologically realistic spiking neural network, implementing a NO-dependent plasticity model and testing it with an eye-blink classical conditioning (EBCC) protocol [8,9].


Methods
We developed the NO Diffusion Simulator (NODS), a Python module modeling NO production and diffusion within a spiking neural network. The model represents the chemical cascade triggered by calcium influx during spikes, leading to NO production [10]. NO diffusion is modeled using the heat-diffusion equation with an inactivation term, solved with Green's function [11]. We implemented a NO-dependent supervised spike-timing-dependent plasticity rule [12] in which a term weights synaptic updates based on NO concentration. The model was tested using the EBCC protocol, in which the cerebellum learns to associate a conditioned stimulus (CS) with an unconditioned stimulus (US), generating anticipatory conditioned responses (CR) (Fig. 1).
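
The diffusion step amounts to evaluating the Green's function of the diffusion equation with first-order inactivation; a minimal sketch (D here is a commonly cited literature value for NO, and tau is a placeholder, not NODS defaults):

import numpy as np

def no_impulse_response(r_um, t_s, Q=1.0, D=3300.0, tau=1.0):
    """[NO] at distance r (um) and time t (s) after an impulse release of
    amount Q from a point source, with first-order inactivation (rate 1/tau):
    C(r,t) = Q/(4*pi*D*t)^(3/2) * exp(-r^2/(4*D*t) - t/tau)."""
    return Q/(4*np.pi*D*t_s)**1.5 * np.exp(-r_um**2/(4*D*t_s) - t_s/tau)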
Results
We first validated the equation in NODS against single-source NO production simulated with NEURON [13]. Then we investigated the effect of NO on cerebellar learning through the addition of different levels of background noise. In principle, the incoming CS and US stimuli should exert depression only at the pf-PC synapses active right before the US. Adding increasing noise directly impairs these learning processes. When including NO-dependent plasticity, we observe a different behavior during a simulation with CS and 4 Hz background noise: only the pf-PC synapses receiving the CS stimuli have sufficient NO for plasticity, while the ones randomly activated by noise remain under threshold.
Discussion
The results demonstrate that NO significantly affects synaptic plasticity, dynamically adjusting learning rates based on synaptic activity patterns. This mechanism enhances the cerebellum's capacity to prioritize relevant inputs and mitigate learning interference by selectively modulating synaptic efficacy. Our results suggest that NO can act as a noise filter, focusing cerebellar learning only on the inputs relevant to the ongoing task. The NODS implementation connects molecular processes with learning at the level of large spiking neural networks. This work underscores the critical role of NO in cerebellar function and offers a robust framework for exploring NO-dependent plasticity in computational neuroscience.





Figure 1. Spiking neural network with NODS mechanism. (A) SNN of the cerebellum microcircuit, with the different populations and detail of CS, US and Background Noise stimuli. (B) One trial of the EBCC protocol with timing of the stimuli. (C) The NO production mechanism at a single synapse. (D) NO as volume transmitter at different pf-PC synapses.
Acknowledgements
This research is supported by Horizon Europe Program for Research and Innovation under Grant Agreement No. 101147319 (EBRAINS 2.0). The simulations in NEURON were implemented by Stefano Masoli, Department of Brain and Behavioral Sciences, Università di Pavia, Pavia, Italy.
References
1. https://doi.org/10.1126/science.1470903
2. https://doi.org/10.1016/s0896-6273(00)80340-2
3. https://doi.org/10.1523/JNEUROSCI.4064-13.2014
4. https://doi.org/10.1074/jbc.M111.289777
5. https://doi.org/10.1016/0006-2952(89)90403-6
6. https://doi.org/10.1073/pnas.122206399
7. https://doi.org/10.1016/j.celrep.2016.03.004
8. https://doi.org/10.3389/fnsys.2022.919761
9. https://doi.org/10.3389/fninf.2018.00088
10. https://doi.org/10.1016/j.niox.2009.07.002
11. https://doi.org/10.3389/fninf.2019.00063
12. https://doi.org/10.1109/TBME.2015.2485301
13. https://doi.org/10.1007/978-3-319-65130-9_9
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P269: Modeling Calcium-Mediated Spike-Timing Dependent Plasticity in Spiking Neural Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P269 Modeling Calcium-Mediated Spike-Timing Dependent Plasticity in Spiking Neural Networks

Francesco De Santis1*, Carlo Andrea Sartori1*, Leo Cottini1, Riccardo Mainetti1, Matteo Maresca1, Alessandra Pedrocchi1, Alberto Antonietti1

1 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy

* Email: francesco.desantis@polimi.it, carloandrea.sartori@polimi.it
Introduction

Calcium dynamics serve as a bridge between neuronal activity and synaptic plasticity, orchestrating the biochemical cascades that determine synaptic strengthening (LTP) or weakening (LTD) [1]. Extending the work of Graupner and Brunel [2], Chindemi and colleagues recently introduced a data-constrained model of plasticity based on postsynaptic calcium dynamics in the neocortex [3]. The model was developed for NEURON simulations, capturing diverse plasticity dynamics with a single parameter set across pyramidal cell types. In this work, we translated Chindemi's model to a spiking neural network by implementing a point-neuron model and a unified synapse, testing it across various calcium-concentration scenarios.

Methods
We developed our model using NESTML [4], an open-source language integrated with the NEST simulator [5], enabling the application of our models to diverse neural networks. The implemented neuron builds upon the existing Hill-Tononi (HT) model, which already incorporates detailed NMDA and AMPA conductance dynamics [6]. As in Chindemi et al., the synapse is based on the Tsodyks-Markram (TM) stochastic synapse model [7], allowing manipulation of the vesicle release probability. Following paired pre- and post-synaptic activity, calcium-dependent processes influence synaptic efficacy on both sides. Our implementation extends these established components into a comprehensive framework that captures the relationship between calcium dynamics and synaptic plasticity while maintaining computational efficiency for network-scale simulations.
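
The core of the calcium-based rule (refs [2,3]) is a bistable efficacy variable rho driven by threshold crossings of the calcium trace; a minimal Euler sketch, with illustrative parameter values close to but not necessarily the published ones:

def update_rho(rho, ca, dt, tau_rho=150.0, rho_star=0.5,
               gamma_p=320.0, theta_p=1.3, gamma_d=200.0, theta_d=1.0):
    """One Euler step of the efficacy dynamics: rho in [0, 1] has stable
    states at 0 and 1; calcium above theta_p drives potentiation, calcium
    above theta_d drives depression (time constants in illustrative units)."""
    drift = -rho*(1.0 - rho)*(rho_star - rho)
    drift += gamma_p*(1.0 - rho)*(ca > theta_p)
    drift -= gamma_d*rho*(ca > theta_d)
    return rho + dt*drift/tau_rho
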
Results
We first validated our model of the TM stochastic synapse paired with HT modifications that account for calcium currents in the postsynaptic neuron. Then, we connected two neurons and stimulated either the pre- or the post-synaptic neuron directly, creating NMDA and VDCC calcium currents, respectively. Next, we tested the paired activation of pre- and post-synaptic neurons at varying time intervals. The results of these simulations are comparable with those of Chindemi et al. Finally, we adjusted the LTD and LTP thresholds to match the calcium signal properties of pyramidal neurons across different cortical layers. Our simpler point-neuron model successfully replicated findings obtained with multicompartmental models while maintaining computational efficiency.
Discussion
Our work implements calcium-dependent plasticity in an efficient model for spiking neurons. We validated that our point-neuron approach reproduces the complex calcium dynamics and plasticity outcomes across different stimulation patterns. By retaining the ability to capture layer-specific plasticity with adjusted LTP/LTD thresholds, we preserve biological accuracy while reducing computational demands. Our efficient implementation of calcium-dependent plasticity can enable large-scale spiking neural network simulations to study how synaptic mechanisms affect network functionality.



Acknowledgements
The work of AA, AP, CAS, and FDS in this research is supported by the Horizon Europe Program for Research and Innovation under Grant Agreement No. 101147319 (EBRAINS 2.0) and EBRAINS-Italy (European Brain ReseArch INfrastructureS-Italy), granted by the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union – NextGenerationEU (Project IR0000011, CUP B51E22000150006).

References
1. https://doi.org/10.1016/S0959-4388(99)80045-2
2. https://doi.org/10.1073/pnas.1109359109
3. https://doi.org/10.1038/s41467-022-30214-w
4. https://doi.org/10.5281/zenodo.12191059
5. https://doi.org/10.4249/scholarpedia.1430
6. https://doi.org/10.1152/jn.00915.2004
7. https://doi.org/10.1073/pnas.94.2.719
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P270: Subthreshold extracellular electric fields alter how neurons respond to naturally occurring synaptic inputs in temporal interference stimulation
Tuesday July 8, 2025 17:00 - 19:00 CEST
P270 Subthreshold extracellular electric fields alter how neurons respond to naturally occurring synaptic inputs in temporal interference stimulation

Ieva Kerseviciute1, Michele Migliore2, Rosanna Migliore2, Ausra Saudargiene*3, Adam Williamson4

1The Life Sciences Center, Vilnius University, Vilnius, Lithuania
2Institute of Biophysics, National Research Council, Palermo, Italy
3Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
4St. Anne’s University Hospital, Brno, Czech Republic

*Email: ausra.saudargiene@lsmu.lt

Introduction



Temporal interference (TI) stimulation enables noninvasive and spatially selective neuromodulation of deep brain structures [1,2]. This approach exploits the nonlinear response of neurons to electric fields by delivering multiple kHz-range oscillations, which interfere and generate an effective low-frequency envelope only at the target site [1,2]. This mechanism allows for selective activation of deep neuronal populations without affecting the overlying tissue. Recent studies have successfully applied this stimulation to the human hippocampus, showing significant effects on memory function [3, 4]. Despite its potential for clinical applications, the neural mechanisms underlying TI-induced effects remain poorly understood.

Methods

We used a biophysically accurate computational neuron model to investigate how subthreshold electric fields influence neural activity in CA1 hippocampal pyramidal neurons. These neurons receive inputs from Schaffer collaterals, known to play an integral role in memory formation. To replicate this connectivity, we implemented AMPA and NMDA synapses at the proximal apical dendrites, with synaptic activity driven by hippocampal CA3 activity recorded in vivo. The model neuron was placed in a uniform electric field, simulating the effects of an externally applied field between two conducting plates.
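
A minimal sketch of uniform-field stimulation in NEURON: insert the extracellular mechanism and drive e_extracellular from each segment's position projected onto the field direction (quasi-static approximation). The toy two-compartment cell, field amplitude, carrier frequency, and the crude arc-length projection are all assumptions, not the study's detailed CA1 model.

from neuron import h
import numpy as np

h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
dend = h.Section(name="dend")
dend.connect(soma)
dend.L, dend.nseg = 400, 21
for sec in (soma, dend):
    sec.insert("hh")
    sec.insert("extracellular")   # exposes e_extracellular per segment

E_mV_per_mm = 5.0                 # assumed field amplitude along the axis

def set_field(t_ms, f_hz):
    """Uniform field: extracellular potential linear in the coordinate along
    the field direction (here, crudely, arc length from the soma)."""
    amp = E_mV_per_mm * np.sin(2*np.pi*f_hz*t_ms/1000.0)
    for sec in (soma, dend):
        for seg in sec:
            x_mm = seg.x*sec.L/1000.0
            seg.e_extracellular = -amp*x_mm

# Stepping loop: update the field at every time step
h.finitialize(-65)
while h.t < 100.0:
    set_field(h.t, 2000.0)        # 2 kHz carrier (assumed)
    h.fadvance()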

Results

Consistent with previously published modeling results [4], we observed that the electric field strength required to elicit action potentials grew with increasing carrier frequency. Moreover, the subthreshold electric field strength also depended on the orientation of the model neuron in the electric field, requiring a higher amplitude when the neuron was perpendicular rather than parallel to the field direction. Following a long-term potentiation (LTP) induction protocol, the subthreshold stimulation affected the synaptic weight distribution by altering the spike timing, firing frequency, and inter-spike interval patterns. A similar effect was observed with naturally occurring synaptic inputs.

Discussion

In summary, our model shows that subthreshold electric fields alter how neurons respond to naturally occurring synaptic inputs by affecting underlying long-term synaptic plasticity processes. The impact of TI on synaptic plasticity may underlie its effects on memory enhancement, observed in human experiments. The stimulation efficacy is partly determined by the neuron orientation in the electric field, as not all neurons are affected equally. Since our study focuses on single-neuron processes, further research is needed to explore network-level effects.





Acknowledgements




We acknowledge a contribution from the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union – NextGenerationEU (Project IR0000011, CUP B51E22000150006, "EBRAINS-Italy"), and support from EU HORIZON-INFRA-2022-SERV-B-01, project 101147319 (EBRAINS 2.0).





References


[1]https://doi.org/10.1016/j.cell.2017.05.024
[2]https://doi.org/10.1126/science.aau4915
[3]https://doi.org/10.1038/s41593-023-01456-8

[4]https://doi.org/10.1101/2024.12.05.24303799
Speakers
Rosanna Migliore

Researcher, Istituto di Biofisica - CNR
Computational Neuroscience, EBRAINS-Italy Research Infrastructure for Neuroscience: https://ebrains-italy.eu/
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P271: Accelerated cortical microcircuit simulations on massively distributed memory
Tuesday July 8, 2025 17:00 - 19:00 CEST
P271 Accelerated cortical microcircuit simulations on massively distributed memory

Catherine M. Schoefmann*1,2, Jan Finkbeiner1,2, Susanne Kunkel1


1Neuromorphic Software Ecosystems (PGI-15), Juelich Research Centre, Juelich, Germany
2RWTH Aachen University, Aachen, Germany

*Email: c.schoefmann@fz-juelich.de
Introduction
Comprehensive simulation studies of the dynamical regimes of cortical networks with realistic synaptic densities depend on compute systems capable of running such models significantly faster than biological real time. Since CPUs are still the primary target for established simulators, an inherent bottleneck caused by the von Neumann design is frequent memory access with minimal compute. Distributed memory architectures, popularized by the need for massively parallel and scalable processing for AI workloads, offer an alternative.

Methods
We introduce extensible simulation technology for spiking networks on massively distributed memory using Graphcore's IPUs (https://www.graphcore.ai). We demonstrate the efficiency of the new technology with simulations of the microcircuit model of [1], commonly used as a reference benchmark. The model represents 1 mm² of cortical tissue, spanning around 300 million synapses, and is considered a building block of cortical function. Spike dynamics are statistically verified by comparison with the same simulations run on CPU with NEST [2].
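
A sketch of the kind of statistical check involved, comparing per-neuron firing-rate distributions of one population between the two backends; the array layout, function names, and significance level are assumptions, not the authors' verification code.

import numpy as np
from scipy.stats import ks_2samp

def rates_hz(spikes, n_neurons, t_ms):
    """Per-neuron firing rates from an (n_spikes, 2) array of [neuron_id, t_ms]."""
    counts = np.bincount(spikes[:, 0].astype(int), minlength=n_neurons)
    return counts / (t_ms / 1000.0)

def rates_match(spikes_ipu, spikes_nest, n_neurons, t_ms, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on the rate distributions."""
    _, p = ks_2samp(rates_hz(spikes_ipu, n_neurons, t_ms),
                    rates_hz(spikes_nest, n_neurons, t_ms))
    return p > alpha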

Results
We present a custom semi-directed communication algorithm especially suited for distributed and memory-constrained environments, which allows a controlled trade-off between performance and memory usage. Our simulation code achieves an acceleration factor of 15x relative to real time for the full-scale cortical microcircuit model on the smallest device configuration capable of fitting the model in memory. This is competitive with the current record performance on a static FPGA cluster [3], and further speedup can be achieved at the cost of lower-precision weights.

Discussion
With negligible compilation times, the simulation code can be extended seamlessly to a wide range of synapse and neuron models, as well as structural plasticity, unlocking a new class of models for extensive parameter-space explorations in computational neuroscience. Furthermore, we believe that our algorithm for scalable and parallelisable communication can be efficiently applied to other platforms.
Acknowledgements
The presented conceptual and algorithmic work is part of our long-term collaborative project to provide the technology for neural systems simulations (https://www.nest-initiative.org).
Compute time on a Graphcore Bow Pod64 has been granted by Argonne Leadership Computing Facility (ALCF).
This work is partly funded by Volkswagen Foundation.
References
[1]:https://doi.org/10.1093/cercor/bhs358
[2]:https://doi.org/10.5281/ZENODO.12624784
[3]:https://doi.org/10.3389/fncom.2023.1144143


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P272: Modeling unsigned temporal difference errors in apical dendrites of L5 neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P272 Modeling unsigned temporal difference errors in apical dendrites of L5 neurons

Gwendolin Schoenfeld1,2,3, Matthias C. Tsai*1,4, Walter Senn4, Fritjof Helmchen1,2,3

1Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
2Neuroscience Center Zürich, University of Zurich and ETH Zurich, Zurich, Switzerland
3University Research Priority Program (URPP), Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
4Computational Neuroscience Group, Department of Physiology, University of Bern, Bern, Switzerland

*Email: tsai@hifo.uzh.ch

Introduction

Learning goal-directed behavior requires the association of salient sensory stimuli with behaviorally relevant outcomes. In the mammalian neocortex, dendrites of pyramidal neurons are suitable association sites, but how their activities adapt during learning remains elusive. Computation-driven theories of cortical function have conjectured that apical dendrites should encode error signals [1,2]. However, little biological evidence has been found to support these proposals. Therefore, we propose a biology-driven approach instead and attempt to explain the function of bottom-up and top-down integration in a model of pyramidal neurons based on experimentally observed apical tuft responses in the sensory cortex during learning.

Methods
We track calcium transients in apical dendrites of layer 5 pyramidal neurons in mouse barrel cortex during texture discrimination learning [3]. Based on this experimental data, we implement a computational model (Fig 1a) incorporating: top-down signals encoding the unsigned temporal difference (TD) error [4], bottom-up signals encoding sensory information, multiplicative gain modulation of firing rates by apical tuft activity, and a local associative plasticity rule comparing top-down signals and somatic firing to dictate apical synapse plasticity. Finally, we test the relevance of apical tuft activity by inhibiting apical tufts during reward and punishment both in our model and experimentally using optogenetics (Fig 1b).
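
For concreteness, here is a tabular sketch of how the unsigned TD error evolves at the two trial events (cue and outcome) over training; the learning rate, discount factor, and state layout are assumptions for illustration only.

import numpy as np

gamma, lr, n_trials = 0.98, 0.1, 200
V = np.zeros(3)                      # state values: pre-cue, post-cue, terminal
trace = np.zeros((n_trials, 2))      # |delta| at cue and at outcome per trial
for trial in range(n_trials):
    delta_cue = 0.0 + gamma*V[1] - V[0]   # cue transition: no reward yet
    V[0] += lr*delta_cue
    delta_out = 1.0 + gamma*V[2] - V[1]   # outcome transition: reward delivered
    V[1] += lr*delta_out
    trace[trial] = np.abs([delta_cue, delta_out])
# |delta| at the outcome decays monotonically with training, while |delta| at
# the cue first grows and then decays as the cue value saturates, matching the
# two apical response types described in the Results.
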
Results
We identify two apical dendrite response types: 1) responses to unexpected outcomes in naïve mice that decrease with growing task proficiency, 2) responses associated with salient sensory stimuli, especially the outcome-predicting texture touch, that strengthen upon learning (Fig 1c). These response types match two distinct unsigned components of the temporal difference error. Our computational model demonstrates how these apical responses can support learning by selectively amplifying the responses of neurons conveying task-relevant sensory signals. This model is contingent upon top-down signals encoding unsigned TD error components, bottom-up signals encoding sensory stimuli, and apical synapses following an associative plasticity rule.
Discussion
Our findings indicate that L5 tuft activities might transmit a salience signal responsible for selectively amplifying neuronal activity during relevant time windows. This picture is in line with theories claiming that the top-down feedback onto apical dendrites is involved in credit assignment. However, instead of transmitting neuron-specific signed errors, our work suggests that the brain could employ a two-step strategy to assign credit to individual neurons. By first solving the temporal credit assignment problem, a temporally precise top-down salience signal can be broadcast to sensory regions, which in a second step — involving local associative plasticity — can be leveraged to recognize and amplify task-relevant responses.




Figure 1. Fig 1. a, Left: Two unsigned TD error components. Middle: Model schematic. Right: State-value estimate and its temporal derivative (signed and unsigned). b, Optogenetic inhibition time during trials (top) and across training (middle). Bottom: Number of trials to reach expert performance in mice and model. c, Calcium imaging (left) and its model (right) across learning for sensory or outcome types.
Acknowledgements
This work was supported by the Swiss National Science Foundation, the European Research Council, the Horizon 2020 European Framework Programme, and the University Research Priority Program (URPP) ‘Adaptive Brain Circuits in Development and Learning’ (AdaBD) of the University of Zurich.
References
1. https://doi.org/10.1016/j.tins.2022.09.007
2. https://doi.org/10.1016/j.tics.2018.12.005
3. https://doi.org/10.1101/2021.12.28.474360
4. https://doi.org/10.1007/BF00115009
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P273: A three-state model for the temporal statistics of mouse behavior
Tuesday July 8, 2025 17:00 - 19:00 CEST
P273 A three-state model for the temporal statistics of mouse behavior

Friedrich Schuessler*1, Paul Mieske2, Anne Jaap3, Henning Sprekeler1

1Department of Computer Science, Technical University Berlin, Germany
2German Center for the Protection of Laboratory Animals (Bf3R), German Federal Institute for Risk Assessment, Berlin, Germany
3Department of Veterinary Medicine, Free University of Berlin, Germany

*Email: f.schuessler@tu-berlin.de


Introduction
Neuroscience is undergoing a transition to ever larger and more complex recordings and an accompanying surge of computational models. A quantitative, computational description of behavior, in contrast, is still direly needed [1]. One important aspect of behavior is its temporal structure, which contains rhythmic components (circadian), exponential components with specific time scales (duration of feeding), and components with scale-free temporal dynamics (active motion). A better understanding of how these aspects arise and interact, both in the individual and within a group of animals, is an important stepping stone towards computational models of behavior.
Methods
Here we analyze the temporal statistics of behavior of mice housed in different environments and group sizes. The main analyses are based on RFID detections of antennae placed throughout the housing modules. We make particular use of the statistics of inter-detection intervals (IDIs).
Results
We find that behavior spanning seconds to hours can be separated into three distinct temporal ranges: short (0-2 min), intermediate (2-20 min), and long (>20 min). IDIs in the intermediate and long ranges follow two distinct exponential distributions. Short IDIs are more consistent with a power law or a mixture of multiple time scales. Blocks of successive short IDIs also follow an exponential distribution. We introduce a simple Markov model that reproduces these temporal statistics. Using additional video recordings, we link the temporal regimes to behavior: short IDIs to explorative or interactive behaviors, intermediate IDIs to feeding and grooming, and long IDIs to sleeping.
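As an illustration of how such a three-state model generates a mixture of exponential dwell-time distributions, consider the sketch below; the state labels, dwell-time means, and transition probabilities are illustrative placeholders, not the fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three states: 0 = active, 1 = feeding/grooming, 2 = sleeping (illustrative).
mean_dwell_min = np.array([1.0, 10.0, 60.0])   # mean dwell time per state (minutes)
P = np.array([[0.0, 0.8, 0.2],                 # transition probabilities on leaving a state
              [0.7, 0.0, 0.3],
              [0.9, 0.1, 0.0]])

def simulate(n_transitions=10_000, state=0):
    states, dwells = [], []
    for _ in range(n_transitions):
        dwells.append(rng.exponential(mean_dwell_min[state]))  # exponential dwell
        states.append(state)
        state = rng.choice(3, p=P[state])
    return np.array(states), np.array(dwells)

states, dwells = simulate()
# Dwell times within each state are exponential, so the pooled distribution
# is a mixture of exponentials, qualitatively as reported for the IDI data.
```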
Discussion
Our results show a surprisingly simple structure: behavior on a fast time scale is interrupted by internal demands on slower time scales. Bouts of fast activity are cut off by the need to feed, and longer sequences of activity and feeding are interrupted by the need to sleep. The short-time aspects of behavior match observations of scale-free statistics in previous studies [2, 3], but also show interesting deviations, potentially due to interactions within the group. Taken together, our results open up the possibility of understanding behavior through the lens of simple models, and raise questions about the neural mechanisms underlying the observed structure.

Acknowledgements
We are grateful for funding by the German Research Foundation (DFG) through the Excellence Strategy program (EXC-2002/1 - Project number 390523135).
References
[1] Datta, S. R., Anderson, D. J., Branson, K., Perona, P., & Leifer, A. (2019). Computational neuroethology: a call to action. Neuron, 104(1), 11-24.
[2] Nakamura, T., Takumi, T., Takano, A., Aoyagi, N., Yoshiuchi, K., Struzik, Z. R., & Yamamoto, Y. (2008). Of mice and men—universality and breakdown of behavioral organization. PLoS ONE, 3(4), e2050.
[3] Bialek, W., & Shaevitz, J. W. (2024). Long timescales, individual differences, and scale invariance in animal behavior. Physical Review Letters, 132(4), 048401.


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P274: Dynamics of sensory stimulus representations in recurrent neural networks and in mice
Tuesday July 8, 2025 17:00 - 19:00 CEST
P274 Dynamics of sensory stimulus representations in recurrent neural networks and in mice

Lars Schutzeichel*1,2,3, Jan Bauer1,4,5, Peter Bouss1,2, Simon Musall3, David Dahmen1 and Moritz Helias1,2


1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Germany
2Department of Physics, Faculty 1, RWTH Aachen University, Germany
3Institute of Biological Information Processing (IBI-3), Jülich Research Centre, Germany
4Gatsby Unit for Computational Neuroscience, University College London, United Kingdom
5Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Israel


*Email: lars.schutzeichel@rwth-aachen.de
Introduction

The information about external stimuli is encoded in the responses of neuronal populations in the brain [1,2], forming neural representations of the stimuli. The diversity of responses is reflected in the extent of these neural representations in neural state space (Fig. 1a). In recent years, understanding the manifold structure underlying neuronal responses [3] has led to insights into representations in both artificial [4] and biological networks [5]. Here, we extend this theory by examining the role of recurrent network dynamics in deforming stimulus representations over time and their influence on stimulus separability (Fig. 1b). Furthermore, we assess the information conveyed for multiple stimuli (Fig. 1c).
Methods
We simulate recurrent networks of binary neurons and study their dynamics analytically using a two-replica mean-field theory, reducing the dynamics of complex networks to only three relevant dynamical quantities: the population rate and the representation overlaps within and between stimulus classes. These networks are fit to Neuropixels recordings from the superior colliculus of awake behaving mice. To assess the information conveyed by multiple stimuli, we analyze the mutual information between an optimally trained readout and the stimulus class. To calculate the overlap of representations within and across stimulus classes, we utilize spin glass methods [6].
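For orientation, a minimal sketch of a recurrent binary network and the replica overlap it induces is given below; the network size, coupling statistics, threshold, and update scheme are illustrative assumptions, not the model fitted to the recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

n, g, theta = 500, 1.0, 0.0                   # size, coupling gain, firing threshold
steps = 50 * n                                # number of asynchronous updates
J = rng.normal(0.0, g / np.sqrt(n), (n, n))   # random recurrent couplings

def run(x):
    x = x.copy()
    for _ in range(steps):                    # asynchronous binary-neuron updates
        i = rng.integers(n)
        x[i] = float(J[i] @ x - theta > 0)
    return x

stim = (rng.random(n) < 0.2).astype(float)    # a stimulus-evoked initial state
x1, x2 = run(stim), run(stim)                 # two replicas, same stimulus class
overlap_within = (x1 @ x2) / n                # analogue of the within-class overlap
```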
Results
Stimulus separability and its temporal dynamics are shaped by the interplay of three dynamical quantities: the mean population activity E and the overlaps θ= and θ≠, which represent response variability within and across stimulus classes, respectively (Fig. 1b). For multiple stimuli, there is a trade-off: as the number of stimuli increases, more information is conveyed, but stimuli become less separable due to their growing overlap in the finite-dimensional neuronal space (Fig. 1c). We find that the experimentally observed small population activity R lies in a regime where information grows extensively with the number of stimuli, sharply separated from a second regime in which information converges to zero.
Discussion
Separability is a minimal requirement for meaningful information processing: The signal propagates to downstream areas, where, along the processing hierarchy, representations of different perceptual objects must become increasingly separable to enable high-level cognition. Our theory reveals that sparse coding not only provides a crucial advantage for information representation but is also a necessary condition for non-vanishing asymptotic information transfer. Our work thus provides a novel understanding of how collective network dynamics shape stimulus separability.



Figure 1. Overview. a: Stimulus representations characterized by their distance from the origin R and their extent θ=. b: Temporal evolution of representations of stimuli from two classes. A linear readout quantifies the separability between the classes of stimuli for every point in time. c: The separability measure also determines the information content in the population signal for P≥2 stimuli.
Acknowledgements
This work has been supported by DFG project 533396241/SPP2205
References
[1] https://doi.org/10.1126/science.3749885
[2] https://doi.org/10.1016/j.tics.2013.06.007
[3] https://doi.org/10.1103/PhysRevX.8.031003
[4] https://doi.org/10.1038/s41467-020-14578-5
[5] https://doi.org/10.1016/j.cell.2020.09.031
[6] https://doi.org/10.1088/1751-8121/aad52e
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P275: A Computational Framework for Investigating the Impact of Neurostimulation on Different Configurations of Neuronal Assemblies
Tuesday July 8, 2025 17:00 - 19:00 CEST
P275 A Computational Framework for Investigating the Impact of Neurostimulation on Different Configurations of Neuronal Assemblies


Spandan Sengupta*1, 2, Milad Lankarany1, 2, 3, 4, 5

1Krembil Brain Institute, University Health Network, Toronto, ON, Canada
2Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
3Department of Physiology, University of Toronto, Toronto, ON, Canada
4KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
5Center for Advancing Neurotechnological Innovation to Application (CRANIA), Toronto, ON, Canada

*Email: spandan.sengupta@mail.utoronto.ca
Introduction
Pathological oscillations in brain circuits are a known biomarker of neurological disorders, exemplified by increased power in the beta (12-30 Hz) frequency band in Parkinson’s disease [1]. Neurostimulation techniques like Deep Brain Stimulation (DBS) can disrupt pathological oscillations and improve symptoms [2]. However, how and why different stimulation patterns have different impacts on circuits is not fully understood. Recent studies show stimulation-induced biomarkers such as Evoked Resonant Neural Activity (ERNA) [3] associated with stimulation patterns (e.g. frequency) and circuit motifs (e.g. strength of excitatory-inhibitory connectivity) [4]. To study how stimulation patterns impact different circuit motifs, we developed a computational framework to model the effect of electrical stimulation on the pre- and postsynaptic activity of neurons embedded in neuronal networks.
Methods
We wanted to study the effect of electrical stimulation, and in particular, the effect of different frequencies during and after stimulation on different circuit motifs. We employed spiking neural networks composed of leaky integrate-and-fire (LIF) neurons combined in a variety of excitatory-inhibitory configurations. To model DBS, we implemented perturbations analogous to electrical stimulation. To further explore how electrical stimulation affects pathological oscillations, we used Brunel’s network [5] tuned to show oscillatory activity at specific frequencies. We aim to study how different patterns of stimulation can suppress these oscillations.
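A minimal sketch of this kind of perturbation experiment, with a noisy LIF population receiving brief periodic depolarizing pulses analogous to electrical stimulation, is shown below; all constants are illustrative, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

dt, T = 1e-4, 2.0                    # time step and duration (s)
n, tau, v_th, v_reset = 200, 0.02, 1.0, 0.0
drive, sigma = 0.9, 0.3              # subthreshold mean drive plus noise
f_stim, kick = 100.0, 0.3            # stimulation frequency (Hz), pulse size
period = int(1.0 / (f_stim * dt))    # steps between pulses

v = rng.uniform(0.0, 1.0, n)
rate = np.zeros(int(T / dt))
for t in range(len(rate)):
    # Euler step of the leaky membrane with Gaussian noise
    v += (dt / tau) * (drive - v) + sigma * np.sqrt(dt / tau) * rng.normal(size=n)
    if t % period == 0:
        v += kick                    # stimulation pulse depolarizes all neurons
    fired = v >= v_th
    v[fired] = v_reset
    rate[t] = fired.sum() / (n * dt) # instantaneous population rate (Hz)
```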

Results
Aligned with experimental findings, our simulations demonstrated that continuous high-frequency electrical stimulation induced more suppression of neuronal activity than low-frequency stimulation [6]. In some circuit motifs, we also observed sustained low-frequency oscillatory activity after the high-frequency stimulation had ended (Fig 1B). We aim to characterise the impact of different frequencies on Brunel’s network and their ability to suppress pathological oscillations. We expect to find stimulation patterns that can disrupt these oscillations and qualitatively shift the circuit activity towards a healthier physiological state. We expect these frequencies to depend on the excitatory-inhibitory characteristics of the network.


Discussion
Our study utilizes a simplified model of LIF neurons configured in different motifs, offering a foundational understanding of oscillation modulation with different patterns of electrical stimulation. Future research can expand this model to incorporate more biophysically realistic circuits, such as those found in the hippocampus, critical for memory processing[7], or the basal ganglia, implicated in movement disorders[8]. Investigating these complex circuits will further bridge the gap between computational models and the intricate dynamics of brain networks in health and disease, potentially leading to refined therapeutic strategies.




Figure 1. A: Schematic of a circuit comprised of populations A (exc) and B (inh) that project to O, along with recurrent connections. Electrical stimulation is applied to neurons in A. B: Population firing rate during and after 100 Hz electrical stimulation. Dashed red lines indicate the start and end of stimulation. C: Schematic of the DBS implementation
Acknowledgements
I would like to thank Dr Frances Skinner (University of Toronto) for her supervision and her help in conceptualising this research idea. I would also like to thank Dr Shervin Safavi (Max Planck Institute for Biological Cybernetics) and Dr Thomas Knoesche (Max Planck Institute for Human Cognitive and Brain Sciences) for their help with the modelling and theoretical aspects of this work.

References

1. https://doi.org/10.1152/jn.00697.2006
2. https://doi.org/10.1002/mds.22419
3. https://doi.org/10.1002/ana.25234
4. https://doi.org/10.1016/j.nbd.2023.106019
5. https://doi.org/10.1023/A:1008925309027
6. https://doi.org/10.1016/j.brs.2021.04.022
7. https://doi.org/10.1038/nature15694
8. https://doi.org/10.1016/j.baga.2011.05.001




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P276: Simulation tools for reaction-diffusion modeling
Tuesday July 8, 2025 17:00 - 19:00 CEST
P276 Simulation tools for reaction-diffusion modeling

Saana Seppälä*1, Laura Keto1, Derek Ndubuaku1, Annika Mäki1, Tuomo Mäki-Marttunen1, Marja-Leena Linne1, Tiina Manninen1

1Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland

*Email: saana.seppala@tuni.fi


Introduction
With advancements in computing power and cellular biology, the simulation of reaction-diffusion systems has gained increasing attention, leading to the development of various simulation algorithms. While we have experience testing separate tools for cell signaling and neuronal networks [1–4], we have not extensively evaluated cell-level tools that integrate both reaction and diffusion algorithms or support co-simulation. This study aims to provide a comprehensive assessment of reaction-diffusion and co-simulation tools, including NEURON [5], ASTRO [6], and NeuroRD [7], to determine their suitability for our research needs.


Methods
Most available reaction-diffusion algorithms and tools are based on partial differential equations or the reaction-diffusion master equation simulated using an extended Gillespie stochastic simulation algorithm [8]. In this study, we implement identical diffusion, reaction, and reaction-diffusion models across selected tools, testing both simple and complex cell morphologies. We conduct simulations, compare results across different tools, and evaluate their consistency. Additionally, we assess the usability and suitability of each tool in various simulation settings, including ease of implementing cell morphologies and equations, computational efficiency, and support for co-simulation.
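For reference, the core of the Gillespie stochastic simulation algorithm [8] for a single well-mixed compartment fits in a few lines; the example reactions, rate constants, and copy numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example system: A + B -> C and C -> A + B (illustrative only).
x = np.array([100, 80, 0])          # copy numbers of A, B, C
k = np.array([0.005, 0.1])          # stochastic rate constants
nu = np.array([[-1, -1, +1],        # state change of reaction 1
               [+1, +1, -1]])       # state change of reaction 2

t, t_end = 0.0, 10.0
while t < t_end:
    a = np.array([k[0] * x[0] * x[1], k[1] * x[2]])   # propensities
    a0 = a.sum()
    if a0 == 0:
        break                                          # no reaction can fire
    t += rng.exponential(1.0 / a0)                     # time to next reaction
    r = rng.choice(len(a), p=a / a0)                   # which reaction fires
    x += nu[r]
```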


Results
The simulation algorithms and tools vary significantly in usability and functionality. For instance, some tools support realistic cell morphologies, while others are limited to simplified geometries such as cylinders. Additionally, not all tools allow implementation of reactions involving three reactants, restricting their applicability for certain biological simulations. Despite these differences, a comparison of simulation results across the tools reveals a high degree of similarity, indicating that the underlying models produce consistent outcomes. Furthermore, variations in computational efficiency and ease of implementation are observed, highlighting trade-offs between flexibility, accuracy, and usability across the tools.


Discussion
A thorough understanding of the properties and capabilities of different reaction-diffusion simulation tools is essential for developing more advanced and biologically accurate models. Evaluating these tools provides valuable insights into their strengths and limitations, facilitating the integration of multiple simulation approaches. In particular, this knowledge enables the development of co-simulations that combine reaction diffusion models with spiking network simulations, enhancing the accuracy and scope of computational neuroscience research.




Acknowledgements
This work was supported by the Research Council of Finland (decision numbers 330776, 355256 and 358049), the European Union's Horizon Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project), and the Doctoral School at Tampere University.


References
1. https://doi.org/10.1093/bioinformatics/bti018
2. https://doi.org/10.1155/2011/797250
3. https://doi.org/10.3389/fninf.2018.00020
4. https://doi.org/10.1007/978-3-030-89439-9_4
5. https://doi.org/10.1017/CBO9780511541612
6. https://doi.org/10.1038/s41467-018-05896-w
7. https://doi.org/10.1371/journal.pone.0011725
8. https://doi.org/10.1021/j100540a008
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P277: Relation of metaplasticity with Hebbian, structural and homeostatic plasticities in recurrent neural networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P277 Relation of metaplasticity with Hebbian, structural and homeostatic plasticities in recurrent neural networks

Muhammad Abdul-Amir Shamkhi Al-Shalah1,3, Neda Khalili Sabet2,3, Delaram Eslimi Esfahani3*
1-Department of Secondary Education, Ministry of Education, Babylon Governorate, Iraq
2-Institute of Biology, University of Freiburg, Freiburg im Breisgau, Germany
3-Department of Animal Biology, Faculty of Biological Sciences, Kharazmi University, Tehran, Iran

*Email: eslimi@khu.ac.ir
Introduction

Brain plasticity, which rewires brain tissue and coordinates its activity, takes different forms: Hebbian, structural, homeostatic, and metaplasticity. Each type of plasticity affects rewiring and the flow of influence in the brain at the neuronal and circuit levels. Furthermore, these plasticities interact with one another, and previous studies have not fully covered the relations between all of them.

The objective of this study is to examine and analyse the relations between these plasticities, focusing on the interaction of metaplasticity with Hebbian, structural, and homeostatic plasticity.
Methods
This study uses computer simulations of neural networks to explore the relations between structural, Hebbian, homeostatic, and metaplasticity.
We used artificial neural networks in which neurons are modelled as nodes, synapses as edges, and the different types of plasticity as network features. We chose Python as the programming language and implemented the model with the NEST library, one of the most specialised and advanced tools for computational neuroscience research.
Our model contains 500 neurons, each assigned its own layer; this prevents any single neuron from dominating while connections are built or deleted. We used the LIF (leaky integrate-and-fire) neuron model, or more specifically gif_cond_exp (a generalized integrate-and-fire neuron with multiple synaptic time constants).
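A minimal sketch of this kind of NEST setup (assuming the NEST 3 Python API) might look as follows; the noise rate, connectivity, weights, and the plain STDP synapse are illustrative stand-ins, not the authors' configuration.

```python
import nest

nest.ResetKernel()

pop = nest.Create("gif_cond_exp", 500)                 # generalized integrate-and-fire neurons
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
rec = nest.Create("spike_recorder")

nest.Connect(noise, pop, syn_spec={"weight": 1.0})     # external drive
# recurrent connectivity with plastic (STDP) synapses
nest.Connect(pop, pop,
             conn_spec={"rule": "fixed_indegree", "indegree": 50},
             syn_spec={"synapse_model": "stdp_synapse", "weight": 0.5})
nest.Connect(pop, rec)

nest.Simulate(1000.0)                                  # ms
```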
Results
When we examined the different types of plasticity and the interactions between them, metaplasticity caused the growth of synaptic surpluses, whose amount depends on the stimulation received from inside and outside the network. Structural plasticity, in turn, uses these surpluses to rewire the network and change its connections. Hebbian plasticity, on the other hand, increases or decreases connection strengths as stimulation is received or withdrawn.
Discussion
Finally, homeostatic plasticity exerted control over the network in all phases, returning the network to its original firing rate once the stimulation ended.





Acknowledgements
We must express our appreciation to the Vice Chancellor for Research at Kharazmi University for supporting our research.
References
1. https://doi.org/10.1093/nsr/nwaa129
2. https://doi.org/10.13140/RG.2.2.18527.48803
3. https://doi.org/10.1007/s13194-012-0056-8
4. https://doi.org/10.1111/j.1365-2923.2010.03708.x
5. https://doi.org/10.3390/brainsci11040487
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P278: Speeding-up Distinct Bursting Regimes is Mediated by Separate Coregulation Pathways
Tuesday July 8, 2025 17:00 - 19:00 CEST
P278 Speeding-up Distinct Bursting Regimes is Mediated by Separate Coregulation Pathways

Yousif O. Shams1, Ronald L. Calabrese2, Gennady S. Cymbalyuk1,*


1Neuroscience Institute, Georgia State University, Atlanta, Georgia, 30302, USA.
2Department of Biology, Emory University, Atlanta, Georgia, 30322, USA


*E-mail: gcymbalyuk@gmail.com

Introduction
Central pattern generators (CPGs) control rhythmic behaviors and adapt to behavioral demands via neuromodulation [1]. The leech heartbeat CPG includes mutually inhibitory heart interneurons (HNs) forming half-center oscillators (HCOs) [1]. Myomodulin speeds up HCO bursting by increasing the h-current (I_h) and decreasing the Na+/K+ pump current (I_Pump) [2]. These changes create a coregulation path between dysfunctional regimes [3]. Along this path, a new functional regime, high-spike-frequency bursting (HFB), emerges alongside low-spike-frequency bursting (LFB) [4]. Separately, dynamic clamp experiments, based on the interaction of I_Pump and a persistent current that creates relaxation-oscillator dynamics, also show a transition into high-frequency bursting (HFB_ROd) [5].


Methods

We use experimentally validated Hodgkin-Huxley-style models incorporating Na+ dynamics, which have proven effective in predicting HCO behaviors under various experimental and neuromodulatory conditions [3-6]. We conduct a two-parameter sweep over the maximal I_Pump (I_PumpMax) and the conductance of I_h (g_h) to map the activity regimes. We investigate how neuromodulation affects the HCO cycle period in the LFB and HFB regimes, and we map experimental data onto the map of regimes.
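As an illustration of how each point of such a sweep can be labeled, the sketch below classifies a simulated spike train as LFB or HFB from its interspike intervals; the gap criterion and the frequency threshold are illustrative, not the study's criteria.

```python
import numpy as np

def classify_regime(spike_times, gap=0.5, hfb_freq=15.0):
    """Label a burster from spike times (s): burst period and intraburst rate."""
    spike_times = np.asarray(spike_times, dtype=float)
    isis = np.diff(spike_times)
    starts = np.r_[0, np.where(isis > gap)[0] + 1]   # long gaps delimit bursts
    intra = isis[isis <= gap]                        # within-burst intervals
    if len(starts) < 3 or intra.size == 0:
        return "non-bursting", np.nan, np.nan
    period = float(np.mean(np.diff(spike_times[starts])))  # cycle period (s)
    freq = 1.0 / float(np.mean(intra))                     # intraburst rate (Hz)
    return ("HFB" if freq > hfb_freq else "LFB"), period, freq
```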


Results
Under variation of I_PumpMax and g_h, both the HCO and a single HN show a phase transition between HFB and LFB. In LFB, decreasing I_PumpMax speeds up bursting, consistent with myomodulin neuromodulation [2,3]. In HFB_ROd, increasing I_PumpMax also speeds up bursting, shortening the burst duration and interburst interval in accordance with relaxation-oscillator dynamics [5]. Mapping the experimental cycle periods suggests that myomodulin operates along a coregulation path within the LFB regime. The same mapping reveals a quasi-orthogonal path in which increasing I_PumpMax speeds up bursting within the HFB regime. The transition between the bursting regimes elucidates the effects of monensin. Monensin, a Na+/H+ antiporter, speeds up bursting by raising the intracellular Na+ concentration ([Na+]i), thereby increasing I_Pump [6].


Conclusions
Modeling suggests the emergence of the HFB regime alongside LFB, each with distinct responses to neuromodulation. This captures the paradox of speeding up HCO bursting by either increasing or decreasing I_Pump. The LFB and HFB regimes operate with distinct mechanisms for controlling the bursting cycle period. This distinction arises from intracellular Na+ dynamics: LFB is responsive to coregulation of I_h and I_Pump, whereas HFB operates with relaxation-oscillator dynamics based on [Na+]i. Our results emphasize that transitioning between LFB and HFB enhances the CPG’s robustness and flexibility, allowing for adaptive control of bursting.





Acknowledgements
We acknowledge Georgia State University’s Brains and Behavior program grant to GSC.
References
1. https://doi.org/10.1152/physrev.00003.2024
2. https://doi.org/10.1152/jn.00340.2005
3. https://doi.org/10.1523/JNEUROSCI.0158-21.2021
4. https://doi.org/10.3389/fncel.2024.1395026
5. https://doi.org/10.1523/ENEURO.0331-22.2023
6. https://doi.org/10.7554/eLife.19322
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P279: Bridging In Vitro and In Vivo Neural Data for Ethical and Efficient Neuroscience Research
Tuesday July 8, 2025 17:00 - 19:00 CEST
P279 Bridging In Vitro and In Vivo Neural Data for Ethical and Efficient Neuroscience Research

Masanori Shimono1
[1] Graduate School of Information Science and Technology, Osaka University, Osaka, Japan
Email: m-shimono@ist.osaka-u.ac.jp
Introduction
Neural activity transmits information via binary spike signals, enabling complex brain computations. While this principle is well established, accurately predicting large-scale neural activity patterns remains challenging. Integrating findings from in vitro and in vivo experiments remains an unresolved problem, yet it is crucial for advancing neuroscience and establishing ethical, efficient research methodologies.
Methods
We propose a machine learning-based mutual generation framework to enhance neural activity prediction across experimental paradigms by refining previous methodologies [1]. Specifically, we trained a model using in vitro neural data to predict in vivo activity and vice versa (Fig. 1). The model, built with multi-region neural recordings, employs deep learning architectures optimized for spatiotemporal pattern recognition (Fig. 1c). The method details are related to a patent and will be explained at the venue.
Results
Our results demonstrate accurate prediction of in vivo neural activity from in vitro data and vice versa (Fig. 1e). We also found that data from specific brain regions reliably predict neural activity across multiple areas, suggesting universal principles in brain information processing. These findings have implications for neural modeling, experimental design, and translational neuroscience. Furthermore, high-precision in vivo prediction from in vitro data could reduce animal experimentation, supporting the 3R principles (Replacement, Reduction, Refinement).
Discussion
This study sets a new standard for ethical, reproducible neuroscience research, bridging fundamental neuroscience and clinical applications.
Figure 1. Fig. 1) This figure illustrates the time duration of extracted data and data partitioning for training and testing. (a,b) In vitro (top) and in vivo (bottom) setups. (c,d) 5-minute training and 2.5-minute test segments are used for prediction. Four conditions are tested: in vitro→in vitro, in vivo→in vivo, in vitro→in vivo, and in vivo→in vitro. (e) ROC AUC scores evaluate prediction performance.
Acknowledgements

MS is supported by several MEXT grants (21H01352, 23K18493).

References

[1] Nakajima, R., Shirakami, A., Tsumura, H., Matsuda, K., Nakamura, E., & Shimono, M. (2023). Mutual generation in neuronal activity across the brain via deep neural approach, and its network interpretation. Communications Biology, 6(1), 1105.
Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P280: Cerebellar neural manifold encoding complex eye movements in 2D
Tuesday July 8, 2025 17:00 - 19:00 CEST
P280 Cerebellar neural manifold encoding complex eye movements in 2D

Juliana Silva de Deus1, Akshay Markanday2, Erik De Schutter1, Peter Thier2,Sungho Hong*1,3



1Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
2Department of Cognitive Neurology, Hertie Institute, University of Tübingen, Tübingen, Germany
3Center for Cognition and Sociality, Institute for Basic Science, Daejeon, South Korea

*Email: sunghohong@ibs.re.kr


Introduction
Kinematic parameters of our movements, such as velocity and duration, can undergo random and systematic changes, but movement endpoints can be precisely maintained. The cerebellum is well known for its role in this function [1], but how its neurons concurrently encode the several kinematic parameters necessary for movement precision has been unknown. We recently identified low-dimensional patterns, called the neural manifold, in the activity of cerebellar neurons and showed that those multi-dimensional patterns encoded the peak velocity and duration of 1D eye movements, contributing to flexible control of those parameters [2]. In this study, we investigated how those findings extend to 2D eye movements made in different directions.



Methods
We analyzed the activity of 54 cerebellar Purkinje cells (PCs) from the oculomotor vermis in three adult male rhesus monkeys performing two different saccadic eye movement tasks. In the first, the animals made 15° saccades from a fixation point to a visual target randomly presented at one of ten angles (0°-315°, in 45° intervals). In the second, they performed a cross-axis adaptation task [3] in which initial horizontal jumps of a target from a fixation point were followed by 5° vertical leaps before the primary saccades finished. We analyzed the PC simple spike (SS) activity by identifying its low-dimensional manifold and examining how the manifold varies with saccade angle and complex spike (CS) firing.
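For orientation, one standard way to extract such a low-dimensional manifold from firing rates is principal component analysis; in the sketch below the random array stands in for real simple-spike data, and only d=4 is taken from the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA

# trials x PCs x time bins of smoothed firing rates (placeholder data)
rates = np.random.default_rng(0).random((200, 54, 100))
X = rates.mean(axis=0)                                   # condition-averaged, PCs x time

pca = PCA(n_components=4)                                # d = 4 as in the abstract
latents = pca.fit_transform(X.T)                         # time x 4 latent trajectories
print(pca.explained_variance_ratio_.sum())               # variance captured by d = 4
```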


Results

In many PCs (n=39), CSs fired between 100 ms and 200 ms after target onset with a well-defined preference for certain target directions (θ_CS-ON), confirming the directional nature of CS firing for retinal slips [4-6]. We also identified the PC-SS manifold (d=4, explaining >88% of variance) for saccadic eye movements, with a remarkably simple structure comprising direction-independent latent dynamics and a direction-dependent, multi-dimensional gain field, generalizing previous studies [2,7]. How CS and SS firing depend on movement direction (θ) in individual PCs was too heterogeneous to show a clear correlation (P=0.22). However, we found that the gain field was highly organized as a function of θ − θ_CS-ON but much less so as a function of θ.

Discussion
Together with our previous study [2], these results show that PC population firing has a remarkably simple structure for representing several kinematic parameters of eye movements, such as velocity, duration, and direction, simultaneously and independently via a low-dimensional neural manifold. Our findings suggest that the cerebellar neural circuit generates neural dynamics optimal for flexible and precise control of complex movements with many degrees of freedom.





Acknowledgements
A.M. and P.T. were supported by DFG Research Unit 1847 “The Physiology of distributed computing underlying higher brain functions in non-human primates.” J.S.D., S.H., and E.D.S. were supported by the Okinawa Institute of Science and Technology Graduate University. S.H. was also supported by the Center for Cognition and Sociality (IBS‐R001‐D2), Institute for Basic Science, South Korea.
References
1. https://doi.org/10.1146/annurev-vision-091718-015000
2. https://doi.org/10.1038/s41467-023-37981-0
3. https://doi.org/10.1007/BF00228022
4. https://doi.org/10.1038/33141
5. https://doi.org/10.1523/JNEUROSCI.4658-05.2006
6. https://doi.org/10.1152/jn.90526.2008
7. https://doi.org/10.1038/nature15693




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P281: Data-driven biophysically detailed computational modeling of neuronal circuits with the NeuroML standard and software ecosystem
Tuesday July 8, 2025 17:00 - 19:00 CEST
P281 Data-driven biophysically detailed computational modeling of neuronal circuits with the NeuroML standard and software ecosystem

Ankur Sinha*1, Padraig Gleeson1, Adam Ponzi1, Subhasis Ray2, Sotirios Panagiotou3, Boris Marin4, Robin Angus Silver1

1Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
2TCG CREST, Kolkata, India
3Erasmus University Rotterdam, Rotterdam, Netherlands
4Universidade Federal do ABC, São Bernardo do Campo, Brazil

*Email: ankur.sinha@ucl.ac.uk

Introduction

Computational models are essential for integrating multiscale experimental data into unified theories and generating new testable hypotheses. Realistic models that include biological intricacies of neurons (morphologies, ionic conductances, subcellular processes) are critical tools for gaining a mechanistic understanding of neuronal processes. Their complexity and the disjointed landscape of software for computational neuroscience, however, make model construction, fitting to experimental data, simulation, re-use, and dissemination a considerable challenge. Here, we present NeuroML and show that it accelerates modelling workflows and promotes FAIR (Findable, Accessible, Interoperable, Reusable) and Open computational neuroscience [1].

Methods
NeuroML provides two components: a standard and a software ecosystem. The standard is specified by a two-part schema. The first part constrains the structure of NeuroML models and is used to validate model descriptions and to generate libraries for programming languages. The second part consists of corresponding definitions of the dynamics of model entities in the Low Entropy Model Specification (LEMS) language [2], which allows translation of NeuroML models into simulator-specific formats. The software ecosystem includes libraries and tools for building and working with NeuroML models, in addition to a number of simulation engines and other NeuroML-compliant tools that support different stages of the model life cycle.
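As a rough illustration of the declarative workflow (assuming the libNeuroML Python API), a model element can be created, added to a document, and serialized as follows; the cell type and parameter values are illustrative, not from a published model.

```python
from neuroml import NeuroMLDocument, IzhikevichCell
import neuroml.writers as writers

# Declare a document and one Izhikevich cell (illustrative parameters).
doc = NeuroMLDocument(id="ExampleDoc")
cell = IzhikevichCell(id="izh0", v0="-70mV", thresh="30mV",
                      a="0.02", b="0.2", c="-65.0", d="6.0")
doc.izhikevich_cells.append(cell)

# Serialize to a NeuroML file that any compliant simulator can consume.
writers.NeuroMLWriter.write(doc, "example.cell.nml")
```

The resulting file can then be validated against the NeuroML schema with the ecosystem's tooling before being handed to a simulation engine.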
Results
NeuroML is an established standardised language that provides a simulator independent model representation and accompanying ecosystem of compliant tools that support all stages of the model life cycle: creating, validating, visualising, analysing, simulating, optimising, sharing, re-using models. It provides a curated set of model building blocks for constructing new models and thus also serves as a didactic resource. We demonstrate how NeuroML supports the model life cycle by presenting a number of published NeuroML models in different species (C. elegans, rodents, humans) and different brain regions (cortex, cerebellum), highlighting their scientific contributions. We also list resources on using NeuroML and existing models.
Discussion
NeuroML is a mature standard that has evolved over years of interactions with the computational neuroscience community. The NeuroML community has strong links with simulator development communities to ensure that NeuroML remains up to date with the latest modelling requirements, and that tools remain NeuroML compliant. NeuroML also ensures that it remains extensible to cater to modelling entities that are not yet part of the standard. NeuroML also links to other neuroscience initiatives (PyNN, SONATA[3]), systems biology standards (SBML, SED-ML) and machine learning/AI formats (Model Description Format[4]) to promote interoperability. Finally, a large archive of published standardised models supports re-use of existing models.




Acknowledgements
We thank all members of the NeuroML community who have contributed to the development of the standard and the software ecosystem over the years.
References
1. https://doi.org/10.7554/eLife.95135
2. https://doi.org/10.3389/fninf.2014.00079
3. https://doi.org/10.1371/journal.pcbi.1007696
4. https://doi.org/10.1007/s10827-024-00871-5



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P282: Population-level mechanisms of model arbitration in the prefrontal cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P282 Population-level mechanisms of model arbitration in the prefrontal cortex

Jae Hyung Woo1*, Michael C Wang1*, Ramon Bartolo2, Bruno B. Averbeck3, Alireza Soltani1+

1Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
2The Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
3Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD 20892, USA
+Email: alireza.soltani@gmail.com
Introduction

One of the biggest challenges of learning in naturalistic settings is that every choice option involves multiple attributes or features, each of which could potentially predict the outcome. To make decisions accurately and efficiently, the brain must attribute the outcome to the relevant features of a choice or action, while disregarding the irrelevant ones. To manage this uncertainty, it is proposed that the brain maintains several internal models of the environment––each predicting outcomes based on different attributes of choice options––and utilizes the reliability of these models to select the appropriate one to guide decision making [1-3].


Methods
To uncover computational and neural mechanisms underlying model arbitration, we reanalyzed data from high-density recordings of the lateral prefrontal cortex (PFC) activity in monkeys performing a probabilistic reversal learning task with uncertainty about the correct model of the environment. We constructed multiple computational models based on reinforcement learning (RL) to fit choice behavior on a trial-by-trial basis, which allowed us to infer animals’ learning and arbitration strategies. We then used estimates based on the best-fitting model to identify single-cell and population-level neural signals related to learning and arbitration in the lateral PFC.
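A minimal sketch of reliability-based arbitration between a stimulus-based and an action-based learner is given below; the update rules and all constants are illustrative stand-ins, not the study's best-fitting model.

```python
import numpy as np

alpha, alpha_w, beta = 0.2, 0.1, 5.0
q_stim = np.zeros(2)          # values of two stimuli
q_act = np.zeros(2)           # values of two actions
w = 0.5                       # arbitration weight (1 = fully stimulus-based)

def choose(stim_of_action):
    """Softmax choice over arbitration-weighted combined values."""
    q = w * q_stim[stim_of_action] + (1 - w) * q_act
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    return np.random.choice(2, p=p)

def learn(action, stim_of_action, reward):
    global w
    stim = stim_of_action[action]
    pe_s = reward - q_stim[stim]      # stimulus-based prediction error
    pe_a = reward - q_act[action]     # action-based prediction error
    q_stim[stim] += alpha * pe_s
    q_act[action] += alpha * pe_a
    # shift arbitration toward the learner with the smaller unsigned error
    w = float(np.clip(w + alpha_w * (abs(pe_a) - abs(pe_s)), 0.0, 1.0))
```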

Results
We found evidence of dynamic, competitive interactions between stimulus-based and action-based learning, alongside single-cell and population-level representations of the arbitration weight. Arbitration enhanced task-relevant variables, suppressed irrelevant ones, and modulated the geometry of PFC representations by aligning differential value axes with the choice axis when relevant and making them orthogonal when irrelevant. Reward feedback emerged as a potential mechanism for these changes, as reward enhanced the representation of relevant differential values and choice while adjusting the alignment between differential value and choice subspaces according to the adopted learning strategy.

Discussion
Overall, our results shed light on two major mechanisms for the dynamic interaction between model arbitration and value representation in the lateral PFC. Moreover, they provide evidence for a set of unified computational and neural mechanisms for behavioral flexibility in naturalistic environments, where there is no cue that explicitly signals the correct model of the environment.




Acknowledgements
None
References

1. https://doi.org/10.1038/s41386-021-01123-1
2. https://doi.org/10.1016/j.neubiorev.2020.10.022
3. https://doi.org/10.1038/s41386-021-01108-0
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P283: A biophysical model of CA1 pyramidal cell heterogeneity in memory stability and flexible decision-making
Tuesday July 8, 2025 17:00 - 19:00 CEST
P283 A biophysical model of CA1 pyramidal cell heterogeneity in memory stability and flexible decision-making

Fei Song*1,2, Bailu Si3,4


1State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, Liaoning, China
2University of Chinese Academy of Sciences, Beijing, Beijing, China
3School of Systems Sciences, Beijing Normal University, Beijing, Beijing, China
4Chinese Institute for Brain Research, Beijing, Beijing, China


*Email: songfeo20160903@gmail.com
Introduction

The entorhinal-hippocampal system is essential for spatial memory and navigation. CA1 integrates spatial and non-spatial inputs via two pathways, the perforant path (PP) and the temporoammonic path (TA), processed by pyramidal cells (PCs) [1]. We propose a biophysical model with simple PCs (sPCs) and complex PCs (cPCs) [2]. Simulations in novel environments (Fig. 1a) show that sPCs maintain stable spatial coding, while cPCs integrate spatial and attentional inputs, supporting decision-making. In familiar settings (Fig. 1b), cPCs adapt to changes while sPCs preserve stable encoding, enabling memory retention and comparison of past and new experiences. This model unifies CA1’s roles in memory and decision-making.

Methods
We model CA1 as a two-layer network: deep-layer sPCs receive MEC input, while superficial-layer cPCs integrate MEC and LEC signals. Synaptic plasticity follows Hebbian learning, with SC weights adapting via dendritic-somatic co-activation and TA weights via rate-dependent learning, constrained by proximal-distal gradients. Simulations include a 10m track and 5m open-field, where MEC provides grid-cell input, LEC encodes egocentric cues, and CA3 supplies place-cell activity [3,4]. Memory recovery is evaluated via place field stability (JS distance), while stimulus-specific information quantifies spatial and attentional encoding variability [5]. A population decoder (MLP) predicts location and attention from CA1 activity.
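The place-field stability measure can be illustrated in a few lines; the sketch below applies SciPy's Jensen-Shannon distance to placeholder rate maps standing in for real data.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)

# Normalized rate maps of one cell in two conditions (placeholders).
map_a = rng.random(100); map_a /= map_a.sum()   # e.g. before the change
map_b = rng.random(100); map_b /= map_b.sum()   # e.g. after the change

stability = jensenshannon(map_a, map_b)         # 0 = identical place fields
```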
Results
CA1 supports flexible decision-making by integrating spatial and perceptual information. In novel environments, sPCs ensure spatial stability, while cPCs encode stimulus-specific cues. A proximal-distal gradient in cPCs appears with fixed cues but disappears with moving cues, confirming their adaptive role. Population decoding shows cPCs excel in attention tracking, while sPCs maintain spatial coding. CA1 also aids memory updating. When CA3 recall is incomplete, CA1 preserves past memories longer than expected, slowing decay. When TA introduces novelty, cPCs encode new inputs while sPCs retain old ones, enabling stable yet adaptive memory processing. This mirrors real-world experiences, such as recognizing familiar but altered locations.
Discussion
Our model captures CA1 neuron heterogeneity and projection preferences in decision-making and memory updating. However, it simplifies CA3’s proximodistal heterogeneity, where pattern separation (proximal) and completion (distal) may influence CA1 dynamics [6]. Future work should refine CA3 input representation. CA1’s dual-pathway structure aligns with cognitive map theory, where novel environments require integration, while familiar ones involve consolidation. This parallels the Tolman-Eichenbaum Machine (TEM) model of hippocampal function [7]. The dual-pathway structure may reflect a generalized neuronal computation mechanism, extending beyond navigation and memory to broader cognitive functions.




Figure 1. Fig. 1 Functional Framework of the Hippocampus. (a) CA1 supports flexible decision-making in novel environments by integrating sensory inputs and generating context-specific representations. (b) CA1 facilitates memory updating in familiar environments by comparing stored memories with current experiences.
Acknowledgements
Not applicable.
References
1. https://doi.org/10.1038/nn.2894
2. https://doi.org/10.1038/nn.4517
3. https://doi.org/10.1016/j.neucom.2020.10.013
4. https://doi.org/10.1007/BF00237147
5. https://api.semanticscholar.org/CorpusID:10081513
6. https://doi.org/10.1371/journal.pbio.2006100
7. https://doi.org/10.1016/j.cell.2020.10.024



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P284: Subthalamic LFP Spectral Decay Captures Movement-Related Differences Between Parkinson’s Disease Phenotypes
Tuesday July 8, 2025 17:00 - 19:00 CEST
P284 Subthalamic LFP Spectral Decay Captures Movement-Related Differences Between Parkinson’s Disease Phenotypes

Luiz Ricardo Trajano da Silva1, Maria Sheila Guimarães Rocha2, Slawomir Nasuto3, Bradley Voytek4, Fabio Godinho5, Diogo Coutinho Soriano*1

1Center of Engineering, Modeling and Applied Social Sciences, Federal University of ABC (UFABC), São Bernardo do Campo, Brazil
2Department of Neurology, Santa Marcelina Hospital, São Paulo, Brazil
3University of Reading, Berkshire, United Kingdom
4Department of Cognitive Science, Halıcıoğlu Data Science Institute, University of California, San Diego, La Jolla, CA, USA
5Division of Neurosurgery, Department of Neurology, Hospital das Clínicas, University of São Paulo Medical School, São Paulo, Brazil

*Email: diogo.soriano@ufabc.edu.br
Introduction

Parkinson’s Disease (PD) is a heterogeneous neurodegenerative disorder characterized by a wide range of motor and non-motor symptoms [1]. Movement disorder specialists classify PD into subtypes, including tremor dominant (TD) and postural instability and gait disorder (PIGD) [2, 3, 4]. One promising robust biomarker for deep brain stimulation (DBS) therapy is the 1/f^χ spectral decay observed in local field potentials (LFPs). This decay has been linked to the excitatory/inhibitory synaptic balance, providing valuable insights into neuronal circuit dynamics [5, 6, 7, 8]. Therefore, this study explores changes in the spectral decay across rest and movement conditions in different PD phenotypes, aiming to advance personalized DBS strategies.
Methods
STN-LFP recordings from 35 hemispheres (15 TD, 20 PIGD) during rest and movement (elbow extension and flexion) conditions (1 minute each) were acquired during the intraoperative procedure for implanting DBS electrodes. Welch periodograms and spectral parametrization, as proposed in [5], were used to estimate the aperiodic-adjusted low beta (13–22 Hz) and high beta (22–35 Hz) band powers of the LFP (i.e., corrected for the 1/f^χ background) and the spectral decay parameter χ. Mixed ANOVA was used to evaluate differences between subtypes and rest/movement conditions. The procedure was approved by the ethics committee for research in human beings (CAAE: 62418316.9.2004.0066).
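A minimal sketch of this pipeline, combining SciPy's Welch estimator with the FOOOF spectral parameterization tool of [5], is shown below; the random trace and the fit settings are placeholders, not the study's settings.

```python
import numpy as np
from scipy.signal import welch
from fooof import FOOOF

fs = 1000
lfp = np.random.default_rng(0).standard_normal(60 * fs)   # stands in for a 1-min STN-LFP trace
freqs, psd = welch(lfp, fs=fs, nperseg=2 * fs)            # Welch periodogram

fm = FOOOF(peak_width_limits=(1, 12), max_n_peaks=6)      # illustrative settings
fm.fit(freqs, psd, freq_range=(3, 45))
offset, chi = fm.aperiodic_params_                        # chi = 1/f^chi decay exponent
```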
Results
Figure 1 shows the parametrized spectral decay for the TD (A) and PIGD (B) phenotypes, the respective PSDs adjusted for the 1/f^χ background (panels C and D), and box plots of the rhythm band powers and spectral decay (E, F, G). Low beta power showed an interaction between phenotype and motor condition (F(1,33) = 6.67, p = 0.014), with a significant decrease during movement (p = 0.003) in the TD group. High beta band power showed a marginal phenotype effect at rest (F(1,33) = 3.39, p = 0.07). The spectral decay exponent also showed an interaction between phenotype and motor condition (F(1,33) = 5.67, p = 0.02), with post-hoc analysis revealing a marginal phenotype difference during movement (p = 0.088).
Discussion
Spectral parameterization revealed significant differences between the TD and PIGD subtypes, highlighting distinct neuronal dynamics in the subthalamic nucleus (STN) during movement (elbow flexion). Our findings indicate that beta-band suppression during movement, as documented in previous studies [9–12], is predominantly driven by TD patients. Conversely, the PIGD group showed increased high-beta activity, which has been linked to motor rigidity symptoms [13], along with a steeper aperiodic exponential decay, suggesting a more inhibited synaptic balance in the STN during movement. These results highlight the potential of spectral decay components as biomarkers for personalized DBS strategies for PD patients.





Figure 1. Figure 1 – Aperiodic-adjusted and Aperiodic Component PSDs and Grouped Boxplot for subtype and rest/movement conditions. A and B, aperiodic component PSD for TD and PIGD groups, respectively. C and D, Aperiodic-adjusted PSDs for TD and PIGD groups, respectively. E, F, and G, Boxplot for subtype and rest/movement conditions exhibiting mixed-ANOVA results. (.) 0.05 < 𝘱 < 0.1 ;*𝘱 < 0.05;**𝘱 < 0.01
Acknowledgements
Authors acknowledge the financial support of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES)- Finance Code 001 and CNPq (grant number 313970/2023-8).
References
1. https://doi.org/10.1038/s41582-021-00486-9
2. https://doi.org/10.1016/S0140-6736(21)00218-X
3. https://doi.org/10.1002/acn3.312
4. https://doi.org/10.1016/j.parkreldis.2019.05.024
5. https://doi.org/10.1038/s41593-020-00744-x
6. https://doi.org/10.1038/s41531-018-0068-y
7. https://doi.org/10.1016/j.neuroimage.2017.06.078
8. https://doi.org/10.1523/JNEUROSCI.2041-09.2009
9. https://doi.org/10.1016/j.expneurol.2012.05.013
10. https://doi.org/10.1093/brain/awh106
11. https://doi.org/10.1002/mds.10358
12. https://doi.org/10.1093/brain/awf135
13. https://doi.org/10.1002/mds.26759

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P285: A two-region recurrent neural network reproduces the cortical dynamics underlying subjective visual perception
Tuesday July 8, 2025 17:00 - 19:00 CEST
P285 A two-region recurrent neural network reproduces the cortical dynamics underlying subjective visual perception

Artemio Soto-Breceda1, Nathan Faivre2, João Barbosa3,4, Michael Pereira1


1Univ. Grenoble Alpes, Inserm, Grenoble Institut Neurosciences, 38000 Grenoble, France
2Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
3Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
4Institut de Neuromodulation, GHU Paris Psychiatrie et Neurosciences, Centre Hospitalier Sainte-Anne, Université Paris Cité, Paris, France
Introduction

This study aims to model the cortical activity associated with the detection of visual stimuli, as well as the subjective duration of visual percepts and associated confidence. We propose a two-region neural network model: a sensory region integrating sensory inputs and a decision region with longer integration timescales. The model is constrained by biological parameters to simulate region-dependent temporal integration and includes top-down feedback and excitation-inhibition balance to test hypotheses on the neural basis of perception.


Methods
The model consists of a recurrent rate-based neural network of excitatory (80%) and inhibitory (20%) neurons with GABA, AMPA, and NMDA synapses. The sensory region receives and integrates sensory inputs and projects to a decision region with longer integration timescales. This decision region defines whether and when a near-threshold stimulus is detected. The dynamics of the simulated neural activity in the sensory region were compared to the dynamics of neural activity consisting of local field potentials recorded using stereotaxic EEG in humans undergoing epilepsy monitoring and associated behavioral measures of detection, response times, subjective confidence and subjective duration collected with a time-reproduction task [1].
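A minimal sketch of the two-region architecture is given below: a fast sensory region driving a slower decision region with recurrent self-excitation and top-down feedback. All constants are illustrative placeholders rather than the fitted model, and the rate abstraction stands in for the full conductance-based network.

```python
import numpy as np

dt = 1e-3
tau_sens, tau_dec = 0.05, 0.5        # decision region integrates more slowly
w_ff, w_fb, w_rec = 1.0, 0.3, 1.2    # feedforward, feedback, recurrent weights
theta = 0.3                          # detection threshold on decision activity

def f(x):                            # rate nonlinearity
    return np.tanh(np.maximum(x, 0.0))

def stim(t):                         # brief near-threshold input pulse
    return 0.8 if 0.2 < t < 0.3 else 0.0

r_sens, r_dec, detected_at = 0.0, 0.0, None
for step in range(int(5.0 / dt)):
    t = step * dt
    r_sens += dt / tau_sens * (-r_sens + f(stim(t) + w_fb * r_dec))
    r_dec  += dt / tau_dec  * (-r_dec  + f(w_ff * r_sens + w_rec * r_dec))
    if detected_at is None and r_dec >= theta:
        detected_at = t              # the pulse tips the decision region into
                                     # a persistent "detected" state
```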
Results
The model successfully replicated key behavioral metrics. Qualitatively, simulated activity in the decision region matched high-gamma activity recorded in the anterior insula, while sensory region activity aligned with activity in the inferior temporal cortex during a face detection task. We find that, for example, temporal integration in sensory regions explains the magnitude-duration illusion, where higher intensity stimuli are perceived as longer. We also examined model predictions when altering the E/I ratio by changing the synaptic strength of NMDA receptors in either the excitatory or inhibitory population [2], or modulating the top-down feedback. We intend to test alternative models corresponding to different hypotheses on how temporal integration explains subjective aspects of perception such as duration and confidence.
Discussion
Many studies have provided computational models of perceptual decision-making. However, the neuronal mechanisms underlying the subjective aspects of perception remain poorly understood. Here, starting from a model of decision-making [3], we harness temporal properties of these subjective aspects of perception to isolate the underlying neuronal mechanism. The model is able to predict behavior in perceptual decision-making tasks. This model allows us to investigate how biological parameters such as E/I balance or top-down feedback affect behavior and cortical activity during perceptual decision-making tasks. We will interpret our findings in the context of current theories of consciousness.





Acknowledgements
-
References
1. https://doi.org/10.1101/2024.03.20.585198
2. https://doi.org/10.1523/JNEUROSCI.1371-20.2021
3. https://doi.org/10.1016/S0896-6273(02)01092-9




Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P286: Learning in Visual Cortex: Sparseness, Balance, Decorrelation, and the Parameters of the Leaky Integrate-and-Fire Model.
Tuesday July 8, 2025 17:00 - 19:00 CEST
P286 Learning in Visual Cortex: Sparseness, Balance, Decorrelation, and the Parameters of the Leaky Integrate-and-Fire Model.

Martin J. Spencer1*, Marko A. Ruslim1*, Hinze Hogendoorn2, Hamish Meffin1, Yanbo Lian1, Anthony N. Burkitt1,3

1Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria 3010, Australia
2School of Psychology and Counselling, Queensland University of Technology, Kelvin Grove, Queensland 4059, Australia
3Graeme Clark Institute for Biomedical Engineering, University of Melbourne, Melbourne, Victoria 3010, Australia

*Equal first authors. Email: martin.spencer@unimelb.edu.au
Introduction: Sparseness is a known property of information representation in the cortex [1]. A sparse neural code represents the underlying causes of sensory stimuli and is resource efficient [2]. Computational models of sparse coding in the visual cortex typically use an objective function with an information maximization term and a neural activity minimization term, a top-down approach [3]. In contrast, this study trained a spiking neural network using Spike-Timing-Dependent Plasticity (STDP) learning rules [4]. The resulting sparseness, decorrelation, and balance in the network were then quantified, a bottom-up approach [5]. To confirm the mechanisms of sparseness, results were replicated across three models of increasing complexity.
Methods: A biologically grounded V1 model was made up of separate populations of excitatory and inhibitory Leaky Integrate-and-Fire (LIF) neurons with all-to-all connectivity via delta-current synapses. Input was provided by Poisson neurons with spike rates representing the output of separate ON and OFF neurons, calculated using a centre-surround whitening filter applied to natural images.
The V1 LIF neuron spike rates were maintained at a target rate using homeostatic threshold adjustment. Synaptic weights were adjusted using a triplet STDP rule [4] for the excitatory-excitatory synapses and a symmetric STDP rule for other connections. Learning was normalised using subtractive and multiplicative normalisation.
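Two of the ingredients above admit compact illustrations; a sketch, with assumed constants (the target rate, learning rate, and the exact definition of the sparseness measure ς used here may differ):

    import numpy as np

    # Homeostatic threshold adjustment: nudge each neuron's firing threshold so
    # its running-average rate approaches the target rate (constants assumed).
    def update_thresholds(thresholds, rates, target_rate=5.0, eta=1e-3):
        return thresholds + eta * (rates - target_rate)

    # One common lifetime-sparseness measure (Vinje & Gallant style); values
    # near 1 indicate a sparse code. The abstract's ς may be defined differently.
    def sparseness(r):
        r = np.asarray(r, dtype=float)
        n = r.size
        return (1 - r.mean() ** 2 / np.mean(r ** 2)) / (1 - 1.0 / n)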
Results: Training was performed using 1200 batches of 100 natural image patches presented for 400 ms each (~11 hours). There were 512 LGN neurons (256 ON, 256 OFF) and 500 V1 neurons (400 excitatory, 100 inhibitory). The network achieved a sparse representation, and the level of sparseness depended on the parameters of the LIF model. These mechanisms were additionally explored in a simple single-neuron model and a computationally efficient smaller model (Figure 1). Decorrelation was observed to result from the weights chosen by STDP. 'Loose' and 'tight' balance were confirmed by comparing the relative strengths of excitatory and inhibitory input.
Discussion: In the biologically grounded V1 model, balance was maintained across long (~1 s) and short (~10 ms) timescales. Pairs of neurons whose receptive fields were highly correlated showed correspondingly high mutual inhibition, leading to diversity and information maximization in the network.
In all three models, higher sparseness (ς) was caused by lower output spike rates in the LIF neurons (Figure 1A and C, efficient model). In the efficient and biologically grounded models this was associated with more Gabor-like receptive fields (Figure 1B and D). Other parameters of the LIF model were also examined, including the membrane time constant, input spike rate, and number of inputs.



Figure 1. (A) Sparseness (ς) measured in the computationally efficient V1 neuron model of 64 neurons with a 5 Hz target mean spike rate. (B) Associated normalised synaptic weights to 9 V1 neurons from the ON (red) and OFF (blue) input neurons. (C-D) As in (A-B), with a 30 Hz target mean spike rate.
Acknowledgements

This work was supported by an Australian Research Council Discovery Grant (DP220101166).
References
[1] - https://doi.org/10.1038/s41467-020-14645-x
[2] - https://doi.org/10.1038/srep17531
[3] - https://doi.org/10.1038/381607a0
[4] - https://doi.org/10.1523/JNEUROSCI.1425-06.2006
[5] - https://doi.org/10.1101/2024.12.05.627100
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P287: Pulsatile and direct current stimulation in functional network model of decision making
Tuesday July 8, 2025 17:00 - 19:00 CEST
P287 Pulsatile and direct current stimulation in functional network model of decision making

Cynthia Steinhardt*1,2, Paul Adkisson3, Gene Fridman3
1 Simons Society of Fellows, Junior Fellow, New York, New York 10010
2 Center for Theoretical Neuroscience, Zuckerman Brain Science Institute, Columbia University, New York, New York 10027
3 Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland 21287
*Email: cs4248@columbia.edu

Introduction
Pulsatile stimulation has been the primary method for neural implants in sensory restoration and neuropathology treatment (e.g., Parkinson’s, epilepsy) since the first neural implant [1]. Recently, non-invasive transcranial direct/alternating current (DC/AC) stimulation has gained interest, offering broader accessibility without surgery. However, to be viable, effective non-invasive alternatives must match or exceed the efficacy of implants in modulating neural circuits. Pulsatile and DC stimulation effects in complex networks have not been directly compared due to the need for detailed biophysical models. We address this gap.
Methods
Our prior work showed that pulsatile stimulation alters firing patterns in single neurons in complex ways depending on pulse parameters and spontaneous activity [3]. Similarly, we modeled and characterized the effects of DC stimulation on single neurons [4]. Here, we extend these models, modifying leaky integrate-and-fire (LIF) models to include approximations of these effects so that we can accurately simulate local stimulation in a 1000-neuron network. We simulate pulsatile and DC stimulation at equivalent local dosing levels and at behaviorally equivalent levels and compare network effects in a winner-take-all decision-making circuit for motion detection.
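As a toy illustration of the comparison (not the biophysically detailed approximations of [3,4]), one can drive a single LIF unit with a DC input and with a suprathreshold pulse train of matched mean current; all values below are assumptions:

    # Sketch: one LIF neuron under DC drive vs. a pulse train with the same
    # mean current. Parameters are illustrative, in dimensionless drive units.
    dt = 1e-5              # s
    T = int(0.5 / dt)      # 0.5 s of simulated time
    tau_m, v_th, v_reset = 0.02, 1.0, 0.0

    def run_lif(current):
        v, spikes = 0.0, 0
        for t in range(T):
            v += dt / tau_m * (-v + current(t * dt))
            if v >= v_th:
                v, spikes = v_reset, spikes + 1
        return spikes / (T * dt)                 # firing rate in Hz

    dc = lambda t: 1.2                            # constant suprathreshold drive
    rate, width = 200.0, 2e-4                     # pulses/s, pulse width (s)
    amp = 1.2 / (rate * width)                    # same mean current as the DC case
    pulses = lambda t: amp if (t % (1 / rate)) < width else 0.0

    print("DC rate (Hz):   ", run_lif(dc))
    print("pulse rate (Hz):", run_lif(pulses))

Even at matched mean current, the two drives produce different firing rates and spike timing, which is the single-neuron seed of the network-level differences reported below.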
Results
The network processes moving dots and determines whether the majority are moving left or right. We identified pulse rates for suprathreshold pulses that match DC stimulation’s effects on the firing rate in the left-motion detection part of the network. At this level, pulsatile stimulation induced a stronger, faster bias toward leftward decisions. When matched for behavioral bias, pulsatile stimulation resisted feedback inhibition and had conflicting effects with recurrent feedback. DC stimulation, in contrast, propagated through the network more strongly due to recurrent excitation but was more affected by feedback inhibition [5].
Discussion
This study provides the first direct comparison of how pulsatile and DC stimulation influence network activity up to the behavioral level, using accurate approximations of electrical stimulation. We show that these two forms of stimulation interact differently with network dynamics, suggesting different therapeutic applications. Additionally, we present open-access tools for modeling, which could enhance patient-specific disease models. These tools allow for mechanistic insights beyond the LIF and threshold models currently used.



Acknowledgements
We thank the Simons Society of Fellows (965377), Gatsby Charitable Trust (GAT3708), Kavli Foundation, and NIH (R01NS110893) for support.
References
1. Loeb, G. E. (2018). Neural Prosthetics. Appl Bionics Biomech, 2018, 1435030.
2. Giordano, J., et al. (2017). Mechanisms of tDCS. Dose-Response, 15(1), 1559325816685467.
3. Steinhardt, C. R., et al. (2024). Pulsatile stimulation disrupts firing. Nat Commun, 15(1), 5861.
4. Steinhardt, C. R., & Fridman, G. Y. (2021). DC effects on afferents. iScience, 24(3).
5. Adkisson, P. W., et al. (2024). Galvanic vs. pulsatile effects. J Neural Eng, 21(2), 026021.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P288: Modelling the temperature profile of the retina in response to nanophotonic neuromodulation
Tuesday July 8, 2025 17:00 - 19:00 CEST
P288 Modelling the temperature profile of the retina in response to nanophotonic neuromodulation

Daniel B Richardson1, James Begeng1, Paul R Stoddart1, Tatiana Kameneva*1,2




1Department of Engineering Technologies, School of Engineering, Swinburne University of Technology, Australia
2Iverson Health Innovation Institute, Swinburne University of Technology, Australia

*Email: tkam@swin.edu.au



Electrical stimulation of neurons has been used as a reliable technique to elicit action potentials in implantable devices. Recently, novel optical stimulation techniques have been developed as alternatives to electrical stimulation. One approach involves applying near infrared wavelengths of light to stimulate neurons. Neurophotonic stimulation may increase the resultant visual acuity compared to electrical stimulation as it does not apply any current and thus has no current spread. As a result of applying nanophotonic stimulation, the retina experiences an increase in temperature. For this reason, modelling the temperature profile within the retina is vital in testing the feasibility of optical stimulation techniques.
Step 1: To model the temperature profile in a retina environment, a Monte Carlo simulation was implemented in MATLAB. The environment consisted of four layers: water, gold nanorods, retinal tissue, and a layer of glass. A 750 nm beam was used to simulate near infrared stimulation at varying powers that matched the experimental values of Begeng et al. (2023). Each layer had specified coefficients obtained from the literature, which included the absorption and scattering coefficients, scattering anisotropy, volumetric heat capacity, and thermal conductivity. The simulation models the temperature profile through finite element modelling of the defined geometry. It determines the temperature by tracking the photon paths of the stimulation beam, monitoring how the beam progresses through the tissues via their varying scattering coefficients and refractive indexes. It then models the fluorescence and absorption of the tissues through probabilistic determination. The number of photons absorbed, and the associated power, are then used in conjunction with the heat equation to determine the temperature.
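A drastically simplified, scattering-free sketch of the absorbed-power-to-temperature step (Beer-Lambert attenuation plus a lumped heat capacity per layer, ignoring conduction); every coefficient below is a placeholder, not a value from the study:

    import numpy as np

    # Attenuate a beam through layered media and convert the power absorbed
    # in each layer into a local temperature rise. Placeholder coefficients.
    layers = [  # (name, thickness m, absorption coeff 1/m, vol. heat capacity J/(m^3 K))
        ("water",    1e-4, 50.0,  4.2e6),
        ("nanorods", 1e-6, 5e4,   2.5e6),
        ("retina",   2e-4, 100.0, 3.8e6),
    ]
    beam_power = 1e-3   # W
    beam_area = 1e-8    # m^2 (spot size)
    pulse_dur = 1e-3    # s

    p = beam_power
    for name, dz, mu_a, c_v in layers:
        absorbed = p * (1 - np.exp(-mu_a * dz))      # W absorbed in this layer
        volume = beam_area * dz
        dT = absorbed * pulse_dur / (c_v * volume)   # K, conduction neglected
        p -= absorbed
        print(f"{name:9s} dT = {dT:.3f} K")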
Step 2: A single-compartment Hodgkin-Huxley model of a temperature-sensitive rat RGC was constructed in the NEURON simulation environment. The model uses the Gouy-Chapman-Stern theory of temperature-variant bilayer capacitance, and experimentally-derived temperature dependence for key sodium, potassium, calcium and leak ion channels, as well as cytosolic resistance. Thermal profiles for the pulse durations were approximated using the thermal model described in Step 1.

The simulated temperature profile demonstrated general agreement with the experimental results, showing comparable peak temperatures and a consistent trend across the varying pulse durations. Furthermore, the proposed temperature model allows estimation of the temperature profile on the retinal surface, which is difficult to measure experimentally. The Hodgkin-Huxley model replicated the main features of nanophotonic stimulation, including an initial subthreshold depolarisation hump followed by an action potential, and inhibition and excitation phenomena that depended on the pulse duration.





Acknowledgements
-
References
Begeng JM, Tong W, Rosal B, Ibbotson M, Kameneva T, Stoddart PR (2023) Activity of retinal neurons can be modulated by tunable near-infrared nanoparticle sensors. ACS Nano, 17(3), 2079–2088. https://doi.org/10.1021/acsnano.2c07663
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P289: When Can Activity-Dependent Homeostatic Plasticity Maintain Circuit-Level Dynamic Properties with Local Activity Information?
Tuesday July 8, 2025 17:00 - 19:00 CEST
P289 When Can Activity-Dependent Homeostatic Plasticity Maintain Circuit-Level Dynamic Properties with Local Activity Information?

Lindsay Stolting*1, Randall D. Beer1

1Cognitive Science Department, Indiana University, Bloomington, IN, USA

*Email: lstoltin@iu.edu

Introduction

Neural circuits are remarkably robust to perturbations that threaten their function. One mechanism behind this robustness is activity-dependent homeostatic plasticity (ADHP), which tunes neural membrane and synaptic properties to ensure moderate and sustainable average activity levels [1]. The dynamics of behaving neural circuits, however, must often satisfy stricter requirements than just reasonable activity levels. For instance, successful behavior may require a specific temporal structure or phasing relationships between neurons, properties which cannot be specified by time-averaged activity information at the single-neuron level. How, then, does ADHP maintain such properties?

Methods
We explored this question in a computational model of the crustacean pyloric pattern generator, which exhibits a triphasic burst rhythm [2]. We stochastically optimize 100 continuous-time recurrent neural networks to match pyloric burst ordering, then add ADHP to these models by placing two network parameters under homeostatic control. These parameters are tuned according to the temporally averaged activity of the corresponding neuron, relative to some target range [3]. The averaging window and target range are stochastically optimized 10 times for each pyloric network, with the goal of parameterizing an ADHP mechanism that recovers pyloricness after perturbation of the controlled parameters.
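A minimal sketch of such a homeostatic update, assuming a simple proportional rule; the optimized averaging windows, target ranges, and gains in the study may differ:

    import numpy as np

    # ADHP sketch: a controlled parameter drifts whenever the neuron's
    # windowed average activity leaves its target range. Constants assumed.
    def adhp_step(param, activity_window, lo=0.2, hi=0.8, eta=1e-3):
        a = np.mean(activity_window)     # temporally averaged activity
        if a > hi:
            param -= eta * (a - hi)      # too active: reduce excitability
        elif a < lo:
            param += eta * (lo - a)      # too quiet: increase excitability
        return param                     # unchanged inside the target range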
Results
This results in a data set of ADHP mechanisms which maintain pyloricness in a degenerate set of pyloric network models to varying degrees of success. Though there are typically no true fixed points in these models, we find we can leverage timescale separation assumptions to predict asymptotic parameter configurations. We can then derive general conditions for ADHP’s success, according to whether homeostatic endpoints are also pyloric (Figure 1). More generally, we can predict for any individual pyloric network the range of homeostatic mechanisms that successfully maintain it, and validate these predictions with numerical simulation.
Discussion
Even though temporally defined properties like pyloricness cannot be directly specified by average activity levels, they can be maintained by activity-dependent homeostatic plasticity under specific conditions. To define these conditions, one must consider the set of perturbations with which the circuit may contend, in conjunction with the dynamic properties of the homeostatic mechanism itself. This work therefore suggests several avenues for experimental investigation, where responses to perturbation provide clues about homeostatic mechanisms, and knowledge of homeostatic mechanisms predicts responses to perturbation.




Figure 1. Differently parameterized ADHP mechanisms differentially recover pyloricness in a model circuit. ADHP endpoints are predicted by the overlap between target activity levels and average activity of regulated neurons. The intersection of these pseudo-nullclines may lie in or outside the pyloric region (black), resulting in successful (green), conditionally successful (yellow), or failing (red) ADHP.
Acknowledgements

This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
References
[1] Turrigiano, G. (1999). Homeostatic plasticity in neuronal networks: The more things change, the more they stay the same. Trends in Neurosciences, 22(5), 221–227. https://doi.org/10/frf24n
[2] Harris-Warrick, R. M. (Ed.). (1992). Dynamic biological networks: the stomatogastric nervous system. MIT press.
[3] Williams, H. (2005). Homeostatic plasticity improves continuous-time recurrent neural networks as a behavioural substrate. Proceedings of the International Symposium on Adaptive Motion in Animals and Machines, AMAM2005. Ilmenau, Germany: Technische Universität
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P290: Rate-based versus spike-triggered contributions in spike-timing–dependent synaptic plasticity
Tuesday July 8, 2025 17:00 - 19:00 CEST
P290 Rate-based versus spike-triggered contributions in spike-timing–dependent synaptic plasticity

Jakob Stubenrauch*1,2, Benjamin Lindner1,2

1Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany
2Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany

*Email: jakob.stubenrauch@rwth-aachen.de
Introduction
Spike-timing-dependent plasticity (STDP) has long been proposed as a phenomenological model class for synaptic learning [1], yet most theoretical frameworks of learning reduce plasticity to effectively rate-based descriptions. The short window of spike pairs contributing to STDP, around 20 ms [1], however, points to the relevance of precise postsynaptic spike responses. We investigate this timing-sensitive aspect of plasticity by dissecting synaptic dynamics into two contributions: spike pairs that fall into the STDP window by rate-dependent coincidence versus those occurring through direct causation, a crucial distinction that reflects fundamentally different learning mechanisms.

Methods
We develop a theoretical framework for the drift and diffusion of synaptic weights under STDP. We leverage established results [2,3] on the response of leaky integrate-and-fire (LIF) neurons, mean-field theory of spiking networks [4], and recent advances in shot-noise theory [5]. Specifically, we derive a Langevin equation that describes the stochastic evolution of synaptic weights. This framework naturally subdivides the synaptic dynamics into rate-based and correlated contributions. The theory is applied to synapses that deliver Poissonian spikes into a recurrent network of LIF neurons, for which it captures, per realization, the population mean and variance of the weights. The theory is tested against simulations of spiking neurons.
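In generic form, such a weight Langevin equation can be written as (our notation; the actual drift and diffusion coefficients are derived from the LIF response theory cited above):

    \dot{w} = A(w) + \sqrt{2 D(w)}\,\xi(t), \qquad \langle \xi(t)\,\xi(t') \rangle = \delta(t - t'),

with the drift A(w) = A_rate(w) + A_corr(w) split into the rate-based and spike-triggered (correlated) contributions, and D(w) the weight diffusion coefficient.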
Results
Our analysis quantifies and dissects the dynamics of synaptic weights. The contribution from the correlated response, neglected in effectively rate-based descriptions, increases with the mean synaptic weight and becomes significant even at modest weights where ~20 concurrent input spikes are needed to reliably elicit action potentials. We apply the theory to characterize a supervised training paradigm mimicking memory consolidation. In this paradigm, the drift and diffusion derived by the theory capture the encoding strength and decay of memory traces and, more importantly, manage to attribute these to rate-based and correlation-dependent contributions, respectively.

Discussion
The precise response of spiking neurons matters for plasticity if synaptic weights are large enough. As we demonstrate, this effect can have a large impact on the success or failure of associative learning. Based on our work, it is thus possible to judge under which circumstances STDP's strong tuning to closely succeeding spikes is important; correspondingly, capturing only the rate-based effects of STDP may overlook crucial aspects of learning. Future research should extend this approach to different neuron models, network architectures, and training paradigms, and the results should be tested experimentally. Finally, it would be of high interest to extend the framework to multiple populations and to recurrent plasticity.




Acknowledgements
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation),
SFB1315 B01, Project-ID No. 327654276 to B. L.
References
1.https://doi.org/10.1523/jneurosci.18-24-10464.1998
2.https://doi.org/10.1103/PhysRevLett.86.2186
3.https://doi.org/10.1103/PhysRevLett.86.2934
4.https://doi.org/10.1023/A:1008925309027
5.https://doi.org/10.1103/PhysRevX.14.041047
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P291: Weighted sparsity regularization for solving the inverse EEG problem: a case study
Tuesday July 8, 2025 17:00 - 19:00 CEST
P291 Weighted sparsity regularization for solving the inverse EEG problem: a case study

Ole Løseth Elvetun1, Niranjana Sudheer*2

1Faculty of Science and Technology, Norwegian University of Life Sciences, P.O. Box 5003, NO-1432 Ås, Norway


2Faculty of Science and Technology, Norwegian University of Life Sciences, P.O. Box 5003, NO-1432 Ås, Norway

*Email: niranjana.sudheer@outlook.com

Introduction
We present weighted sparsity regularization for solving the inverse EEG problem, which helps recover dipole sources while reducing depth bias. EEG is a non-invasive technique for monitoring cerebral activity, but it leads to an ill-posed inverse problem because signals from deep sources are weak. Standard regularization methods have been suggested to tackle this problem but suffer from significant spatial dispersion. This study proposes a redundant basis approach combined with a weighted sparsity term to improve recovery and lower spatial dispersion while reducing depth bias.
Methods
Our approach is based on theoretical results established in previous studies, but modifications are required to align with the classical EEG framework [1,2]. Generally, any dipole at a particular location can be expressed as a combination of three basis dipoles with independent orientations. We illustrate that employing more than three dipoles, specifically a redundant basis or frame, can enhance localization accuracy. We produce simulated event-related EEG data using SEREEGA [3], an open-source MATLAB toolbox, with montages of 64, 131, and 228 electrode channels. Simulations with three different dipole orientation setups (fixed, limited, and free) are conducted, and performance is analyzed using dipole localization error (DLE), spatial dispersion (SD), and Earth Mover's Distance (EMD) [3].
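For illustration, a weighted sparsity penalty of the form min_x 0.5||Ax − b||² + Σ_i w_i|x_i| can be handled with a standard proximal-gradient (ISTA) iteration; the lead field A, the weights w, and the step size below are placeholders, and the solver actually used in the study may differ:

    import numpy as np

    # ISTA for weighted-L1 regularized least squares: the soft-threshold
    # level differs per component, implementing depth weighting.
    def ista_weighted_l1(A, b, w, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - b)              # gradient of the data term
            z = x - g / L
            # component-wise soft-thresholding with weights w_i
            x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
        return x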
Results & Discussion
The proposed method performs better than sLORETA and Lp-norm approaches, with lower DLE values and reduced spatial dispersion. The frame-based methodology guarantees effective recovery of dipoles, especially in noise-free environments. We noticed that an increase in the number of frame dipoles resulted in reduced localization errors. Localization accuracy also improves when the number of EEG channels is increased, particularly in the limited orientation setup. A real-world test using EEG Motor Movement data [4,5] demonstrated the practical applicability of this approach.
Conclusion
Weighted sparsity regularization provides an effective approach to EEG inverse problems, enhancing dipole localization and minimizing depth bias. The method is effective for various dipole orientations and adaptable for real-world applications.





Acknowledgements
I would like to thank my supervisor Ole Løseth Elvetun and co - supervisor Bjørn Fredrik Nielsen for providing guidance and support throughout the research. I am also grateful to my friends and family for their encouragement and support.
References
1.https://doi.org/10.1515/jiip-2021-0057
2.https://doi.org/10.1090/mcom/3941
3.https://doi.org/10.1016/j.jneumeth.2018.08.001
4.https://doi.org/10.1109/TBME.2004.827072
5.https://doi.org/10.1161/01.CIR.101.23.e215









Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P292: Single-trial detection of lambda responses in free-viewing EEG measurements
Tuesday July 8, 2025 17:00 - 19:00 CEST
P292 Single-trial detection of lambda responses in free-viewing EEG measurements

Iffah Syafiqah Suhaili*1, Zoltan Nagy1,2, Zoltan Juhasz1
1Department of Electrical Engineering and Information Systems, University of Pannonia, 8200 Veszprem, Hungary
2Heart and Vascular Centre, Semmelweis University, 1085 Budapest, Hungary

*Email: ssiffah@phd.uni-pannon.hu
Introduction

Visual lambda responses are occipital activations evoked by saccadic eye movements. Their study is important for understanding visual processing during natural viewing conditions. Traditionally, lambda waves are detected by averaging many short epochs in which lambda responses are phase-locked to the stimulus. In natural viewing conditions, especially in experiments where trials span many seconds, their detection is difficult, and averaged ERP-based methods are not applicable because saccades occur in an unpredictable, non-time-locked manner. This study presents a novel method that can detect individual lambda responses in single trials without averaging, allowing for more naturalistic experimental designs.


Methods
80 art paintings were presented to 29 healthy volunteers. Each painting was displayed for 8 seconds in a random order, each followed by a 4-second blank screen. 128-channel EEG data were recorded using a Biosemi ActiveTwo EEG device. Participants were instructed to explore the painting and then respond by pressing a LIKE or DISLIKE button. After high-pass (1 Hz) and low-pass (40 Hz) filtering, the signals were decomposed into independent components using the Infomax Independent Component Analysis (ICA) method [1]. Simultaneously, eye movements were recorded with a Tobii Pro Fusion eye-tracker at a 250 Hz sampling rate. As the final step of the pre-processing, the EEG and eye-tracking data were synchronized.
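A sketch of single-trial lambda-peak detection on the synchronized data, assuming a fixed post-saccade search window and a simple amplitude criterion (both illustrative):

    import numpy as np
    from scipy.signal import find_peaks

    # Search a short window after each saccade offset in the IC time course
    # and keep the largest peak; window bounds and threshold are assumptions.
    def lambda_peaks(ic_activation, saccade_offsets, fs, win=(0.03, 0.15)):
        lo, hi = int(win[0] * fs), int(win[1] * fs)
        thr = 2 * np.std(ic_activation)          # illustrative amplitude criterion
        out = []
        for s in saccade_offsets:
            if s + hi > ic_activation.size:
                continue
            seg = ic_activation[s + lo : s + hi]
            peaks, props = find_peaks(seg, height=thr)
            if peaks.size:
                best = peaks[np.argmax(props["peak_heights"])]
                out.append((s + lo + best, (lo + best) / fs))  # sample index, latency (s)
        return out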

Results
Besides the usual eye-related artefact components (horizontal and vertical eye movements), the ICA decomposition produced a characteristic component displaying a distinct, rhythmic pulse-train pattern during the 8-second viewing period that diminished in the 4-second blank interval. This brain source component was located over the parieto-occipital electrodes (Pz–Oz). Overlaying the eye-tracking events (saccade onset and offset) on the ICA activation plot clearly shows that the pulses are time-locked to the saccade offsets, with an average latency of 82 ms. Fig. 1 illustrates these findings in detail.

Discussion
ICA can reliably detect saccade-related lambda waves in free-viewing experiments lasting at least 15 minutes. This method helps determine the number and temporal distribution of saccades characterizing perceptual behaviour (e.g. engagement, attention) in natural viewing experiments. Lambda wave properties (peak amplitude, peak latency, inter-peak distance) allow further quantitative analysis and can act as synchronization markers in segmenting sessions into saccade-evoked epochs locked to lambda peaks. Identifying the lambda component improves eye-movement artefact removal by including parieto-occipital activations. We hope this method will lead to new experimental approaches that advance our understanding of the human visual system.






Figure 1. ICA results highlighting saccade-related lambda waves. a) ICA activation plot of two stimulus-locked paintings (epochs 22 and 23), showing that the lambda response component (IC 3) occurs only during stimulus presentation. b) Scalp topography map of IC 3 over the parieto-occipital region. c) A zoomed-in single-trial segment circled in (a), displaying three lambda peaks aligned with saccade events.
Acknowledgements
This research was funded by the University Research Fellowship Programme (EKOP) (Code: 2024-2.1.1-EKOP-2024-00025/58) of Ministry of Culture and Innovation from the National Fund for Research, Development and Innovation.
References
1. Lee, T.-W., Girolami, M., & Sejnowski, T. J. (1999). Independent Component Analysis Using an Extended Infomax Algorithm for Mixed Subgaussian and Supergaussian Sources. Neural Computation, 11(2), 417–441. https://doi.org/10.1162/089976699300016719
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P293: Numerical Analysis of AP Propagation Parameter Thresholds Under Varied Space and Time Discretization
Tuesday July 8, 2025 17:00 - 19:00 CEST
P293 Numerical Analysis of AP Propagation Parameter Thresholds Under Varied Space and Time Discretization

Lucas Swanson*1, Erin Munro Krull1, Laura Zittlow1

1Mathematical Sciences Department, Ripon College, Ripon, WI, US

*Email: swansonl@ripon.edu
Introduction

Much is known about how discretization affects the numerical solution of PDEs. However, little is known about how discretization affects finding a parameter threshold for a PDE. In particular, we consider the sodium conductance propagation threshold (gNaT), the threshold for AP propagation when varying g̅_Na. Preliminary results show that this threshold, if known for simple morphologies, may be used to predict the gNaT of other, more complex morphologies.
Methods
We modeled cells using a Hodgkin-Huxley type model with parameters for a rat neocortical L5 pyramidal cell axon [1], in the NEURON software. Using a binary search, we calculated the gNaT of any morphology from a given stimulus to a given AP propagation test site. We explored the effects of the discretization parameters for time and space, dt and dx, on the gNaT of 10 randomly generated morphologies. We varied dt from 2⁻⁸ ms to 2⁻⁵ ms, and dx from 2⁻⁸ λ to 2⁻⁴ λ, where λ is the electrotonic length.
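The binary search itself is simple; in the sketch below, `propagates` is a hypothetical helper standing in for a NEURON run that stimulates the morphology and checks for an AP at the test site:

    # Binary search for the propagation threshold gNaT. `propagates(g)` is a
    # hypothetical callback wrapping a NEURON simulation; it is not part of
    # NEURON itself.
    def find_gnat(propagates, g_lo=0.0, g_hi=1.0, tol=1e-5):
        """Assumes propagation fails at g_lo and succeeds at g_hi; returns
        the conductance threshold to within `tol`."""
        while g_hi - g_lo > tol:
            g_mid = 0.5 * (g_lo + g_hi)
            if propagates(g_mid):
                g_hi = g_mid      # propagation succeeded: threshold is lower
            else:
                g_lo = g_mid      # propagation failed: threshold is higher
        return 0.5 * (g_lo + g_hi)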
Results
Our results show that increased dt leads to increased gNaT values, regardless of morphology and dx, and that increasing dx can cause gNaT values to diverge sporadically, especially in morphologies with short or tightly spaced branches.
Discussion
Further investigation should be done to find the true nature of dx's effects on gNaT, since the sporadic divergence of gNaT seen in our results could be attributed to the locations of branches being re-discretized, and/or to short branches having significantly different behaviors. That is, our results show that the accuracy of calculated parameter thresholds may be linked to morphology.




Acknowledgements
I would like to thank the faculty of the Ripon College math department, which includes my mentor for this project, Professor Erin Munro Krull, all of whom gave me advice and counsel. I would also like to thank the organizers of Ripon College's Summer Opportunities for Advanced Research (SOAR) program, as well as the many donors of the college who helped fund the program.
References
[1] Traub, R. D., Contreras, D., Cunningham, M. O., Murray, H., LeBeau, F. E., Roopun, A., Bibbig, A., Wilent, W. B., Higley, M. J., & Whittington, M. A. (2005). Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. Journal of Neurophysiology, 93(4), 2194. https://doi.org/10.1152/jn.00983.2004


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P294: Inhibitory-Targeted Plasticity in Developing Thalamocortical Networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P294 Inhibitory-Targeted Plasticity in Developing Thalamocortical Networks

Matthew P. Szuromi*1,2, Gabriel K. Ocker2,3

1Graduate Program for Neuroscience, Boston University, Boston, USA
2Department of Mathematics and Statistics, Boston University, Boston, USA
3Center for Systems Neuroscience, Boston University, Boston, USA

*Email: mszuromi@bu.edu

Introduction

The maturation of thalamocortical (TC) afferents is a key feature of critical periods (CPs) for primary sensory cortices [1]. Bienenstock-Cooper-Munro (BCM) theory for synaptic plasticity has been effective in describing thalamic projections onto pyramidal (Pyr) neurons in layer 4 (L4) of primary visual cortex (V1) [2]. However, these models often consider only a homogeneous population of cortical neurons, neglecting the recurrent connectivity within cortex and the various cell types innervated by TC axons, such as parvalbumin+ (PV+) interneurons. To address this, we develop an excitatory-inhibitory thalamocortical network model equipped with triplet BCM spike-timing-dependent plasticity (STDP) and rigorously describe its dynamics.
Methods
Our model comprises three neuronal populations: cortical excitatory (E), cortical inhibitory (I), and thalamic (X), the latter of which can have correlated spike trains. Neurons are modeled as a mutually exciting Hawkes process [3]. We examine systems where X-to-E, X-to-I, and E-to-I synapses can be plastic and update according to a triplet BCM STDP rule [4, 5]. Using standard separation of timescales, we derive dynamics for the mean interpopulation synaptic weights in terms of moments of the neural activity, calculated by the path integral formalism [6, 7, 8, 9]. We then apply numerical methods to assess how parameters (static weights, correlations, and STDP parameters) affect the stability and strength of the interpopulation weights.
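For reference, the conditional intensity of a mutually exciting Hawkes process [3] takes the generic form (notation ours, simplified):

    \lambda_i(t) = \lambda_i^{0} + \sum_j J_{ij} \sum_{t_j^k < t} g\!\left(t - t_j^k\right),

where \lambda_i^0 is the baseline rate, J_{ij} the synaptic weight, t_j^k the spike times of neuron j, and g(\cdot) a causal interaction kernel; linearity in past spikes is what keeps the activity moments, and hence the averaged STDP dynamics, analytically tractable.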
Results
When only TC synapses are plastic, TC weights strengthen in response to increased thalamic correlations. Further, we find that corticocortical inhibition must be sufficiently strong (i.e., the ratio of the mean I-to-E weight to the mean E-to-I weight must be sufficiently large) for both X-to-E and X-to-I weights to stabilize at nonzero values. Additionally, we analyze the network when E-to-I synapses are also plastic. We determine how parameters of the STDP rule and the network influence the trajectories and equilibria of the synaptic weight dynamics in response to varied thalamic correlations. In particular, we describe regimes where the trajectory of the mean E-to-I weight either mimics or opposes the trajectories of the TC weights.
Discussion
In this work, we extend models using triplet BCM STDP to excitatory-inhibitory networks. In L4 of V1, inhibitory synapses from PV+ interneurons onto Pyr neurons strengthen prior to the CP [10]. Our results suggest a possible explanation: strong inhibitory synapses are necessary for TC synapses to potentiate and stabilize. Experiments have also indicated that during the CP for V1, visual deprivation induces simultaneous TC depression and potentiation of Pyr to PV+ synapses [11]. Our results describe parameter regimes where this phenomenon can occur, suggesting potential plasticity rules for synapses onto PV+ cells during the CP.



Acknowledgements
M.P.S. acknowledges the Neurophotonics Center at Boston University for their support.
References
1. https://doi.org/10.1016/j.neuron.2020.01.031
2. https://doi.org/10.1038/381526a0
3. https://doi.org/10.1093/biomet/58.1.83
4. https://doi.org/10.1523/JNEUROSCI.1425-06.2006
5. https://doi.org/10.1073/pnas.1105933108
6. https://doi.org/10.1103/PhysRevE.59.4498
7. https://doi.org/10.1093/cercor/bhy001
8. https://doi.org/10.1371/journal.pcbi.1005583
9. https://doi.org/10.1103/PhysRevX.13.041047
10. https://doi.org/10.1523/JNEUROSCI.2979-10.2010
11. https://doi.org/10.7554/eLife.38846
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P295: How baseline activity determines neural entrainment by transcranial alternating current stimulation (tACS) in recurrent inhibitory-excitatory networks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P295 How baseline activity determines neural entrainment by transcranial alternating current stimulation (tACS) in recurrent inhibitory-excitatory networks

Saeed Taghavi*1,2, Gianluca Susi1, Alireza Valizadeh1,2, Fernando Maestú1

1Zapata-Briceño Institute of Neuroscience, Madrid, Spain
2Physics Department, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran

*Email: saeed.taghavi.v@gmail.com


Introduction
Neuronal oscillations play a key role in cognition and can be modulated by transcranial alternating current stimulation (tACS). However, the mechanisms underlying network-level entrainment remain unclear. We investigate how a balanced excitatory-inhibitory network of adaptive exponential integrate-and-fire neurons responds to sinusoidal stimulation. We analyze phase-locking to determine how external rhythmic inputs influence neural synchronization at different baseline network states.
Methods
We simulate a recurrent EI network that receives Poisson-distributed background input. Three baseline synchronization levels are studied, reflecting the degree of natural synchronization of neuronal activity within the network before any external stimulation is applied. Additionally, tACS-like stimulation is applied at frequencies ranging from 5 to 60 Hz with five different amplitudes (3, 4, 6, 8, and 10). Each condition is repeated over nine trials to ensure reliability. To quantify network entrainment, we compute the phase-locking value (PLV) between the population activity and the stimulation. Furthermore, we calculate the spike-field coherence (SFC) of individual neurons and measure changes in SFC with and without stimulation to assess how neuronal firing aligns with the external signal.
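For reference, a minimal PLV computation against the stimulation waveform (band-pass filtering, which would normally precede the Hilbert transform, is omitted here):

    import numpy as np
    from scipy.signal import hilbert

    # PLV between the network's population activity and a sinusoidal
    # stimulation of known frequency; returns a value in [0, 1].
    def plv(population_rate, stim_freq, fs):
        t = np.arange(population_rate.size) / fs
        stim_phase = 2 * np.pi * stim_freq * t
        sig_phase = np.angle(hilbert(population_rate - population_rate.mean()))
        return np.abs(np.mean(np.exp(1j * (sig_phase - stim_phase))))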
Results
Our results show that baseline network synchrony strongly influences entrainment. Networks with higher intrinsic synchrony exhibit stronger phase locking with the stimulation. When the stimulation frequency is close to the endogenous frequency, PLV increases with stimulation amplitude, suggesting that stronger inputs enhance entrainment only when the stimulation frequency matches the endogenous frequency. Frequency-dependent effects emerge, with the most robust responses occurring near the network’s intrinsic oscillation frequency. Individual neurons display varying phase coherence, with some aligning strongly to the stimulation while others remain weakly affected.
Discussion

We found that tACS-induced neural entrainment behaves in a way that challenges conventional expectations. While one might assume that higher baseline synchrony leads to broader entrainment, we found the opposite: networks with low baseline synchrony exhibit locking across a wider range of external frequencies, whereas highly synchronized networks show stronger locking that is tightly confined to the vicinity of the baseline frequency. This counterintuitive result underscores the delicate balance between baseline synchrony and tACS effectiveness, highlighting the need for nuanced approaches in cognitive and therapeutic applications.



Figure 1. (a) Entrainment of population activity to tACS varies with network synchrony and stimulation strength. Higher synchrony or amplitude increases peak PLV but narrows the high-PLV region. (b) Stimulation does not significantly alter firing rates but enhances phase coherence. (c) Spike phase coherence changes show a peak when the stimulation matches the network frequency.
Acknowledgements


References
[1] https://doi.org/10.1016/j.heliyon.2024.e41034
[2] https://doi.org/10.1101/2023.05.19.541493
[3] https://doi.org/10.1016/j.neuroimage.2022.118953
[4] https://doi.org/10.3390/biomedicines10102333
[5] https://doi.org/10.3389/fnsys.2022.827353



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P296: The modelling of the action potentials in myelinated nerve fibres
Tuesday July 8, 2025 17:00 - 19:00 CEST
P296 The modelling of the action potentials in myelinated nerve fibres
K. Tamm1*, T. Peets1, J. Engelbrecht1,2

1Tallinn University of Technology, Department of Cybernetics, Tallinn, Estonia
2Estonian Academy of Sciences, Tallinn, Estonia
*Email: kert.tamm@taltech.ee


Introduction. The classical Hodgkin-Huxley (HH) model [1] describes the propagation of an action potential (AP) in unmyelinated axons. In many cases, however, axons have a myelin sheath. A theoretical model is proposed describing AP propagation in myelinated axons, drawing inspiration from the studies of Lieberstein [2], who included the possible effect of inductance. The Lieberstein-inspired model (in the form of coupled partial differential equations (PDEs)) can describe all the essential effects characteristic of the formation and propagation of an AP in an unmyelinated axon. A phenomenological model for a myelinated axon is then described, including the influence of the structural properties of the myelin sheath and the radius of the axon.

Methods. The model equations are solved numerically using the pseudospectral method (PSM) [3]. Briefly, the main point of the PSM is that the discrete Fourier transform (DFT) can be used to approximate space derivatives, reducing the PDE to a system of ordinary differential equations (ODEs) that can then be integrated in time with standard ODE solvers. Here the ODE solver is used through its NumPy (Python) implementation. The parameters for the model are collected from experiments (most of them from the classical HH paper [1]) or estimated separately based on experimental observations.
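A minimal pseudospectral example in the same spirit, here for the diffusion equation u_t = u_xx on a periodic domain (the model PDEs are treated analogously; grid size and integrator tolerance are illustrative):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Spatial derivatives are taken in Fourier space; the resulting ODE
    # system is integrated in time with a standard solver.
    N, L = 128, 2 * np.pi
    x = np.linspace(0, L, N, endpoint=False)
    k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers

    def rhs(t, u):
        u_hat = np.fft.fft(u)
        u_xx = np.fft.ifft(-(k ** 2) * u_hat).real  # spectral second derivative
        return u_xx

    u0 = np.exp(-10 * (x - L / 2) ** 2)          # localized initial pulse
    sol = solve_ivp(rhs, (0.0, 0.5), u0, method="RK45")
    print(sol.y[:, -1].max())                    # the peak has diffused down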

Results. Using parameters from the experiments, we investigated the numerical solutions of the proposed model for the unmyelinated axon and demonstrated that the solutions stay within the physiologically plausible range and that the key characteristics of nervous signalling are fulfilled (annihilation of counter-propagating signals, threshold, refractory period). The model includes the structural properties of the myelin sheath: the μ-ratio (longitudinal geometry) and the g-ratio (perpendicular geometry). The key difference between the classical HH model and the Lieberstein-inspired model used here is that signal propagation along the axon emerges as a wave, a consequence of opting to retain the inductance.

Discussion. The goal of constructing yet another equation describing AP propagation along the axon is a clearer physical interpretation: we start from an elementary form of the Maxwell equations, modified to include the influence of myelination on the signal propagating along the axon. It is important to stress that the proposed continuum-based model is philosophically similar to how the transmission-line equations are composed. The 'unit cell' in the context of the myelinated axon is composed of a node of Ranvier and the myelinated section next to it. Having a pair of PDEs with a straightforward connection to the underlying physics could be useful for investigating causal connections in the context of nerve signalling.


Acknowledgements
This research was supported by the Estonian Research Council (PRG 1227). Jüri Engelbrecht acknowledges the support from the Estonian Academy of Sciences.
References
[1] https://doi.org/10.1113/jphysiol.1952.sp004764
[2] https://doi.org/10.1016/0025-5564(67)90026-0
[3] https://doi.org/10.1007/978-3-030-75039-8


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P297: Autonomous Generation of Neuronal Connection by Axon Guidance Factors
Tuesday July 8, 2025 17:00 - 19:00 CEST
P297 Autonomous Generation of Neuronal Connection by Axon Guidance Factors

Atsuya Tange*1, Shun Ogawa1, Minoru Owada1, Yuko Ishiwaka1


1SoftBank Corp., Tokyo, Japan

*Email: atsuya.tange01@g.softbank.co.jp

Introduction

Humans' robust cognitive and linguistic functions emerge from intricate connections among numerous neurons in the neocortex. To implement machine learning models that perform such highly cognitive tasks, theoretically clarifying the mechanism that generates such network connectivity is an important challenge. To address it, we propose an autonomous neuron connection model inspired by biological neuronal growth mechanisms [1]. This work focuses on axon elongation and the creation of synaptic connections between source and target neurons. We believe that this approach will improve our understanding of neural connectivity, and that it creates the brain's initial state for life-long learning [2].

Methods
The implemented model has two types of axon guidance factor sources, attractant and repellent, in a 2D space. The model is based on self-propelled particles (SPPs) [3] and the XY model [4]. The growth direction follows the gradient of extracellular guidance factors and is subject to noise. The tip of the axon is regarded as an SPP that observes only local information, such as the concentration itself and its gradient. The process includes axon branching, which occurs with some probability and creates another SPP. The tips move along the gradient field while avoiding each other to prevent axon overlap. They finally reach the dendrites of the target cells and create synapses.
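A toy sketch of a single growth-cone SPP under two Gaussian guidance fields; source positions, field widths, gains, and noise level are all assumptions:

    import numpy as np

    # The tip's heading relaxes toward the local gradient of (attractant
    # minus repellent) concentration, with angular noise, XY-model style.
    rng = np.random.default_rng(1)
    attract = np.array([5.0, 3.0])     # attractant source position (assumed)
    repel = np.array([2.0, 2.5])       # repellent source position (assumed)

    def gradient(p):
        # Sum of gradients of two Gaussian concentration fields at point p.
        g = np.zeros(2)
        for src, sign, s2 in [(attract, +1.0, 4.0), (repel, -1.0, 1.0)]:
            d = src - p
            g += sign * np.exp(-d @ d / (2 * s2)) * d / s2
        return g

    pos, theta = np.array([0.0, 0.0]), 0.0
    speed, dt, kappa, noise = 0.05, 1.0, 0.5, 0.1
    for _ in range(200):
        g = gradient(pos)
        target = np.arctan2(g[1], g[0])
        dtheta = np.angle(np.exp(1j * (target - theta)))   # wrapped angle difference
        theta += dt * kappa * dtheta + noise * np.sqrt(dt) * rng.standard_normal()
        pos = pos + speed * dt * np.array([np.cos(theta), np.sin(theta)])
    print(pos)   # the tip heads toward the attractant source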
Results
Figure 1 shows snapshots of the simulation results. The black, green, and red circles are source neurons, target neurons, and repulsion sources, respectively. The axon branches of each source neuron succeed in finding the dendrites of the target neurons in this environment.
The simulations were performed using Python (Fig. 1) and our original brain simulation framework, Bramuwork (P48 in [5]), with similar results. Bramuwork organizes graphs in a database. Each node stores attributes and methods (programs) that define its dynamics. Edges connect nodes, arranging the network connectivity and supporting hierarchical structure. Nodes and edges are used to model somas, dendrites, axons, and repellent factor sources.
Discussion
We note that the SPPs in the model observe only local information, such as the density of the chemical substances and its gradient, and do not use global information. At present, the gradient field induced by the chemical substances does not depend on time, although diffusion or transport occurs during the process and may affect the resulting neuronal network; these phenomena must be included without violating causality. The proposed 2D model could be generalized to 3D by replacing the XY spin interaction with a spherical one.

Bramuwork enables us to modify and examine models during running time. Neurons and connections can be created and deleted during simulation, and users can search and extract subsets of data for analysis.



Figure 1. Axon elongation under the axon guidance environment
Acknowledgements

References
[1] https://doi.org/10.1126/science.274.5290.1123
[2] https://doi.org/10.1038/s42256-022-00452-0
[3] https://doi.org/10.1103/PhysRevLett.75.1226
[4] https://doi.org/10.1093/acprof:oso/9780199577224.001.0001
[5] https://doi.org/10.1007/s10827-022-00841-9
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P298: The important role of astrocytic Na+ signaling in the astrocyte-neuron communication
Tuesday July 8, 2025 17:00 - 19:00 CEST
P298 The important role of astrocytic Na+ signaling in the astrocyte-neuron communication

Pawan K Thapaliya1, Alok Bhattarai1, and Ghanim Ullah1,*
1Department of Physics, University of South Florida, Tampa, FL 33620, USA.
*Email: gullah@usf.edu

Introduction

Emerging evidence indicates that neuronal activity-evoked changes in Na+ concentration in astrocytes ([Na]a) represent a special form of excitability, which is tightly linked to all other major ions in the astrocyte and extracellular space, as well as to bioenergetics, neurotransmitter uptake, and neurovascular coupling. Furthermore, [Na]a exhibits significant heterogeneity at the subcellular, cellular, and brain region levels.

Methods
We develop biophysical models to determine how [Na]a can regulate astrocytic function. We further investigate what the spatial heterogeneity of [Na]a at different scales means for astrocyte-neuron communication. Our models are supported by extensive imaging data of Na+ signals in astrocytes under different conditions.
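As a heavily simplified illustration of the [Na]a-to-ATP link (not the study's fitted model), activity-evoked Na+ influx can be balanced against a Hill-type Na+/K+-ATPase, with one ATP consumed per three Na+ extruded; all constants below are generic placeholders:

    # Toy astrocytic Na+ homeostasis: influx vs. NKA extrusion, with a
    # running tally of the ATP cost of pumping. Placeholder parameters.
    dt, T = 1e-3, 60.0                   # s
    na = 15.0                            # mM, resting [Na]a (assumed)
    j_max, km, n_h = 2.0, 10.0, 1.5      # NKA max rate, affinity, Hill coeff (assumed)
    baseline, bump = 1.3, 2.0            # mM/s influx at rest / during activity
    atp = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        influx = bump if 10.0 < t < 20.0 else baseline   # activity-evoked transient
        pump = j_max * na ** n_h / (na ** n_h + km ** n_h)
        na += dt * (influx - pump)
        atp += dt * pump / 3.0           # one ATP per three Na+ extruded
    print(f"final [Na]a = {na:.1f} mM, ATP consumed = {atp:.1f} (arb. units)")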
Results
Our work highlights the importance of [Na]a in almost every aspect of astrocytic function. For example, we have shown that the observed brain-region-specific heterogeneity in [Na]a signaling leaves cortical astrocytes more susceptible to Na+ and Ca2+ overload under metabolic stress as compared to hippocampal astrocytes. The model also predicts that activity-evoked [Na]a transients result in significantly higher ATP consumption in cortical astrocytes than in the hippocampus. The difference in ATP consumption is mainly due to the different expression levels of NMDA receptors in astrocytes in the two brain regions [1]. The model also closely reproduces the dynamics of extra- and intracellular pH under different conditions [2]. Furthermore, in conjunction with experimental data, our models also reveal that Na+ concentration varies across cellular compartments, from one cell to another, and across brain regions.

Discussion
Overall, this study emphasizes the significance of incorporating Na+ homeostasis in computational models of neuro-astrocytic coupling, specifically when studying brain (dys)function under metabolic stress. Our study also highlights how, by means of different Na+ concentrations, different astrocytes can differentially regulate the function of different neurons, or of different synapses emanating from the same neuron.



Acknowledgements
This work is supported by the National Institutes of Health through grant number R01NS130916.
References
[1] Thapaliya P, Pape N, Rose CR, and Ullah G (2023). Modeling the heterogeneity of sodium and calcium homeostasis between cortical and hippocampal astrocytes and its impact on bioenergetics. Front. Cell. Neurosci., 17, 1035553.

[2] Everaerts K, Thapaliya P, Pape N, Durry S, Eitelmann S, Ullah G, and Rose CR (2023). Inward Operation of Sodium-Bicarbonate Cotransporter 1 Promotes Astrocytic Na+ Loading and Loss of ATP in Mouse Neocortex during Brief Chemical Ischemia. Cells, 12, 2675.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P299: iCSD may produce spurious results in dense electrode arrays
Tuesday July 8, 2025 17:00 - 19:00 CEST
P299 iCSD may produce spurious results in dense electrode arrays

Joseph Tharayil*1,2,Esra Neufeld2, Michael Reimann1,3
1 Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL) Campus Biotech, Geneva, Switzerland
2Foundation for Research on Information Technologies in Society (IT'IS), Zurich, Switzerland
3Open Brain Institute, Lausanne, Switzerland

*Email: tharayiljoe@gmail.com
Introduction

Estimation of the current source density (CSD) is a commonly used method for processing and interpreting local field potential (LFP) signals by estimating the location of the neural sinks and sources that give rise to the LFP. However, recent in vivo experiments using dense electrode arrays have found surprising CSD patterns, with high-spatial-frequency oscillations between current sinks and sources [1].

Methods
We analytically compute the contribution of a two-dimensional Gaussian current source centered on an electrode array to the CSD (using the standard CSD method [2]) as a function of array density, current-source width, and current-source location. We show that spurious results, mistaking true sources for sinks and vice versa, are obtained when the inter-electrode spacing is small relative to the width of the current distribution (Fig. 1a).
To study the practical relevance of this issue, we simulated LFP recordings in a detailed model of rat cortex (200'000 morphologically detailed neurons; Fig. 1b) [3]. We estimate CSD from these recordings using the inverse CSD (iCSD) method [4] and, for a variety of electrode densities and CSD estimation parameters, compare the results to the ground-truth current distribution and to the "non-negative" CSD, a metric similar to the standard CSD method but which ignores regions where confounding of sources and sinks occurs.
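For reference, the standard CSD estimate [2] is just a scaled second spatial difference of the LFP, which makes the 1/h² amplification at small spacings explicit:

    import numpy as np

    # Standard CSD: (minus) the second spatial derivative of the LFP along the
    # electrode axis. With tight spacing h, noise and narrow sources are
    # amplified as 1/h^2, the effect analyzed above. Conductivity is assumed.
    def standard_csd(lfp, h, sigma=0.3):
        """lfp: (channels, time) array; h: inter-electrode spacing (m);
        sigma: tissue conductivity (S/m). Returns CSD for interior channels."""
        d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]
        return -sigma * d2 / h ** 2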
Results
With high-density arrays, our model of rat cortex produces the same high-spatial-frequency oscillation between sinks and sources observed in [1] (Fig. 1c.i-iv). As array density increases, the correlation between iCSD and ground-truth current density decreases (Fig. 1c). Modifying iCSD parameters improves the correlation, but the correlation between ground-truth CSD and non-negative CSD is consistently better than the correlation between ground-truth CSD and iCSD.

Discussion
Our results indicate that the high-spatial-frequency oscillations observed in in vivo CSD calculated using high-density electrode arrays are likely due to confusion between sinks and sources. This confusion occurs because the assumption underlying CSD estimation (that current sources are homogeneous over some radius in the plane perpendicular to the electrode array) is not satisfied in vivo. While more accurately specifying this radius parameter does improve the CSD estimate, no value results in a better correlation between iCSD and ground truth than that between non-negative CSD and ground truth, suggesting that the true CSD is not homogeneous at any scale.




Figure 1. a: A positive current source can produce a negative CSD contribution. b: Model of rat cortex (from [3]). c: Comparison of iCSD and objective CSD for various array spacings.
Acknowledgements
References
[1] http://dx.doi.org/10.7554/eLife.97290
[2] https://doi.org/10.1016/0165-0270(88)90056-8
[3] https://doi.org/10.1101/2023.05.17.541168
[4] https://doi.org/10.1016/j.jneumeth.2005.12.005
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P300: The neuron-synapse dynamics control correlational timescales in the developing visual system
Tuesday July 8, 2025 17:00 - 19:00 CEST
P300 The neuron-synapse dynamics control correlational timescales in the developing visual system

Ruben A. Tikidji-Hamburyan1*, Matthew T. Colonnese1
1School of Medicine and Health Sciences, The George Washington University, Washington, D.C., USA

*Email: rath@gwu.edu


Introduction: During early development, retinal spontaneous wave-like activity provides positional (spatial) information, encoded in coarse-grained (>100 ms) inter-neuron spike correlations [1], needed for the refinement of retinothalamic, thalamocortical, and intracortical connections. The formation of sub-cortical and cortical networks proceeds in parallel with the refinement of retinothalamic connections; therefore, spatial information must be transferred by an unrefined, imprecise thalamic network. Thalamocortical relay neurons (TCs) receive 10 to 20 inputs from neighboring ganglion cells at this age [2,3], which should cause fast (<100 ms) timescale correlations in TC firing. Here, we model how these correlational timescales are regulated.
Methods: TC neurons were simulated as a two-compartment (dendrosomatic: NaF, KDr, NaP, CaL, CaT, KA, SK, H currents and Ca2+ dynamics; axonal: NaF, KDr) conductance-based model derived from an adult model [4]. The parameters were fitted to reproduce the dynamics of mouse TCs recorded at postnatal day 7 (P7), using genetic algorithms with nondominated sorting [5] and Krayzman's adaptive multiobjective optimization [6]. The network model consists of 120 TC neurons activated by spikes of retinal ganglion cells (rGCs) recorded ex vivo at P6-P9. The probability of connections and the synaptic weights are modeled as Gaussian functions of distance. Each synapse was modeled in two stages: presynaptic depression and postsynaptic NMDAR and AMPAR currents [2,7].
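A sketch of the Gaussian, distance-dependent wiring rule, with an illustrative width chosen to give roughly the 10-20 inputs per TC cited above:

    import numpy as np

    # rGC-to-TC wiring on a unit line: connection probability and synaptic
    # weight both fall off as Gaussians of distance. Width is an assumption.
    rng = np.random.default_rng(2)
    n_rgc, n_tc, sigma = 200, 120, 0.03
    x_rgc = np.linspace(0, 1, n_rgc)
    x_tc = np.linspace(0, 1, n_tc)
    d = np.abs(x_rgc[:, None] - x_tc[None, :])       # pairwise distances
    p_conn = np.exp(-d ** 2 / (2 * sigma ** 2))      # connection probability
    w = 0.5 * np.exp(-d ** 2 / (2 * sigma ** 2))     # weight falls off the same way
    conn = (rng.random(d.shape) < p_conn) * w        # realized weight matrix
    print("mean convergence per TC:", (conn > 0).sum(axis=0).mean())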
Results: We show that, with the synaptic convergence observed at P7, either adult neuronal dynamics or adult synaptic current composition causes fast-timescale correlations and a dramatic decrease in the spatial information encoded in TC spikes; we therefore call these "parasitic" correlations. However, parasitic correlations are suppressed independently of convergence if the model replicates P7 neuronal dynamics and the dominance of slow NMDAR currents, a landmark property at this age [3]. Moreover, the interplay between neuronal and synaptic dynamics suppresses only parasitic correlations, keeping informative slow-timescale correlations intact. In contrast, parasitic correlations are negligible in networks with adult convergence and do not need to be suppressed.
Discussion: Our results suggest that developing neurons regulate their membrane and synaptic dynamics to preserve information critical for proper circuit formation by suppressing non-informative parasitic correlations. As we show, parasitic correlations can be invariantly suppressed, while informative correlations pass through an unrefined and imprecise network. Our modeling opens critical general questions: how are correlations transferred, and how does a network regulate correlational timescales? The answers go beyond neuronal excitability alone, as for synchrony transfer [8], and require synergistic regulation of both neuronal and synaptic dynamics.



Acknowledgements
This work was supported by R01EY022730 and R01NS106244
References
1. https://doi.org/10.1523/JNEUROSCI.19-09-03580.1999
2. https://doi.org/10.1002/cne.22223
3. https://doi.org/10.1016/S0896-6273(00)00166-5
4. https://doi.org/10.1371/journal.pcbi.1006753
5. https://doi.org/10.1109/4235.996017
6. https://doi.org/10.7554/eLife.84333
7. https://doi.org/10.1523/JNEUROSCI.4276-07.2008
8. https://doi.org/10.1016/j.neuron.2013.05.030


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P301: A shared algorithmic bound for human and machine performance on a hard inference task
Tuesday July 8, 2025 17:00 - 19:00 CEST
P301 A shared algorithmic bound for human and machine performance on a hard inference task

Daniele Tirinnanzi*1, Rudy Skerk1, Jean Barbier1,2, Eugenio Piasini1

1International School for Advanced Studies (SISSA), Trieste, Italy
2International Centre for Theoretical Physics, Trieste, Italy

*Email: dtirinna@sissa.it

Introduction

Recently, a successful approach in neuroscience has been to train deep nets (DNs) on tasks that are behaviorally relevant for humans or animals, with the goal of identifying emerging patterns in the implementation of key computations [1, 2], or to formulate compact hypotheses for physiological and perceptual phenomena [3, 4]. However, less attention has been given to the comparison of the limitations on the space of algorithms that are accessible to human cognition and DNs, as a method to generate (rather than test) hypotheses on shared architectural or learning constraints. Here we compare the performance of humans and DNs on the planted clique problem (PCP), a well-studied abstract task with known theoretical performance bounds [5, 6].
Methods
The PCP consists of detecting a set of K interconnected nodes (a “clique”) planted in a random graph of N nodes. We represent graphs as adjacency matrices and analyze performance across different N values. Four DNs are trained and tested on a binary classification task at 9 N values: a multilayer perceptron (MLP), a convolutional neural network (CNN), and two Vision Transformers [7], one pretrained (ViTpretrained) and one trained from scratch (ViTscratch). Fifteen human subjects perform a two-alternative forced choice task at 2 N values, selecting which of two presented graphs contains the clique. For each N, we measure accuracy for varying K values and fit a sigmoid to extract the clique detection threshold (K₀), used to compare agent performance.
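For readers unfamiliar with the task, a minimal sketch of how a planted-clique instance can be generated as an adjacency matrix is given below; the edge density of 1/2 is the standard PCP setting, and the sizes are arbitrary examples.

import numpy as np

def planted_clique(N, K, rng):
    # G(N, 1/2) adjacency matrix with a planted K-clique
    A = np.triu(rng.random((N, N)) < 0.5, 1)
    A = A | A.T                                   # symmetric, no self-loops
    clique = rng.choice(N, size=K, replace=False)
    A[np.ix_(clique, clique)] = True              # fully connect the planted nodes
    np.fill_diagonal(A, False)
    return A.astype(np.uint8), np.sort(clique)

rng = np.random.default_rng(1)
A, clique = planted_clique(N=100, K=12, rng=rng)
print(A.shape, clique[:5])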
Results
As shown in Figure 1, the CNN exhibits the lowest K₀ (highest clique detection sensitivity) at all N values except N = 200, 300 and 400. At these N values, the CNN performs poorly, making it impossible to estimate K₀. At all N values, the ViTpretrained and ViTscratch perform similarly, while the MLP consistently shows the lowest sensitivity, except at N = 100. Human performance in the task is comparable to that of DNs, with sensitivity at N = 300 closely matching that of the ViTpretrained and ViTscratch. Performance of all agents, both biological and artificial, falls far from the theoretical bounds of the problem.
Discussion
Our results show that different DNs achieve comparable performance in the PCP. This performance level, far from the problem’s theoretical bounds, is also observed in humans, suggesting a shared algorithmic limit between artificial and biological agents. Large-scale human experiments will help further characterize this threshold across all N values.
With its well-defined bounds, the PCP provides a novel framework for investigating the space of algorithms accessible to humans and DNs in simple visual inference tasks. Such interdisciplinary efforts - combining theoretical, computational, and behavioral perspectives - are essential for deepening our understanding of intelligence in both artificial and biological systems [8, 9].



Figure 1. Clique detection thresholds (K₀, log-scaled, y axis) as a function of the number of nodes (N, x axis) for humans (pink triangles) and DNs (MLP: red dots; ViTpretrained: dark green dots; ViTscratch: purple dots; CNN: light green dots). The green and the yellow lines indicate the statistical [5] and computational [6] bounds, respectively.
Acknowledgements
The HPC Collaboration Agreement between SISSA and CINECA granted access to the Leonardo cluster. DT is a PhD student enrolled in the National PhD program in Artificial Intelligence, XXXIX cycle, course on Health and life sciences, organized by Università Campus Bio-Medico di Roma.
References
1. https://doi.org/10.1038/nn.4244
2. https://doi.org/10.48550/arXiv.1803.07770
3. https://doi.org/10.1038/s41593-019-0520-2
4. https://doi.org/10.1016/j.cub.2022.12.044
5. https://doi.org/10.1017/S0305004100053056
6. https://doi.org/10.48550/arXiv.1304.7047
7. https://doi.org/10.48550/arXiv.2010.11929
8. https://doi.org/10.1017/S0140525X16001837
9. https://doi.org/10.1038/s41593-018-0210-5


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P302: Is the cortical dynamics ergodic?
Tuesday July 8, 2025 17:00 - 19:00 CEST
P302 Is the cortical dynamics ergodic?

Ferdinand Tixidre*1, Gianluigi Mongillo2,3, Alessandro Torcini1
1Laboratoire de Physique Théorique et Modélisation, CY Cergy Paris Université, Cergy-Pontoise, France
2School of Natural Sciences, Institute for Advanced Study, Princeton, NJ, USA
3Institut de la Vision, Sorbonne Université, Paris, France


*Email: ferdinand.tixidre@cyu.fr

Introduction

Cortical neurons in vivo show significant temporal variability in their spike trains even under virtually identical experimental conditions. This variability is partly due to the intrinsic stochasticity of spike generation. To account for the observed levels of variability, one needs to assume additional fluctuations in the activity over longer timescales [1,2]. But what is their origin? One theory suggests they result from non-ergodic network dynamics [3] arising from partially symmetric synaptic connectivity, consistent with anatomical observations [4]. However, it is unclear whether such ergodicity breaking occurs in networks of spiking neurons, given the fast temporal fluctuations in the synaptic inputs [5].


Methods
To address these questions, we study sparsely-connected networks of inhibitory leaky-integrate-and-fire neurons with arbitrary levels of symmetry, q, in the synaptic connectivity. The connectivity matrix ranges from random (q=0) to fully symmetric (q=1). Neurons also receive a constant excitatory drive, balanced by recurrent synaptic inputs. To assess ergodicity, we estimate single-neuron firing rates over increasing time intervals, T, starting from different initial membrane voltage distributions (for the same network). If the dynamics is ergodic, the difference, D, between estimates from different initial conditions should approach zero as 1/T for large T.
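A minimal sketch of the ergodicity diagnostic described above, assuming spike trains are available as arrays of spike times; Poisson surrogates stand in here for the output of the two network runs, so the numbers only illustrate how D(T) is computed, not the scaling reported in the Results.

import numpy as np

def D(spikes_a, spikes_b, T):
    # mean absolute difference between single-neuron firing-rate estimates
    # over [0, T] from two runs started from different initial conditions
    r_a = np.array([np.sum(s < T) / T for s in spikes_a])
    r_b = np.array([np.sum(s < T) / T for s in spikes_b])
    return np.mean(np.abs(r_a - r_b))

rng = np.random.default_rng(0)
rates = rng.uniform(1.0, 10.0, 50)                        # Hz, toy values
make_run = lambda: [np.cumsum(rng.exponential(1.0 / r, 20000)) for r in rates]
run_a, run_b = make_run(), make_run()                     # two "initial conditions"
for T in (10.0, 100.0, 1000.0):
    print(f"T={T:6.0f} s   D={D(run_a, run_b, T):.4f}")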

Results
This is, in fact, what happens in random networks (i.e., q = 0; Fig. 1(a)). In partially symmetric networks (q > 0), the onset of the "ergodic" regime occurs at longer and longer times. The situation becomes dramatic for the fully symmetric network (q = 1), where D does not decay even for time windows five orders of magnitude longer than the membrane time constant, as shown in Fig. 1(a); the network dynamics is non-ergodic, at least in a weak sense. In this regime, the network activity is sparse, with a large fraction of almost-silent neurons, and the auto-covariance function of the spike trains exhibits long timescales (Fig. 1(b)). Both features are also routinely observed in experimental recordings [6,7].

Discussion
Taken together, our results support the idea that many features of cortical activity can be parsimoniously explained by the non-ergodicity of the network dynamics. In particular, in this regime, the activity level of single neurons can change significantly depending on the “microscopic” initial conditions (which are beyond experimental control) (Fig. 1(c-d)), providing a simple explanation for the large trial-to-trial fluctuations observed in experiments.





Figure 1. (a) D as a function of time for different values of q: q=0 (blue); q=0.5 (green); q=0.8 (orange); q=0.9 (red); q=0.95 (brown); q=1.0 (black). (b) Auto-correlation of synaptic currents for different q. (c-d) Cumulative firing rate of a single neuron for q=0.8 (c) and q=0.9 (d). Shades of the main color represent different replicas. The insets show the instantaneous firing rate of the same neuron.
Acknowledgements
F.T. and A.T. received financial support from the Labex MME-DII (Grant No. ANR-11-LBX-0023-01) and from CY Generations (Grant No. ANR-21-EXES-0008). G.M.'s work is supported by grants ANR-19-CE16-0024-01 and ANR-20-CE16-0011-02 from the French National Research Agency and by a grant from the Simons Foundation (891851, G.M.).


References
1. https://doi.org/10.1016/j.neuron.2010.12.037
2. https://doi.org/10.1167/18.8.8
3. https://www.biorxiv.org/cgi/content/short/2022.03.14.484348
4. https://doi.org/10.1126/science.abj5861
5. https://doi.org/10.1038/s41598-019-40183-8
6. https://doi.org/10.1007/s00359-006-0117-6
7. https://doi.org/10.1038/nn.3862


Speakers

Alessandro TORCINI

Professor, CY Cergy Paris Université
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P303: Sleep-like Homeostatic and Associative Intra- and Inter-Areal Plasticity Enhances Cognitive and Energetic Performance in Hierarchical Spiking Network
Tuesday July 8, 2025 17:00 - 19:00 CEST
P303 Sleep-like Homeostatic and Associative Intra- and Inter-Areal Plasticity Enhances Cognitive and Energetic Performance in Hierarchical Spiking Network

Leonardo Tonielli1, Cosimo Lupo1, Elena Pastorelli1, Giulia De Bonis1, Francesco Simula1, Alessandro Lonardo1, Pier Stanislao Paolucci1

1Istituto Nazionale di Fisica Nucleare, Sezione di Roma
Introduction

Can hierarchical bio-inspired AI spiking networks and biological brains engaged in incremental learning benefit from unsupervised plasticity during an offline deep-sleep-like period? We show that simultaneous intra- and inter-areal plasticity enhances the cognitive and energetic benefits of deep-sleep-like activity in a thalamo-cortical model inspired by the cortical organizing principle [1] and the homeostatic-associative sleep hypothesis as in [2,3], which learns, retrieves and classifies handwritten digits from few examples. This outperforms the results presented in [4], where deep sleep is limited to cortico-cortical plasticity.
Methods
The network is a two-area spiking model (Fig. 1A) using integrate-and-fire neurons with spike-frequency adaptation. Each layer is composed of excitatory-inhibitory populations. The input consists of MNIST images preprocessed with a HOG filter [5] (30 training, 250 test). The perceptual stream is released from the thalamus and propagates through plastic feedforward connections to the cortex, which encodes memories within neural assemblies elicited by specific contextual stimuli. Sleep-like dynamics is stimulated by non-specific cortical noise generating slow-oscillation activity that promotes memory replay and thus consolidates learning through homeostatic and associative processes within cortical synapses and the thalamo-cortical loop.
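As a pointer to how such HOG preprocessing can be done, here is a minimal sketch using scikit-image; the descriptor parameters are assumptions, since the abstract does not specify the exact HOG configuration used.

import numpy as np
from skimage.feature import hog  # scikit-image

rng = np.random.default_rng(0)
img = rng.random((28, 28))       # stand-in for a 28x28 MNIST digit

features = hog(img, orientations=8, pixels_per_cell=(7, 7),
               cells_per_block=(1, 1), feature_vector=True)
print(features.shape)            # (128,) = 4x4 cells x 8 orientations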
Results
We assessed the cognitive and energetic performance of the network by measuring the most-active neuron’s classification accuracy (Fig. 1B), the network’s mean firing rate (C), and the synaptic change (C, D) over 2000 seconds of sleep. We compared thalamo-cortical plastic sleep with cortico-cortical plasticity only. Our findings indicate that full thalamo-cortical plasticity strongly enhances classification performance (B) and firing-rate downscaling (C) while preserving the same associative-homeostatic behaviour at the cortico-cortical synaptic level (D). Specifically, we observed a significant 5% improvement in classification accuracy and a 25% reduction in firing rate, enabling the network to classify better while consuming less energy.
Discussion
We proposed a minimal thalamo-cortical model that classifies images drawn from the MNIST set of handwritten digits and is capable of improving cognitive performance through homeostatic-associative cortical plastic deep-sleep-like activity. While cortical sleep is important to normalize high-level representations and to develop new synapses, our new results suggest that thalamo-cortical sleep is fundamental to coordinate cortical activation and to regulate its waking activity. This effect might also benefit deep neural network algorithms, which lack this generalization feature, and is also relevant for cerebral neural networks.



Figure 1. Solid lines: full plasticity; dotted: cortico-cortical only. Deep sleep after training with 3 examples per digit class. (A) Network structure. (B) Classification from the most active neuron. (C) Mean firing rate during classification and overall synaptic change. (D) Cortico-cortical synaptic change: synapses encoding assemblies (blue), same class (yellow), different class (red). 100 configurations.
Acknowledgements
Work cofunded by the European Next Generation EU grants, Italian grants CUP I53C22001400006 (FAIR PE0000013 PNRR) and CUP B51E22000150006 (EBRAINS-Italy IR00011 PNRR), the APE parallel/distributed lab at INFN Roma, and BRAINSTAIN.
Leonardo Tonielli is a PhD student of the National PhD program in Artificial Intelligence, XL cycle, Health and life sciences, organized by Università Campus Bio-Medico di Roma.

References
1. https://doi.org/10.1016/j.tins.2012.11.006
2. https://doi.org/10.1016/j.neuron.2013.12.025
3. https://doi.org/10.1016/j.neuron.2016.03.036
4. https://doi.org/10.1038/s41598-019-45525-0
5. https://doi.org/10.1109/CVPR.2005.177

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P304: Impulsivity Enhances Random Exploration and Boosts Performance in Loss but not Gain Contexts
Tuesday July 8, 2025 17:00 - 19:00 CEST
P304 Impulsivity Enhances Random Exploration and Boosts Performance in Loss but not Gain Contexts

Lingyu Meng1, Alekhya Mandali1,2, Hazem Toutounji*1,2,3

1School of Psychology, University of Sheffield, Sheffield, UK
2The Neuroscience Institute, University of Sheffield, Sheffield, UK
3Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, UK


*Email: h.toutounji@sheffield.ac.uk

Introduction
People often encounter decisions that may lead to gains or losses. As they learn the value of available choices, whether positive in the case of gains and rewards or negative in the case of losses, people also need to balance gathering information (exploration) against capitalising on their current knowledge (exploitation). Exploration itself can be either random or directed towards reducing uncertainty [1]. While psychiatric traits like impulsivity are known to influence exploration [2], there is no account of how this influence relates to the learning context, such as gain or loss. This study investigates how impulsivity modulates different exploration strategies and decision performance in a context-dependent manner.

Methods
Human participants (N = 115) completed a two-armed bandit task in which, across different rounds, they could win or lose points. Each arm delivered or cost either a fixed or a variable (uncertain) number of points. Learning and exploration behaviour were modelled using reinforcement learning. Crucially, trial-by-trial uncertainty was incorporated into the model using a Kalman filter for the learning process and a hybrid choice model with three components [1]: value-dependent random exploration, and uncertainty-dependent random and directed exploration. Impulsivity was measured using the UPPS-P Impulsive Behaviour Scale [3]. A general linear mixed model quantified the interaction between impulsivity, exploration strategies, and context.
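A minimal sketch of this modelling approach, assuming the Kalman-filter bandit learner and the three-component hybrid choice rule of [1]; the observation variance, priors, payoffs, and weights below are placeholders (in the study, the weights are fit to participants' choices).

import numpy as np

def kalman_update(m, v, choice, reward, obs_var=10.0):
    # Kalman-filter update of posterior mean m and variance v for the chosen arm
    k = v[choice] / (v[choice] + obs_var)         # Kalman gain
    m[choice] += k * (reward - m[choice])
    v[choice] *= (1.0 - k)
    return m, v

def p_choose_first(m, v, w):
    # hybrid rule: V (value-dependent random), RU (directed),
    # V/TU (uncertainty-dependent random exploration)
    V = m[0] - m[1]
    RU = np.sqrt(v[0]) - np.sqrt(v[1])
    TU = np.sqrt(v[0] + v[1])
    z = w[0] * V + w[1] * RU + w[2] * V / TU
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, v = np.zeros(2), np.full(2, 100.0)             # priors per arm
for t in range(20):
    p = p_choose_first(m, v, w=(0.3, 0.2, 0.1))
    choice = 0 if rng.random() < p else 1
    reward = rng.normal(1.0 if choice == 0 else -1.0, 3.0)   # toy payoffs
    m, v = kalman_update(m, v, choice, reward)
print(m, v)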

Results
Participants engaged in significantly more value-dependent random exploration and less uncertainty-dependent random exploration in the loss context compared to the win context. However, impulsive individuals showed the opposite trend, relying significantly more on uncertainty-dependent random exploration in the loss context. Impulsivity was also positively linked to task performance in loss contexts, suggesting that impulsive individuals adaptively leveraged random exploration to manage uncertainty. In other words, impulsive individuals engaged in more uncertainty-dependent random exploration, especially when facing losses, and benefited from this strategy.

Discussion

Our findings highlight the adaptive role of impulsivity in uncertain environments, particularly those leading to losses. Impulsive individuals appear to be more sensitive to total uncertainty, effectively using random exploration to improve performance. These results contrast with prior studies that emphasise the maladaptive nature of impulsivity, suggesting instead its potential benefits in high-stakes loss contexts. Our findings also contradict prospect theory [4], showing more risk aversion for losses than for gains. Further, this win-loss asymmetry is amplified in impulsive individuals, highlighting the importance of taking individual traits into account when developing theories of human learning and decision making.



Acknowledgements
This work was funded by the University of Sheffield.
References
1. doi: 10.1016/j.cognition.2017.12.014
2. doi: 10.1038/s41467-022-31918-9
3. doi: 10.3389/fpsyt.2019.00139
4. doi: 10.2307/1914185
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P305: Developing a Digital Twin of the Drosophila Optic Lobe: A Large-Scale Autoencoder Trained on Natural Visual Inputs Using Complete Connectome Data
Tuesday July 8, 2025 17:00 - 19:00 CEST
P305 Developing a Digital Twin of the Drosophila Optic Lobe: A Large-Scale Autoencoder Trained on Natural Visual Inputs Using Complete Connectome Data

Keisuke Toyoda*1, Naoya Nishiura1, Masataka Watanabe1

1The University of Tokyo, Tokyo, Japan

*Email: toyoda-keisuke527@g.ecc.u-tokyo.ac.jp

Introduction

The optic lobe is the main visual system of Drosophila, involved in functions like motion detection [2]. Recent advances in connectome projects have provided near-complete synaptic maps [1,3,8], enabling detailed circuit analyses. A recent study trained a connectome-based neural network to reproduce the motion detection properties of neurons T4 and T5, assuming vector teaching signals like optical flow, which are absent in biological circuitry. In this study, we use the right optic lobe’s connectivity from FlyWire [5,8] to build a large-scale autoencoder in which the visual input itself serves as the teaching signal [6]. In doing so, we aim to develop a digital twin of the Drosophila optic lobe under biologically plausible training conditions.

Methods
We derived a synaptic adjacency matrix from the entire right optic lobe, yielding about 45,000 nodes and over 4.5 million edges [5]. Photoreceptors (R1–R6) served as both input and output in an autoencoder that preserves feedforward and feedback connections [6]. We trained it with natural video stimuli, adjusting synaptic weights to minimize reconstruction error between initial and reconstructed signals. Each iteration also incorporated slight temporal offsets to assess predictive capacity. Neuronal activity was then analyzed by topological distance from the photoreceptors, allowing us to track signal propagation through deeper layers [2].
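The following toy sketch illustrates the core idea of a connectome-masked autoencoder, with a random sparse matrix standing in for the FlyWire adjacency and dimensions far below the ~45,000 nodes of the actual model; it is a conceptual illustration under these assumptions, not the authors' implementation.

import torch

torch.manual_seed(0)
n, steps = 1000, 5                            # toy size; the real model has ~45,000 nodes
mask = (torch.rand(n, n) < 0.01).float()      # stand-in for the FlyWire adjacency
W = torch.nn.Parameter(torch.randn(n, n) * 0.01)
photoreceptors = torch.arange(50)             # nodes used as both input and output

def run(x_in):
    x = torch.zeros(n)
    x[photoreceptors] = x_in                  # inject visual input
    for _ in range(steps):
        x = torch.tanh((W * mask) @ x)        # only anatomical edges carry signal
    return x[photoreceptors]

opt = torch.optim.Adam([W], lr=1e-3)
x_in = torch.rand(50)                         # toy visual input
loss = ((run(x_in) - x_in) ** 2).mean()       # reconstruction error
loss.backward()
opt.step()
print(float(loss))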
Results
After training, the autoencoder accurately reconstructed photoreceptor inputs, achieving low mean squared error across varied visual contexts. Neurons beyond superficial lamina layers showed moderate activity, implying that deeper circuits were engaged, though not intensely. Under prolonged stimulation, activation patterns stabilized, suggesting recurrent loops that dampen fluctuations. These results align with reports that feedback modulates photoreceptors to maintain sensitivity [6]. Performance analyses indicated that minor temporal offsets improved predictive accuracy, hinting that the network captures short-term correlations in visual input.
Discussion
Our findings show that a connectome-based autoencoder, using the entire right optic lobe, can reconstruct visual inputs while incorporating known feedback loops. By preserving anatomical wiring [5,8], the model reveals how structural constraints inform function. Compared to approaches that highlight local motion detection [4] or rely on supervised learning [3], our unsupervised method uncovers emergent coding without explicit tasks. Although deep-layer neurons were only moderately active, their engagement suggests hierarchical processing aids reconstruction [2]. Future studies could dissect subnetworks for contrast gain or motion detection to clarify how feedback refines perception [1,6].



Acknowledgements
This work has been supported by the Mohammed bin Salman Center for Future Science and Technology for Saudi-Japan Vision 2030 at The University of Tokyo (MbSC2030) and JSPS KAKENHI Grant Number 23K25257.
References
● https://doi.org/10.7554/eLife.57443
● https://doi.org/10.1146/annurev-neuro-080422-111929
● https://doi.org/10.1038/s41586-024-07939-3
● https://doi.org/10.1016/j.cub.2015.07.014
● https://doi.org/10.1038/s41592-021-01330-0
● https://doi.org/10.1371/journal.pbio.1002115
● https://doi.org/10.1007/s00359-019-01375-9
● https://doi.org/10.1038/s41586-024-07558-y


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P306: Computational investigation of wave propagation in a desynchronized network
Tuesday July 8, 2025 17:00 - 19:00 CEST
P306 Computational investigation of wave propagation in a desynchronized network

Lluc Tresserras Pujadas*1, Leonardo Dalla Porta1, Maria V. Sanchez-Vives1,2
1Systems Neuroscience, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
2ICREA, Passeig Lluís Companys, Barcelona, Spain


*Email: tresserrasi@recerca.clinic.cat

Introduction

The cerebral cortex exhibits a rich repertoire of spontaneous spatiotemporal patterns of activity that strongly depend on the dynamical regime of the brain. Specifically, its dynamics can range from highly synchronized states (e.g., slow wave sleep), characterized by the presence of slow oscillations (SO), to more asynchronous patterns (e.g., awake states). However, under certain specific conditions, slow waves can spontaneously emerge and propagate within awake cortical networks, such as in cases of sleep deprivation [1], lapses of attention [2], or brain lesions [3]. Although recent studies have described this phenomenon, the mechanisms facilitating slow wave percolation into desynchronized cortical areas remain poorly understood.


Methods
To investigate this question, we employed a biophysically realistic two-dimensional computational model simulating the desynchronized activity characteristic of awake states [4]. By inducing slow oscillations in a localized cortical area, we investigated how slow waves percolate into neighboring awake regions. Specifically, we examined how changes in the excitatory/inhibitory balance and structural connectivity of the network can enhance or reduce the percolation of slow waves into desynchronized areas. To quantify slow wave propagation in the desynchronized network, we analyzed evoked network activity using different percolation metrics, such as the range of activation and shared information across the network.
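As an example of one such percolation metric, a minimal sketch of a range-of-activation measure is given below; the network positions and evoked rates are toy surrogates, and the threshold is an arbitrary placeholder.

import numpy as np

def activation_range(rates, positions, center, threshold=1.0):
    # farthest distance from the slow-oscillation focus with supra-threshold firing
    d = np.linalg.norm(positions - center, axis=1)
    active = rates > threshold
    return d[active].max() if active.any() else 0.0

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, (500, 2))                               # toy 2D network
center = np.array([0.5, 0.5])
rates = 10.0 * np.exp(-5.0 * np.linalg.norm(pos - center, axis=1))  # toy evoked rates
print(activation_range(rates, pos, center))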

Results
Our results indicate that increasing the proportion of long-range postsynaptic connections in excitatory neurons enhances global synchronization, facilitating the propagation of SO activity into desynchronized regions. We also examined the impact of inhibition on slow wave propagation by modulating the excitatory/inhibitory balance in the SO region of the network. Reducing inhibition increased cortical excitability and local synchronization within the SO region, thereby enhancing the spread of slow oscillations into the desynchronized network.

Discussion
In summary, we showed that increasing the proportion of long-range excitatory connections enhances global synchronization, while reducing inhibition promotes local synchronization and neuronal excitability, both facilitating the spread of slow oscillations into desynchronized areas. These findings are further supported by the use of different percolation metrics, reinforcing the idea that the structural and functional properties of the network play a crucial role in determining cortical vulnerability to slow wave percolation. Together, our results are a first step towards mechanistically understanding the dynamical changes that occur in the lesioned brain and their underlying mechanisms, offering a path to the development of future therapeutic strategies for neurological disorders.





Acknowledgements
Funded by PID2020-112947RB-I00 financed by MCIN/ AEI /10.13039/501100011033 and by European Union (ERC, NEMESIS, project number 101071900) to MVSV and PRE2021-101156 financed by the Spanish Ministry of Science and Innovation.
References
[1] Vyazovskiy, V. V., et al. (2011). Local sleep in awake rats. Nature, 472, 443-447.
[2] Andrillon, T., et al. (2021). Predicting lapses of attention with sleep-like slow waves. Nat Commun, 12, 3657.
[3] Massimini, M., et al. (2024). Sleep-like cortical dynamics during wakefulness and their network effects following brain injury. Nat Commun, 15, 7207.
[4] Barbero-Castillo, A., et al. (2021). Impact of GABAA and GABAB inhibition on cortical dynamics and perturbational complexity during synchronous and desynchronized states. J Neurosci, 41, 5029-5044.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P307: A Computational Model to Study Effects of Hidden Hearing Loss in Noisy Environments
Tuesday July 8, 2025 17:00 - 19:00 CEST
P307 A Computational Model to Study Effects of Hidden Hearing Loss in Noisy Environments

Siddhant Tripathy*1, Maral Budak2, Ross Maddox3, Gabriel Corfas3, Michael T. Roberts3, Anahita H. Mehta3, Victoria Booth4, Michal Zochowski1,5

1Department of Physics, University of Michigan, Ann Arbor, USA
2Department of Microbiology and Immunology, University of Michigan, Ann Arbor, USA
3Kresge Hearing Research Institute, University of Michigan, Ann Arbor, USA
4Department of Mathematics, University of Michigan, Ann Arbor, USA
5Biophysics Program, University of Michigan, Ann Arbor, USA

*Email: tripps@umich.edu

Introduction

Hidden Hearing Loss (HHL) is an auditory neuropathy leading to reduced speech intelligibility in noisy environments despite normal audiometric thresholds. One of the leading hypotheses for this degraded performance is myelinopathy, a permanent disruption of the myelination patterns of type 1 Spiral Ganglion Neuron (SGN) fibers [1,2]. Previous studies of location discriminability in Medial Superior Olive (MSO) cells in the left and right hemispheres, as a function of the interaural time difference (ITD), have shown that myelinopathy leads to signatures of HHL [3]. However, the effects of noise on location discriminability are unknown.
Methods
To investigate these effects, we developed a physiologically based model that incorporates SGN fiber activity to sound stimuli processed through a peripheral auditory system model [4]. To simulate myelinopathy, we introduced random variations in the position of myelination heminodes, which generates phase shifts in the spike timing of affected fibers. To test the subsequent effects on sound localization, we constructed a network model that simulates the propagation of SGN responses to cochlear nuclei and the MSO populations. We varied the location of the sound impulse by introducing a phase shift in the input in one ear relative to the other, with background noise signals kept stationary.
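Phase locking of single fibers, discussed in the Results, is commonly quantified by the vector strength; a minimal sketch follows, with Gaussian spike-time jitter standing in for the heminode-induced phase shifts (the tone frequency and jitter magnitude are placeholders, not fitted values).

import numpy as np

def vector_strength(spike_times, freq):
    # phase locking of spikes to a tone at freq (1 = perfect, 0 = none)
    phases = 2.0 * np.pi * freq * spike_times
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

rng = np.random.default_rng(0)
freq = 500.0                                   # Hz, hypothetical tone
spikes = np.arange(0.0, 1.0, 1.0 / freq)       # perfectly phase-locked train
jitter = rng.normal(0.0, 2e-4, spikes.size)    # stand-in for heminode-induced shifts
print(vector_strength(spikes, freq))           # ~1.0
print(vector_strength(spikes + jitter, freq))  # < 1.0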
Results
Upon adding noise to the sound stimuli, we find that spikes in a given SGN fiber's spike train are shifted inhomogeneously, leading to a reduction in phase locking of single fibers to sound. The effects of myelinopathy on population behavior are thus more pronounced in the presence of noise. Subsequently in the localization network, we find that the sensitivity to ITD is reduced in myelinopathy conditions, and that this effect is significantly exacerbated when we introduce noisy background stimuli, a signature of HHL.
Discussion
We find that noisy environments exacerbate HHL symptoms. This model may be useful in understanding the downstream impacts of SGN neuropathies.




Acknowledgements
This research was supported in part by National Institutes of Health grants NIH MH135565 (MZ and ST) and R01DC000188 (GC).
References
1. https://doi.org/10.1038/ncomms14487
2. Budak, M., Grosh, K., Sasmal, A., Corfas, G., Zochowski, M., & Booth, V. (2021). Contrasting mechanisms for hidden hearing loss: Synaptopathy vs myelin defects. PLoS Comput Biol, 17:e1008499. https://doi.org/10.1371/journal.pcbi.1008499
3. Budak, M., Roberts, M. T., Grosh, K., Corfas, G., Booth, V., & Zochowski, M. (2022). Binaural processing deficits due to synaptopathy and myelin defects. Front Neural Circuits, 16:856926. https://doi.org/10.3389/fncir.2022.856926
4. https://doi.org/10.1121/1.1453451 (PMID: 12051437)


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P308: Brain-Inspired Recurrent Neural Network Featuring Dendrites for Efficient and Accurate Learning in Classification Tasks
Tuesday July 8, 2025 17:00 - 19:00 CEST
P308 Brain-Inspired Recurrent Neural Network Featuring Dendrites for Efficient and Accurate Learning in Classification Tasks

Eirini Troullinou*1,2, Spyridon Chavlis1, Panayiota Poirazi1

1Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion, Greece
2Institute of Computer Science, Foundation for Research and Technology-Hellas, Heraklion, Greece

*Email: eirini_troullinou@imbb.forth.gr

Introduction

Artificial neural networks (ANNs) have achieved substantial advancements in addressing complex tasks across diverse domains, including image recognition and natural language processing. These networks rely on a large number of parameters to attain high performance; however, as the complexity of ANNs increases, the challenge of training them efficiently also escalates [1]. In contrast, the biological brain, which has served as a fundamental inspiration for ANN architectures [2], exhibits remarkable computational efficiency by processing vast amounts of information with minimal energy consumption [3]. Moreover, biological neural networks demonstrate robust generalization capabilities, often achieving effective learning with limited training samples, a phenomenon known as few-shot learning.

Methods
In an effort to develop a more biologically plausible computational model, we propose a sparse, brain-inspired recurrent neural network (RNN) that incorporates biologically motivated connectivity principles. This approach is driven by the computational advantages of dendritic processing [4], which have been extensively studied in biological neural networks. Specifically, our model enforces structured connectivity constraints that emulate the physical relationships between dendrites, neuronal somata, and inter-neuronal connections. These biologically inspired connectivity rules are implemented via structured binary masking, thereby regulating the network's architecture based on empirical neurophysiological observations.
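A minimal sketch of structured binary masking is shown below; the specific rule (each dendrite assigned to one soma) and the sizes are illustrative assumptions, not the study's empirically derived connectivity constraints.

import torch

torch.manual_seed(0)
n_soma, n_dend = 64, 256                      # hypothetical counts
# structured binary mask: each dendrite feeds exactly one soma
mask = torch.zeros(n_soma, n_dend)
owner = torch.arange(n_dend) % n_soma         # dendrite-to-soma assignment
mask[owner, torch.arange(n_dend)] = 1.0

W = torch.nn.Parameter(torch.randn(n_soma, n_dend) * 0.01)
x_dend = torch.rand(n_dend)                   # dendritic activations
soma_in = (W * mask) @ x_dend                 # only permitted connections contribute
print(soma_in.shape)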

Results
To assess the efficacy of the proposed model, we conducted a series of experiments on benchmark image and time-series datasets. The results indicate that our brain-inspired RNN matches the highest accuracy achieved by a conventional (vanilla) RNN while utilizing fewer trainable parameters. Furthermore, when the number of trainable parameters is increased, our model surpasses the peak performance of the vanilla RNN by a margin of 3-20%, depending on the dataset. In contrast, the conventional RNN exhibits overfitting tendencies, leading to significant performance degradation.

Discussion
In summary, we present a biologically inspired RNN architecture that incorporates dendritic processing and sparse connectivity constraints. Our findings demonstrate that the proposed model outperforms traditional RNNs in both image and time-series classification tasks. Additionally, the model achieves competitive performance with fewer parameters, highlighting the potential role of dendritic computations in machine learning. These results align with experimental evidence suggesting the critical contribution of dendrites to efficient neural processing, thereby offering a promising direction for future ANN development.



Acknowledgements
This work was supported by the NIH (GA: 1R01MH124867-04), the TITAN ERA Chair project under Contract 101086741 within the Horizon Europe Framework Program of the European Commission, and the Stavros Niarchos Foundation and the Hellenic Foundation for Research and Innovation under the 5th Call of Science and Society "Action Always strive for excellence – Theodoros Papazoglou" (DENDROLEAP 28056).
References
[1] Abdolrasol, M. G, et al. (2021). Artificial neural networks based optimization techniques: A review. Electronics, 10(21), 2689.
[2] Sejnowski, T. J. (2020). The unreasonable effectiveness of deep learning in artificial intelligence. PNAS, 117(48), 30033-38.
[3] Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab, 21(10), 1133-1145.
[4] Poirazi, P., & Papoutsi, A. (2020). Illuminating dendritic function with computational models. Nat Rev Neurosci, 21(6), 303-21.

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P309: Macaque retina simulator
Tuesday July 8, 2025 17:00 - 19:00 CEST
P309 Macaque retina simulator


Simo Vanni*1, Henri Hokkanen2

1Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
2Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland

*Email: simo.vanni@helsinki.fi


Introduction


We have been building a phenomenological macaque retina simulator with the aim of providing biologically plausible spike trains for downstream visual cortex simulations. Containing a wide array of biologically relevant information is key to having an accurate starting point for building the next step in the visual processing cascade. The primate retina dissects visual scenes into three major high-resolution retinocortical streams. The most numerous retinal ganglion cell (RGC) types, midget and parasol cells, are further divided into ON and OFF subtypes. These four RGC populations have well-known anatomical and physiological asymmetries, which are reflected in the spike trains received by downstream circuits. Computational models of the visual cortex, however, rarely take these asymmetries into account.


Methods

We collected published data on ganglion cell densities [1] and dendritic diameters [2,3] as a function of eccentricity for parasol and midget ON & OFF types. Spatial receptive fields were modelled as an elliptical difference-of-Gaussians model or a spatially detailed variational autoencoder model, based on spatiotemporal receptive field data [4,5]. The three temporal receptive field models include a linear temporal filter, dynamic contrast gain control [6-8], and a subunit model accounting for both center subunit [9] and surround [10] nonlinearity and fast cone adaptation [11]. Finally, we included cone noise, quantified by [12], in all three temporal models to account for correlated background firing in distinct ganglion cell types [13].
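For illustration, a minimal circular difference-of-Gaussians receptive field can be written as follows; the study uses an elliptical variant with eccentricity-dependent parameters, and the radii and gains below are placeholders.

import numpy as np

def dog_rf(xy, center, r_c, r_s, k_c=1.0, k_s=0.9):
    # difference of Gaussians: excitatory center minus inhibitory surround
    d2 = np.sum((xy - center) ** 2, axis=-1)
    return (k_c / (np.pi * r_c**2) * np.exp(-d2 / r_c**2)
            - k_s / (np.pi * r_s**2) * np.exp(-d2 / r_s**2))

grid = np.stack(np.meshgrid(np.arange(64), np.arange(64)), axis=-1).astype(float)
rf = dog_rf(grid, center=np.array([32.0, 32.0]), r_c=3.0, r_s=9.0)
print(rf.shape, rf.max(), rf.min())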
Results

Figure 1 A and B show how synthetic receptive fields are arranged into a two-dimensional array. The temporal impulse response (C) for the dynamic gain control model has kernel dynamics varying with contrast. Parasol and midget unit responses for temporal frequency and contrast show expected behavior, with parasol sensitivity peaking at a higher temporal frequency and showing compressive non-linearity with increasing contrast (D, F). Dynamical temporal model responses to full-field luminance onset show expected onset and offset dynamics (F, G). The drifting sinusoidal grating at 4Hz evokes oscillatory response at the stimulus frequency (H).


Discussion

Our retina model can be adjusted for varying cone noise and unit gain (firing rate) levels and accepts mp4 videos as stimulus input. The software is programmed in Python and supports GPU acceleration. Moreover, we have strived for a modular code design to support future development.
Our model has multiple limitations. It is monocular and accounts for the temporal hemifield only. It assumes a stable luminance adaptation state and does not consider chromatic input or eye movements. Optical aberration is implemented with a fixed spatial filter.
Despite these limitations, we believe it provides a physiologically meaningful basis for simulations of the primate visual cascade.





Figure 1. A) Synthetic parasol ON receptive fields (RFs). B) RF repulsion equalizes coverage. C) Linear fixed and nonlinear contrast gain control model temporal impulse responses. D, F) Parasol and midget unit responses for temporal frequency and contrast. E) Responses for varying contrasts. G) Responses for luminance onset and offset. H) Responses for drifting sinusoidal grating.
Acknowledgements

We thank Petri Ala-Laurila for insightful comments on model construction. This work was supported by Academy of Finland grant No. 361816.


References

[1] https://doi.org/10.1038/341643a0
[2] https://doi.org/10.1016/0306-4522(84)90006-X
[3] https://doi.org/10.1002/cne.902890308
[4] https://doi.org/10.1080/713663221
[5] https://doi.org/10.1038/nature09424
[6] https://doi.org/10.1017/S0952523800008853
[7] https://doi.org/10.1017/S0952523899162151
[8] https://doi.org/10.1113/jphysiol.1987.sp016531
[9] https://doi.org/10.1016/j.neuron.2016.05.006
[10] https://doi.org/10.7554/eLife.38841
[11] https://doi.org/10.1523/JNEUROSCI.0793-21.2021
[12] https://doi.org/10.1038/nn.3534
[13] https://doi.org/10.1038/nn.2927



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P310: Feedback input to apical dendrites of L5 pyramidal cells leads to a shift towards a resonance state in V1 cortex
Tuesday July 8, 2025 17:00 - 19:00 CEST
P310 Feedback input to apical dendrites of L5 pyramidal cells leads to a shift towards a resonance state in V1 cortex

Francescangelo Vedele*1, Margaux Calice2, Simo Vanni1

1Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
2Centre Giovanni Borelli - CNRS UMR 9010, Université Paris Cité, France

*Email: francescangelo.vedele@helsinki.fi
Introduction: To make sense of the abundance of visual information coming in from the outside world, cortical and subcortical structures operate on stored models of the environment that are constantly compared with new information [1]. The cortical structures for vision are tightly interconnected and rely on multiple subregions to capture different facets of information. The SMART model by Grossberg and Versace [2] aims to build a simulation framework providing a circuit-level perspective on learning, expectation, and processing of visual information in the brain. While cellular details are well understood at the microscopic level, computational accounts linking visual system states to higher-order processes are scarce.



Methods: The macaque was chosen as a biological model because of its close evolutionary relationship to humans [3]. Computer simulations of macaque cortical patches were implemented using CxSystem2 [4,5], a cortical simulation framework based on Brian2 [6]. The SMART model includes cells in V1 (layers L2/3, L4e, L5, and L6), dendrites of compartmental neurons reaching L1, and thalamic specific, nonspecific, and reticular nuclei. Simulations were run for a duration of 2 seconds. Spike times and cell membrane voltage were monitored. Power spectral density (PSD) spectra of the membrane voltage were obtained using Welch’s method. A feedback current of 1.5x or 2.5x the rheobase was injected into the apical dendrite of L5 pyramidal cells (located in L1).
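The PSD computation itself follows a standard recipe; a minimal sketch with SciPy's Welch estimator is shown below, using a toy oscillatory trace in place of the simulated membrane voltage (sampling rate and segment length are assumptions).

import numpy as np
from scipy.signal import welch

fs = 1000.0                              # Hz, assumed sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)        # 2-second trace, as in the simulations
rng = np.random.default_rng(0)
v = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # toy Vm

f, psd = welch(v, fs=fs, nperseg=512)    # Welch's method: averaged windowed periodograms
print(f[np.argmax(psd)])                 # peak near 10 Hz for this toy trace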

Results: The SMART model was first simulated with bottom-up sensory input and a weak feedback current. In this state, all layers output short (~150 ms) bursts followed by longer periods of oscillatory activity (~500 ms). The PSD plots show a broad, low-frequency peak in the alpha/beta frequency bands (up to 30 Hz) across layers. Upon injection of a stronger feedback current, the model shifts to a resonance mode characterized by higher firing rates and a broad PSD peak in the gamma range (20-70 Hz) across layers. Therefore, strong feedback input shifts the state of the system from resting to a high-frequency resonance mode. This might be related to population synchrony, which may bind features in different parts of the visual field [7].

Discussion: The SMART model provides a flexible way to model cortical coordination and feedback. Our simulations show how a weak input from higher cortical areas leaves the system in a disengaged state, akin to a mismatch between expectation and reality. By injecting a strong current to mimic feedback from higher cortical areas, the simulated system enters a resonant state, as in the biological brain, establishing a condition that supports learning and plasticity. While this model is informative for studying single-region cortical dynamics, we plan to integrate V2 and V5 with the current V1 model, aiming to simulate hierarchical cortical processing of visual information.






Acknowledgements
This work was supported by Academy of Finland project grant 361816.
References

[1] https://doi.org/10.1038/nrn2787
[2] https://doi.org/10.1016/j.brainres.2008.04.024
[3] https://doi.org/10.1093/cercor/bhz322
[4] https://doi.org/10.1162/neco_a_01120
[5] https://doi.org/10.1162/neco_a_01188
[6] https://doi.org/10.7554/eLife.47314
[7] https://doi.org/10.1038/338334a0
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P311: Adjustment of Vesicle Equation in the Modified Stochastic Synaptic Model to Correct Unexpected Behaviour in Frequency Response of Synaptic Efficacy
Tuesday July 8, 2025 17:00 - 19:00 CEST
P311 Adjustment of Vesicle Equation in the Modified Stochastic Synaptic Model to Correct Unexpected Behaviour in Frequency Response of Synaptic Efficacy

Ferney Beltran-Velandia*1,2,3, Nico Scherf2,3, Martin Bogdan1,2


1Neuromorphic Information Processing Department, Leipzig University, Leipzig, Germany
2Center for Scalable Data Analytics and Artificial Intelligence ScaDS.AI, Dresden/Leipzig, Humboldtstrasse 25, Leipzig, Germany
3Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, Leipzig, Germany

*Email: beltran@informatik.uni-leipzig.de

Introduction
Synaptic Dynamics (SD) describes the plasticity properties of synapses on the timescale of milliseconds. Among different SD models, the Modified Stochastic Synaptic Model (MSSM) is a biophysical one that can simulate the SD mechanisms of facilitation and depression [1]. Further analysis of the parameters found in [2] points to an unexpected behaviour in the frequency response of the MSSM. This behaviour is also studied in the time domain, which points to an adjustment in the dynamics of vesicle release. This correction leads to a version of the MSSM without the unexpected behaviour, better balancing the equations and allowing new sets of parameters to be found to simulate examples of facilitation and depression.

Methods
The MSSM represents the dynamics of synapses by modelling calcium dynamics, vesicle release, release probability, neurotransmitter buffering, and the postsynaptic contribution with differential equations and 10 parameters. In previous work [2], a pipeline was used to tune the parameters of the MSSM when simulating two types of synapses: pyramidal to interneuron (facilitation) and the Calyx of Held (depression). The parameters are analysed using the frequency response of the synaptic efficacy, ranging from 1-100 Hz [3]. The unexpected behaviour is defined by the frequency from which a discontinuity appears. Further analysis in the time domain allows us to propose the adjusted MSSM, which corrects this behaviour and balances its equations.
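To illustrate the frequency-response measure (steady-state synaptic efficacy as a function of stimulation rate over 1-100 Hz), here is a minimal sketch using the classic depression model of [3] as a stand-in; the MSSM itself has more state variables, and the parameters below are placeholders.

import numpy as np

def steady_state_release(rate, U=0.5, tau_rec=0.8):
    # release per spike at steady state for a regular train at `rate` Hz,
    # using the depression model of [3]; U and tau_rec are placeholders
    dt = 1.0 / rate
    x = 1.0                                   # fraction of available resources
    for _ in range(200):                      # iterate the spike-to-spike map
        x = 1.0 + (x * (1.0 - U) - 1.0) * np.exp(-dt / tau_rec)
    return U * x

for rate in (1, 5, 10, 20, 50, 100):
    print(rate, round(steady_state_release(rate), 4))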

Results
Applying the frequency response analysis to the parameters for the studied SD mechanisms shows that some responses exhibit the unexpected behaviour (Fig. 1a-b). In the time domain, this behaviour is associated with an increase in neurotransmitters released even though the number of vesicles is at its steady state (Fig. 1c). An adjustment to the vesicle release equation corrects this behaviour by making the input contribution dependent on the current number of released vesicles (Fig. 1d). To validate our approach, the pipeline was run for the adjusted MSSM, finding 6000 new sets of parameters for both SD mechanisms. The frequency response for the new parameters is depicted in Fig. 1e-f, showing the expected behaviour.

Discussion
The adjustment of the MSSM not only corrects the unexpected behaviour in the frequency and time domains but also balances the vesicle release equation: in the original model, the probability of release had the same units as the vesicles. With the adjustment, the probability of release recovers its dimensionless nature. The new parameter distributions show that some parameters have more influence in distinguishing between facilitation and depression, especially those associated with release probability and neurotransmitter buffering. Finally, this work represents a step towards the integration of the MSSM into Spiking Neural Networks, enhancing their computational capabilities with the properties of Synaptic Dynamics.





Figure 1. Figure 1. Unexpected behaviour of the MSSM: a-b) frequency responses with unexpected behaviour. In red, an example of the discontinuity of the efficacy. c) Time response: N(t) increase even if V(t) is in steady-state, causing the unexpected behaviour. d) Time response of the adjusted MSSM showing the correction. e-f) Frequency response of new parameters with the unexpected behaviour corrected.
Acknowledgements
I want to thank the team of the Neuromorphic Information Processing Group, especially Patrick Schoefer and Dominik Krenzer, for all the fruitful discussions. This work was partially funded by the German Federal Ministry of Education and Research (BMBF) within the project (ScaDS.AI) Dresden/Leipzig (BMBF grant 01IS18026B).
References
[1] El-Laithy, K. (2012). Towards a brain-inspired information processing system: Modelling and analysis of synaptic dynamics. LAP Lambert Academic Publishing.
[2] Beltran, F., Scherf, N., & Bogdan, M. (2025). A pipeline based on differential evolution for tuning parameters of synaptic dynamics models. (To appear in Proceedings of the 33rd ESANN)
[3] Markram, H., Wang, Y., & Tsodyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences, 95 (9), 5323-5328. doi: 10.1073/pnas.95.9.5323
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P312: Modeling Effects of Norepinephrine on Respiratory Neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P312 Modeling Effects of Norepinephrine on Respiratory Neurons

Sreshta Venkatakrishnan*1, Andrew Kieran Tryba2, Alfredo J. Garcia, 3rd3, and Yangyang Wang1

1Department of Mathematics, Brandeis University, Waltham, MA, USA
2Department of Pediatrics, Section of Neurology, The University of Chicago, Chicago, IL, USA
3Institute for Integrative Physiology, The University of Chicago, Chicago, IL, USA

*Email: sreshtav@brandeis.edu


Introduction
The preBötzinger complex (pBC) within the mammalian brainstem, comprised of intrinsically bursting and spiking neurons, generates the neural rhythm that drives the inspiratory phase of respiration. Norepinephrine (NE), a neuromodulator, differentially modulates synaptically isolated pBC neurons [1]. In cadmium (Cd)-insensitive N-bursting neurons, NE stimulates burst frequency without affecting burst duration. In Cd-sensitive C-bursting neurons, NE increases duration while minimally affecting frequency. NE also induces conditional bursts in tonic spiking neurons, while silent neurons remain inactive in the presence of NE. In this work, we propose a novel mechanism to simulate the effects of NE in single pBC neurons.

Methods
The pBC neuron model we consider is a single-compartment dynamical system with Hodgkin-Huxley-style conductances, incorporating membrane potential and calcium dynamics, adapted from previous works [2,3,4]. Of particular interest among the ionic currents incorporated in this model are two candidate burst-generating currents: the Cd-insensitive persistent sodium current (INaP) and the Cd-sensitive calcium-activated nonspecific cationic current (ICAN). Building on previous efforts to model NE via modulation of ICAN [2,3] and on experimental evidence in [5], we propose that NE application in the model also leads to an increase in the flux of [Ca2+] between the cytosol and the ER, modeled via inositol trisphosphate (IP3).
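As an illustration of the ICAN formulation common in pBC models such as [2,3], a minimal sketch with Hill-type calcium activation is given below; the conductance, reversal potential, and Hill parameters are placeholders, not the fitted values of this model.

import numpy as np

def i_can(v, ca, g_can=0.7, e_can=0.0, k_ca=0.74, n=0.97):
    # CAN current: Hill-type activation by cytosolic calcium times driving force;
    # all parameter values are illustrative placeholders
    m = ca**n / (ca**n + k_ca**n)
    return g_can * m * (v - e_can)           # toy units (nS * mV)

print(i_can(v=-50.0, ca=0.5))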
Results
The most important finding of this study is the identification of potential mechanisms underlying the NE-mediated induction of tonic spiking neurons to CAN-dependent bursting. Our model predicts that this conditional bursting requires an increase in both IP3 and ICAN. This mechanism also induces an increase in N-burst frequency and C-burst duration, while N-burst duration remains unaltered. While modulating ICAN increases C-burst frequency in our model, the opposing effect brought by modulating IP3 effectively counters this increase and maintains frequency. Furthermore, we also identify discrete parameter regimes where silent neurons continue to remain inactive under NE. These results are consistent with [1].
Discussion
Conditional bursting has been previously described in rhythmic networks; however, the underlying mechanisms are often unknown. Our model predicts a new mechanism involving NE signaling that elevates both IP3 and ICAN in a subset of pBC neurons. These predictions need to be experimentally tested by blocking either IP3 or ICAN and testing whether subsequent NE modulation can no longer recruit this subset of pBC neurons to burst. Moreover, while our model predictions for bursting neurons mostly agree with the experiments in [1], we also notice some discrepancies with respect to burst frequency and duration. Further investigation is required to analyze and understand these disparities.






Acknowledgements
This work has been supported by NIH R01DA057767: CRCNS: Evidence-based modeling of neuromodulatory action on network properties, granted to Yangyang Wang (PI) and Alfredo Garcia at UChicago.
References
[1] https://doi.org/10.1152/jn.01308.2005
[2] https://doi.org/10.1007/s10827-010-0274-z
[3] https://doi.org/10.1007/s10827-012-0425-5
[4] https://doi.org/10.1063/1.5138993
[5] https://doi.org/10.1152/ajpendo.1985.248.6.E633
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P313: Individual differences in neural representations of face dimensions: Insights from Super-Recognisers
Tuesday July 8, 2025 17:00 - 19:00 CEST
P313 Individual differences in neural representations of face dimensions: Insights from Super-Recognisers

Martina Ventura*1, Tijl Grootswagers1,3, Manuel Varlet1,2, David White4, James D. Dunn4, Genevieve L. Quek1

1The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
2School of Psychology, Western Sydney University, Sydney, Australia
3School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
4School of Psychology, The University of New South Wales, Sydney, Australia

*Email: martina.ventura@westernsydney.edu.au

Introduction
Face processing is crucial for social interaction, with faces conveying information about identity, emotions, sex, age, and intentions [1]. Recent research has revealed significant individual differences in face recognition ability, with some people displaying exceptional face recognition skills - defined as Super-Recognisers [2,3]. However, the brain mechanisms underpinning their superior ability remain unknown, including whether their exceptional face recognition is restricted to identity or also extends to other face dimensions such as sex and age.

Methods
Here we use electroencephalography (EEG) to investigate the neural processes underlying the representation of face dimensions in Super-Recognisers (N = 12) and Typical-Recognisers (N = 17). We recorded 64-channel EEG while participants saw 400 naturalistic face images (40 distinct identities stratified by sex, age, and ethnicity) in a rapid 5 Hz randomized stream. We used multivariate pattern analysis to measure the strength and temporal dynamics of the neural encoding of different facial dimensions in both Super- and Typical-Recognisers.
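A minimal sketch of time-resolved decoding in the spirit of this analysis is shown below, using a linear classifier on surrogate data; the study's actual classifier, cross-validation scheme, and preprocessing are not specified in the abstract and may differ.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 50               # toy dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))  # surrogate EEG epochs
y = rng.integers(0, 2, n_trials)                          # surrogate binary labels

# decode the label from the channel pattern separately at every time point
acc = [cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
       for t in range(n_times)]
print(max(acc))                         # ~0.5 for this unstructured surrogate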


Results
Our results showed that face identity decoding was stronger for Super-Recognisers than Typical-Recognisers starting around 300 ms - a time window typically associated with identity-related processing. In contrast, no differences were found between the groups’ decoding profiles for face age, face sex, or face ethnicity.
Discussion
These results suggest that the Super-Recognisers' advantage may be limited to face identity processing, rather than reflecting a general advantage in face dimension processing. These findings provide a crucial first step toward understanding the neural mechanisms underlying their exceptional face recognition ability.





Acknowledgements
We sincerely appreciate the time and effort of all the participants in this study. Your willingness to take part was essential in making this research possible. Thank you for your valuable contribution.
References
1. Tsao, D. Y., & Livingstone, M. S. (2008). Mechanisms of face perception. Annual Review of Neuroscience, 31, 411-437. https://doi.org/10.1146/annurev.neuro.30.051606.094238
2. Russell, R., Duchaine, B., & Nakayama, K. (2009). Super-recognizers: people with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16(2), 252-257. https://doi.org/10.3758/PBR.16.2.252
3. Dunn, J. D., Summersby, S., Towler, A., Davis, J. P., & White, D. (2020). UNSW Face Test: A screening tool for super-recognizers. PLoS ONE, 15(11), e0241747. https://doi.org/10.1371/journal.pone.0241747
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P314: Neural compensation drives functional resilience in a cerebellar model of schizophrenia
Tuesday July 8, 2025 17:00 - 19:00 CEST
P314 Neural compensation drives functional resilience in a cerebellar model of schizophrenia

Alberto A. Vergani*1, Pawan Faris1, Claudia Casellato1, Marialaura De Grazia1 and Egidio U. D'Angelo1,2

1Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
2Digital Neuroscience Center, IRCCS Mondino Foundation, Pavia, Italy

*Email: albertoarturo.vergani@unipv.it

Introduction

Schizophrenia (SZ) affects ~1% of the global population (~24 million) [1]. While cortical and subcortical alterations are well-documented, the cerebellum’s role in cognitive dysfunction (CAS) remains underexplored [2]. SZ-related cerebellar degeneration involves neuron loss, reduced dendritic complexity, and weakened connectivity [3], often countered by compensatory hyperconnectivity [4-8]. Following the 'cognitive dysmetria' hypothesis [9], this study quantifies structural and functional changes in a cerebellar network model under atrophy and compensatory synaptogenesis [10-11].


Methods
Using the Brain Scaffold Builder (BSB, [12]), we implemented an atrophy algorithm in a mouse cerebellum model, modulating cellular and network changes via the Atrophy Factor (AF, 0-60%). By preserving electrical cell properties, it simulated schizophrenic neurodegeneration while ensuring anatomical plausibility. Atrophy induced morphological shrinkage, dendritic pruning, radius reduction, neural density loss, and cortical thinning. Changes were quantified via apoptosis, the dendritic complexity index (DCI, [13]), and connectivity metrics. Compensation via synaptogenesis increased synapse count with AF. The altered connectome (~30K neurons, EGLIF, [14]) was simulated in NEST [15] under baseline conditions (4 Hz mossy fiber stimulation) to assess firing rate changes.
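A toy sketch of the atrophy-compensation bookkeeping is given below; the linear scaling of synapse count with AF is an assumption for illustration, whereas the actual BSB algorithm additionally prunes dendrites, reduces radii, and thins the cortex as described above.

def apply_atrophy(n_cells, syn_per_pair, af):
    # toy rule: remove a fraction af of neurons (apoptosis) and scale up
    # synapses between survivors (compensatory synaptogenesis);
    # linear compensation is an assumption, not the BSB algorithm itself
    survivors = int(round(n_cells * (1.0 - af)))
    syn = syn_per_pair * (1.0 + af)
    return survivors, syn

for af in (0.0, 0.1, 0.25, 0.6):
    print(af, apply_atrophy(30000, 4.0, af))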

Results
Atrophy altered the network structure, reducing neurons, dendritic complexity, connectivity, and synapse count. Compensation offset this by increasing synapses between surviving neuron pairs. Functional changes emerged from the structural alterations, with excitability rising, reversing at ~10% AF, and zero-crossing at ~25% AF. Granule and Golgi cells showed opposite trends, while Purkinje, stellate, and basket cells were similar in firing change. DCN-I neurons gradually reduced activity, with compensation slightly delaying the decline. DCN-P exhibited the highest resilience until ~25% AF, where compensation collapsed, triggering a firing surge that disrupts output to the telencephalon.

Discussion
This study examined cerebellar network degeneration while preserving electrical properties, highlighting structural changes, synaptic reorganization, and atrophy-related firing dynamics. Synaptic compensation mitigates pathology-driven neuronal damage, with a transition from hyper- to hypo-excitability, particularly in DCN-P, resembling Stern’s inflection point in neurodegenerative resilience [16]. Future work will explore atrophy-compensation effects on stimulus decoding and learning (eye blink conditioning, [17]), integrate with The Virtual Brain [18], compare with MEA recordings [19], and test therapeutic strategies like TMS and pharmacological interventions to enhance cognitive reserve.





Acknowledgements
Work supported by #NEXTGENERATIONEU (NGEU) and funded by MUR, National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) – A Multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022). The VBT Project has received funding from the European Union's Research and Innovation Program Horizon Europe under grant agreement No 101137289.
References
1. https://doi.org/10.1001/jamapsychiatry.2019.3360
2. https://doi.org/10.3389/fncel.2024.1386583
3. https://doi.org/10.1016/j.biopsych.2008.01.003
4. https://doi.org/10.1093/schbul/sbac120
5. https://doi.org/10.1038/s41398-023-02512-4
6. https://doi.org/10.1016/j.pscychresns.2018.03.010
7. https://doi.org/10.1016/j.schres.2022.12.041
8. https://doi.org/10.1038/s41386-018-0059-z
9. https://doi.org/10.1093/oxfordjournals.schbul.a033321
10. https://doi.org/10.1007/s12311-019-01091-9
11. https://doi.org/10.1523/JNEUROSCI.0379-23.2023
12. https://doi.org/10.1038/s42003-022-04213-y
13. https://doi.org/10.1038/s42003-023-05689-y
14. https://doi.org/10.3389/fninf.2018.00088
15. https://doi.org/10.5281/ZENODO.4018718
16. https://doi.org/10.1016/j.neurobiolaging.2022.10.015
17. https://doi.org/10.3389/fpsyt.2015.00146
18. https://doi.org/10.1093/nsr/nwae079
19. https://doi.org/10.1371/journal.pcbi.1004584
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P315: Towards brain scale simulations using NEST GPU
Tuesday July 8, 2025 17:00 - 19:00 CEST
P315 Towards brain scale simulations using NEST GPU

José Villamar*1,2, Gianmarco Tiddia3, Luca Sergi3,4, Pooja Babu1,5, Luca Pontisso6, Francesco Simula6, Alessandro Lonardo6, Elena Pastorelli6, Pier Stanislao Paolucci6, Bruno Golosio3,4, Johanna Senk1,7

1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, Monserrato, Italy
4Dipartimento di Fisica, Università di Cagliari, Monserrato, Italy
5Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Jülich Research Centre, Jülich, Germany
6Istituto Nazionale di Fisica Nucleare, Sezione di Roma, Roma, Italy
7Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
*Email: j.villamar@fz-juelich.de


Introduction

Efficient simulation of large-scale spiking neuronal networks is important for neuroscientific research, and both the simulation speed and the time it takes to instantiate the network in computer memory are key factors. NEST GPU is a GPU-based simulator under the NEST Initiative, written in CUDA C++, that demonstrates high simulation speeds with models of various network sizes on single-GPU and multi-GPU systems [1,2,3]. On the path toward models of the whole brain, neuroscientists show an increasing interest in studying networks that are larger by several orders of magnitude. Here, we show the performance of our simulation technology with a scalable network model across multiple network sizes approaching the magnitude of the human cortex.
Methods
For this, we propose a novel method to efficiently instantiate large networks on multiple GPUs in parallel. Our approach relies on the deterministic initial state of pseudo-random number generators (PRNGs). While requiring synchronization of network construction directives between MPI processes and a small memory overhead, this approach enables dynamical neuron creation and connection at runtime. The method is evaluated through a two-population recurrently connected network model designed for benchmarking an arbitrary number of GPUs while maintaining first-order network statistics across scales.
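A minimal sketch of the idea, as we read it: if every rank seeds its generator deterministically from a global seed plus a per-directive counter, each rank can regenerate the same random connectivity and keep only the targets it owns, with no exchange of connection data. The function name and the round-robin ownership rule are illustrative assumptions, not NEST GPU's actual code.

```python
import numpy as np

GLOBAL_SEED = 12345

def connect_directive(directive_id, n_pre, n_post, p, rank, n_ranks):
    """Every rank regenerates the same random connectivity for a given
    directive and keeps only the post-synaptic targets it owns."""
    rng = np.random.default_rng([GLOBAL_SEED, directive_id])  # deterministic
    mask = rng.random((n_pre, n_post)) < p         # identical on every rank
    owned = np.arange(n_post) % n_ranks == rank    # round-robin ownership
    pre, post_local = np.nonzero(mask[:, owned])
    return pre, np.nonzero(owned)[0][post_local]

# Two "ranks" construct disjoint, mutually consistent pieces of one network
# without exchanging any connection data.
for rank in range(2):
    pre, post = connect_directive(0, n_pre=10, n_post=10, p=0.2,
                                  rank=rank, n_ranks=2)
    print(f"rank {rank}: {len(pre)} local synapses")
```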
Results
The benchmarking model was tested during an exclusive reservation of the LEONARDO Booster cluster. Keeping the number of neurons and incoming synapses per neuron per GPU constant, we performed several simulation runs using from 400 up to 12,000 GPUs (the full system) in parallel. Each GPU device hosted approximately 281 thousand neurons and 3.1 billion synapses. Our results show network construction times of less than a second using the full system and stable dynamics across scales. At full system scale, the network model comprised approximately 3.37 billion neurons and 37.96 trillion synapses (~25% of the human cortex).

Discussion
To conclude, our novel approach enabled network model instantiation at magnitudes nearing human cortex scale while keeping construction times fast, averaging 0.5 s across trials. The stability of dynamics and performance across scales obtained with our model is a proof of feasibility, paving the way for biologically more plausible and detailed brain-scale models.




Acknowledgements
ISCRA for awarding access to the LEONARDO supercomputer (EuroHPC Joint Undertaking) via the BRAINSTAIN - INFN Scientific Committee 5 project, hosted by CINECA (Italy); HiRSE_PS, Helmholtz Platform for Research Software Engineering - Preparatory Study (2022-01-01 - 2023-12-31); the Horizon Europe Grant 101147319, Joint Lab SMHB; FAIR CUP I53C22001400006 Italian PNRR grant.
References

1. https://doi.org/10.3389/fncom.2021.627620
2. https://doi.org/10.3389/fninf.2022.883333
3. https://doi.org/10.3390/app13179598


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P316: Characterization of thalamic stimulus on a cortical column
Tuesday July 8, 2025 17:00 - 19:00 CEST
P316 Characterization of thalamic stimulus on a cortical column

Pablo Vizcaíno-García*1,2,3, Fernando Maestú1,3, Alireza Valizadeh1, Gianluca Susi1,2,3

1Zapata-Briceño Institute for Human Intelligence, Madrid, Spain.
2Department of Structure of Matter, Thermal Physics and Electronics, School of Physics, Complutense University of Madrid, Madrid, Spain
3Center for Cognitive and Computational Neuroscience, Complutense University of Madrid, Madrid, Spain

*Email: pabvizca@ucm.es

Introduction

Cortical columns are fundamental organizational units in cerebral cortical processing and development [1]. They regularly receive external stimuli from both higher-order areas and the thalamus. Different hypotheses have been proposed regarding the function of the thalamus: it is considered to act as a generator of the alpha rhythm [2], and it has also been proposed to play a role in sensory gating. In this work we focus on the latter process. We investigate how stimuli propagate from one layer of the cortical column to the entire unit, examining how alpha and gamma rhythms may be disrupted or enhanced across the different layers. We build upon the cortical column design of Potjans & Diesmann [3].

Methods
We implemented an interconnected set of fully spiking cortical columns, each encompassing 80,000 neurons and 0.3 billion synapses. The inter-column connections were derived from experimental diffusion magnetic resonance imaging data. The columns’ background stimulus was modified so that they start in a high-coherence state, which facilitates characterising the response. This characterisation was done by injecting a pulse packet into L4E and obtaining phase response curves (PRCs), which characterise the delays produced by the same stimulus when injected at different phases of the activity [4]. A 1 ms-wide stimulus was injected at different phases of the gamma period of L4E.
Results
Phases were identified by filtering the activity of the unstimulated cortical column in the gamma band (45-80 Hz) and applying the Hilbert transform. The resulting PRC curves can be observed in Fig. 1, which presents both the raster plot of a stimulated cortical column and the resulting PRC. Each dot represents one spike of one neuron at the corresponding time, and the superimposed line is the gamma-filtered population activity. The figure shows a sudden halt of the gamma band after stimulation of L4E. The PRC was computed as an ensemble average over 10 trials and highlights L23E as the only population that is consistently delayed by the input stimulus.
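For illustration, a minimal sketch of the phase-extraction step on synthetic data, assuming a 1 kHz sampling rate (SciPy-based; not the authors' code):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000.0                               # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
rate = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.normal(size=t.size)  # toy activity

# Band-pass filter the population activity in the gamma band (45-80 Hz).
sos = butter(4, [45, 80], btype="bandpass", fs=fs, output="sos")
gamma = sosfiltfilt(sos, rate)

# Instantaneous phase from the analytic signal.
phase = np.angle(hilbert(gamma))

# Phase at which a stimulus injected at t_stim = 0.5 s would arrive.
t_stim = 0.5
print(f"stimulation phase: {phase[int(t_stim * fs)]:.2f} rad")
```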
Discussion
From these early results, two main facts have become evident. First, a burst-suppression phenomenon emerges as a response to stimulating L4E, a layer that mostly receives its inputs from the thalamic nuclei. Second, the PRC of L23E shows the largest time lag. Another avenue to explore is the variation of these curves in a less coherent network state. This work will seek to elucidate the mechanisms behind both phenomena, comparing the results with the well-studied thalamocortical feedback loop. The investigation will contribute to a better understanding of cortical column dynamics and will additionally help clarify the effects of communication between the thalamus and the cortex.




Figure 1. Left: Raster plot of cortical column activity. Each dot represents a spike, and each colour a neuronal population. Superimposed on the plot is the activity of each population, computed using a Gaussian window over spike times and normalised for the plot. Right: Phase response curve, measuring the time at which each population reaches its first activity maximum after stimulus injection into L4E.
Acknowledgements
This work was supported by the Zapata-Briceño Institute of Science.
References
1. https://doi.org/10.1016/B978-0-12-814411-4.00005-6
2. https://doi.org/10.34734/FZJ-2023-02822
3. https://doi.org/10.1093/cercor/bhs358
4. https://doi.org/10.3389/fninf.2010.00006
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P317: Critical dynamics improve performance in deep learning
Tuesday July 8, 2025 17:00 - 19:00 CEST
P317 Critical dynamics improve performance in deep learning


Simon Vock*1,2,3,4,5, Christian Meisel1,2,4,5,6

1Computational Neurology Lab, Department of Neurology, Charité – Universitätsmedizin, Berlin, Germany
2Berlin Institute of Health, Berlin, Germany
3Faculty of Life Sciences, Humboldt University Berlin, Germany
4Bernstein Center for Computational Neuroscience, Berlin, Germany
5NeuroCure Cluster of Excellence, Charité – Universitätsmedizin, Berlin, Germany
6Center for Stroke Research, Berlin, Germany

*Email: simon.vock@charite.de
Introduction

Deep neural networks (DNNs) have revolutionized AI, yet their vast parameter space makes training difficult, often leading to inefficiencies or failure. Their optimization remains largely heuristic, relying on trial-and-error design [1,2]. In biological networks, recent evidence suggests that critical phase transitions - balancing signal propagation to avoid die-out or runaway excitation - are key for effective learning [3,4]. Inspired by this, we analyze 80 modern DNNs and uncover a fundamental link between performance and criticality, unifying diverse architectures under a single theoretical perspective. Building on this, we propose a novel training approach that guides DNNs toward criticality, enhancing performance on multiple datasets.
Methods
We characterize criticality in DNNs using three key metrics: a maximal dynamic range Δ [5], a branching parameter σ = 1 [6], and a largest Lyapunov exponent λ₀ = 0 [7]. Our statistical analysis employs multiple tests, including Spearman's rank, Wilcoxon signed-rank, Mann-Whitney U, and linear mixed-effects models. We investigate 80 highly optimized DNNs from TorchVision pre-trained on the ImageNet-1k dataset [8]. We use the Modified National Institute of Standards and Technology (MNIST) dataset, a standard benchmark for computer vision. Building on our findings, we develop a novel training objective that specifically drives models toward criticality during the training process.
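As a pointer to how one of these metrics can be estimated, here is a naive sketch of the branching parameter σ on a synthetic activity trace (σ = 1 at criticality). The paper's actual estimators for DNN activations are not reproduced; this is the textbook ratio estimator only.

```python
import numpy as np

def branching_parameter(activity):
    """Naive estimate of sigma: mean ratio of activity in consecutive bins."""
    a = np.asarray(activity, dtype=float)
    prev, nxt = a[:-1], a[1:]
    valid = prev > 0                      # avoid division by zero
    return (nxt[valid] / prev[valid]).mean()

rng = np.random.default_rng(1)
counts = rng.poisson(10, size=1000)       # toy stationary activity
print(f"sigma ~ {branching_parameter(counts):.2f}")
```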
Results
We derive a set of measures quantifying the distance to criticality on DNNs and analyze 80 pre-trained DNNs from Torchvision (ImageNet-1k). We found that over the last decade, as test accuracies increased, networks became significantly more critical. Our analysis shows that test accuracies are highly correlated with criticality and model size. A linear mixed-effects model shows that distance to criticality and model size explain 60% of the variance in accuracy (R²). A novel training objective that penalizes distance to criticality improves MNIST accuracy by up to 0.8% compared to highly optimized DNNs. In a continual learning setting using ImageNet, this approach enhances neuronal plasticity and outperforms established training techniques.
Discussion
Analyzing 80 diverse DNNs developed over the last decade, we uncover two key ingredients for high-performance deep learning: Network size and critical neuron dynamics. We find that modern deep learning techniques implicitly enhance criticality, driving recent advancements in the field. We show how improved DNN architectures and training approaches promote criticality, and further introduce a novel training method that enforces criticality during training. This significantly boosts accuracy on MNIST. Additionally, our method enhances the network’s plasticity, improving adaptability to new information in continual learning. We expect these findings to generalize to other models and tasks, offering a path toward more efficient AI.



Acknowledgements

References
1. Glorot, X., & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 249-256.
2.https://doi.org/10.1038/nature14539
3.https://doi.org/10.1523/JNEUROSCI.23-35-11167.2003
4.https://doi.org/10.1016/0167-2789(90)90064-V
5.https://doi.org/10.1523/JNEUROSCI.3864-09.2009
6.https://doi.org/10.1103/PhysRevLett.94.058101
7.https://doi.org/10.1103/PhysRevLett.132.057301
8.https://doi.org/10.1109/CVPR.2009.5206848
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P318: Algorithmic solutions for spike-timing dependent plasticity in large-scale network simulations with long axonal delays
Tuesday July 8, 2025 17:00 - 19:00 CEST
P318 Algorithmic solutions for spike-timing dependent plasticity in large-scale network simulations with long axonal delays

Jan N. Vogelsang*1,2, Abigail Morrison*2,3, Susanne Kunkel1

1 Neuromorphic Software Ecosystems (PGI-15), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany

*Email: j.vogelsang@fz-juelich.de
Introduction

The precise timing of neuronal communication is a cornerstone in understanding learning and synaptic plasticity. Spike-timing-dependent plasticity (STDP) models in particular rely on the precise temporal difference between pre- and post-synaptic spikes to adjust synaptic strength, and both the diverse axonal propagation delays and dendritic backpropagation delays play a crucial role in determining this timing. However, neural simulators such as NEST have traditionally represented transmission delays between neurons as a single aggregate delay value because of algorithmic challenges. We present two simulation frameworks addressing these challenges and validate them across a set of small- to large-scale benchmarks.

Methods
The NEST simulator reference implementation currently treats the entire delay as dendritic, which allows synaptic strength adjustments to be performed immediately after the occurrence of a pre-synaptic spike, avoiding costly buffering of spikes. This is an acceptable approximation for small networks but leads to inaccuracies when modeling long-range connections. In this framework, introducing axonal delays causes causality issues: when axonal delays predominate, post-synaptic spikes that occur only in future time steps may reach the synapse before the pre-synaptic spike does. To mitigate this issue, one must either correct the weight upon the later occurrence of such post-synaptic spikes or postpone the STDP update.
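A conceptual sketch of the correction-based option, under our reading of the text (toy parameters; not NEST's implementation): with axonal delay d_ax and dendritic delay d_dend, a pre-spike at t_pre reaches the synapse at t_pre + d_ax and a post-spike at t_post reaches it at t_post + d_dend. The synapse is updated eagerly when the pre-spike is delivered, and the contribution of a post-spike that causally reached the synapse earlier is added later as a correction.

```python
import math

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # toy STDP parameters (ms, a.u.)

def stdp(dt):
    """Weight change for dt = post-arrival minus pre-arrival at the synapse."""
    return A_PLUS * math.exp(-dt / TAU) if dt > 0 else -A_MINUS * math.exp(dt / TAU)

def eager_update_with_correction(w, t_pre, known_post, late_post, d_ax, d_dend):
    t_pre_arr = t_pre + d_ax
    # Eager update with the post-spikes known when the pre-spike is delivered.
    for t_post in known_post:
        w += stdp((t_post + d_dend) - t_pre_arr)
    # A post-spike emitted after t_pre can still reach the synapse *before*
    # the pre-spike when d_ax dominates; its contribution is applied later
    # as a correction instead of buffering the pre-spike.
    w += stdp((late_post + d_dend) - t_pre_arr)
    return w

w = eager_update_with_correction(1.0, t_pre=100.0, known_post=[95.0],
                                 late_post=101.0, d_ax=5.0, d_dend=0.5)
print(f"weight after correction: {w:.4f}")
```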
Results
Both approaches were implemented and rigorously benchmarked in terms of runtime efficiency and memory footprint for varying synaptic delays and delay partitions. Correcting faulty synaptic updates achieves exceptional performance for fractions of axonal delay equal to or lower than the corresponding dendritic one. Only in the case of predominant and long axonal delays is it outperformed by the alternative approach, which, however, required fundamental changes to the simulation framework to enable efficient buffering of individual spikes at the synapse level. Benchmarks also show that this buffering approach has a negative impact on performance for simulations not involving STDP dynamics, unlike the correction-based approach.
Discussion
Although different axonal and dendritic contributions are known to bias the synaptic drift towards either systematic potentiation or depression, there is a lack of simulation studies investigating the effects on network dynamics and learning in large neuronal systems. The ability to differentiate between axonal and dendritic delays represents a significant advance in neural simulation technology, as it addresses a long-standing limitation in spike-timing dependent plasticity modeling in large-scale, distributed simulations and enables future research in learning and plasticity, in particular, investigations of brain-scale models with STDP faithfully representing heterogeneous long axonal delays between areas.




Acknowledgements
I want to thank Dennis Terhorst and Anno Kurth for assistance in benchmarking and running all the required jobs on the HPC systems.
References
-
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P319: Interactions between functional microcircuits involving three inhibitory interneuron subtypes for the surround modulation in V1
Tuesday July 8, 2025 17:00 - 19:00 CEST
P319 Interactions between functional microcircuits involving three inhibitory interneuron subtypes for the surround modulation in V1

Nobuhiko Wagatsuma*1, Tomoki Kurikawa2, Sou Nobukawa3,4

1Faculty of Science, Toho University, Funabashi, Chiba, Japan
2Future University Hakodate, Hakodate, Hokkaido, Japan
3Department of Computer Science, Chiba Institute of Technology, Narashino, Chiba, Japan
4Department of Preventive Intervention for Psychiatric Disorders, National Institute of Mental Health, National Center of Neurology and Psychiatry, Kodaira, Tokyo, Japan

*Email: nwagatsuma@is.sci.toho-u.ac.jp

Introduction
A functional microcircuit of V1 for interpreting the external world resides in layers 2/3 and consists of excitatory pyramidal (Pyr) neurons and three inhibitory interneuron subtypes: parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal polypeptide (VIP). Recent physiological and computational studies suggest a structured organization of this microcircuit and distinct roles of inhibitory interneuron subtypes in modulating neural activity for visual perception [1,2]. Interactions between these microcircuits across receptive fields are crucial for integrating larger visual regions and forming perception, yet the precise structures and interneuron subtypes mediating these interactions remain unclear.

Methods
We developed a computational microcircuit model of the functional unit of biologically plausible visual cortical layers 2/3 that combined excitatory Pyr neurons and the three inhibitory interneuron subtypes, and explored the role of specific inhibitory interneuron subtypes in mediating the interactions between two such microcircuits via lateral inhibition across receptive fields (Fig. 1(A)). We assumed that the receptive fields of these units, which share common orientation selectivity, are spatially adjacent in the visual field. In this study, the two functional microcircuits interacted with each other via lateral inhibition from excitatory Pyr neurons in one unit to PV or SOM inhibitory interneurons in the other.
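A toy rate-model rendering of this lateral-inhibition motif (Pyr of one unit exciting SOM of the other, which inhibits that unit's Pyr), with arbitrary parameters; the actual model is a spiking microcircuit with all four cell types.

```python
import numpy as np

def simulate(w_lat, T=500.0, dt=0.1, tau=10.0):
    pyr, som = np.zeros(2), np.zeros(2)
    drive = np.array([1.0, 1.0])            # "large stimulus": both units driven
    for _ in range(int(T / dt)):
        cross = w_lat * pyr[::-1]           # Pyr of the other unit -> SOM
        som += dt / tau * (-som + np.maximum(cross, 0.0))
        pyr += dt / tau * (-pyr + np.maximum(drive - som, 0.0))
    return pyr, som

pyr, som = simulate(w_lat=1.5)
print(f"steady-state Pyr rates: {pyr.round(3)}, SOM rates: {som.round(3)}")
```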
Results
We performed simulations of the model with inputs mimicking the small and large visual stimuli used in physiological experiments [3]. We assumed that the small stimulus was confined to the receptive field of a single unit, whereas the large stimulus extended across the receptive fields of both microcircuits. Model simulations with the large visual stimulus implied that lateral inhibition from Pyr neurons in one microcircuit to SOM interneurons in the other preferentially induced neuronal firing at beta (13-30 Hz) frequency, in agreement with physiological responses for surround suppression in V1 [3]. By contrast, the model with lateral inhibition mediated by PV interneurons produced modulation patterns distinct from the physiological results.
Discussion
Our model reproduced characteristic neuronal activities in V1 induced by surround modulation when the lateral inhibition across receptive fields was mediated by SOM interneurons. Our simulation results suggest a specific role of SOM interneurons in long-range lateral interactions across receptive fields in V1, which might contribute to the generation of surround modulation.



Figure 1. (A) Proposed microcircuit model. The two microcircuits interact with each other via lateral connections from Pyr neurons in one unit to PV or SOM interneurons in the other. (B) Simulation results. Black and blue lines indicate the oscillatory responses of the model with lateral inhibition mediated by SOM and PV interneurons, respectively. The red line shows the response to the small stimulus.
Acknowledgements
This work was partly supported by the Japanese Society for the Promotion of Science (JSPS) (KAKENHI grants 22K12138, 22K12183, 23H03697, and 23K06394) and a grant of the Research Initiative Program of Toho University (TUGRIP).
References
1. https://doi.org/10.1093/cercor/bhac355
2. https://doi.org/10.1016/j.celrep.2018.10.029
3. https://doi.org/10.1038/nn.4562


Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P320: Updating of spatial memories in a systems-model of vector trace cells in the subiculum
Tuesday July 8, 2025 17:00 - 19:00 CEST
P320 Updating of spatial memories in a systems-model of vector trace cells in the subiculum

Fei Wang*1, Andrej Bicanski1

1Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany


*Email: wangf@cbs.mpg.de


Introduction
The Subiculum (Sub) is known as the output layer of the hippocampal formation and contains boundary vector cells (BVCs), firing for boundaries at specific allocentric directions and distances [1,2]. More recently it has been shown that Sub vector cells can exhibit traces that persist for hours after boundary/object removal [1] (Fig. 1a). Prior models suggest that such traces can be evoked by place cells (PCs), which index boundary/object presence at encoding [2]. Vector trace cells (VTCs) mainly occur in the distal Sub (dSub); however, an account of proximo-distal differences remains absent. Here we propose that vector trace cell coding in the Sub provides a mismatch signal to update spatial memory representations.

Methods
In our model (Fig. 1b), dSub neurons receive feedforward input carrying either direct sensory information (BVCs in pSub) or mnemonic information (PCs in CA1). Mismatch between these inputs updates CA1-dSub synapses, with different dSub units having varying updating rates. Following the hypothesized CA1–Sub proximal–distal pathway [3], which is implicated in spatial memory specialization, we show how inserted cues affect distal and proximal CA1 (dCA1, pCA1) and their corresponding dSub units. In this model, space-related pCA1 PCs transfer mnemonic information to dSub, while object-related dCA1 PCs exhibit place field drift toward the inserted cue, influencing the probability of synaptic updates between pCA1 and dSub units.
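A minimal sketch of such a mismatch-driven update; all names and the delta-rule form are illustrative assumptions rather than the model's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ca1, n_dsub = 50, 20
W = rng.random((n_dsub, n_ca1)) * 0.1        # CA1 -> dSub weights
eta = rng.uniform(0.001, 0.1, size=n_dsub)   # per-unit updating rates

def update(W, bvc_drive, ca1_rate):
    """One mismatch-driven update after an environment change."""
    mnemonic = W @ ca1_rate                  # what memory predicts
    mismatch = bvc_drive - mnemonic          # sensory minus mnemonic input
    return W + eta[:, None] * np.outer(mismatch, ca1_rate)

ca1 = rng.random(n_ca1)
bvc = rng.random(n_dsub)                     # sensory drive after cue removal
W = update(W, bvc, ca1)
print(f"mean |mismatch| after one update: {np.abs(bvc - W @ ca1).mean():.3f}")
```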
Results
We find that our mismatch-dependent learning model accounts for known VTC properties [1], including: (i) the distribution of VTCs along the proximodistal axis, (ii) the percentage of VTCs across different cue types, and (iii) the hours-long persistence of vector traces. (iv) By enriching CA1 representations, our model further explains additional empirical findings, including object-centered population coding in CA1 [3]. (v) VTCs have longer tuning distances after cue removal.
Discussion
Our model suggests that mismatch detection for updating associative memories offers mechanistic explanations for findings in the CA1-Sub pathway, and predicts a function for the Sub in coordinating spatial encoding and memory retrieval. Additionally, it describes the distinctive neural coding for novel objects and familiar contexts and their impacts on memory retrieval. Our work constitutes the first dedicated circuit-level model of computation within the Sub and provides a potential framework to extend the standard model of hippocampal function with a Sub component.



Figure 1. Fig. 1 (a) Experimental Procedure. Rats foraged for food while Sub neurons were recorded. Heatmaps show firing rates as a function of the rat's position (adapted from Poulter et al., 2021). (b) Our model has a perceptual pathway (pSub-dSub) and a memory pathway (CA1-dSub). Arrow widths represent connection strength. dSub units update CA1-dSub weights at varying rates, shown by different colors.
Acknowledgements
AB and FW acknowledge funding from the Max-Planck Society. Additionally, we thank Colin Lever at Durham University for insightful discussions, valuable advice, and access to preliminary data.
References
1. Poulter, S., Lee, S. A., Dachtler, J., Wills, T. J., & Lever, C. (2021). Vector trace cells in the subiculum of the hippocampal formation. Nature Neuroscience, 24(2), 266-275.
2. Bicanski, A., & Burgess, N. (2018). A neural-level model of spatial memory and imagery. eLife, 7, e33752.
3. Vandrey, B., Duncan, S., & Ainge, J. A. (2021). Object and object-memory representations across the proximodistal axis of CA1. Hippocampus, 31(8), 881-896.








Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P321: Overcoming the space-clamp effect: reliable recovery of local and effective synaptic conductances of neurons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P321 Overcoming the space-clamp effect: reliable recovery of local and effective synaptic conductances of neurons

Ziling Wang1,2,3, David McLaughlin*4,5,6,7,8, Douglas Zhou*1,2,3, Songting Li*1,2,3
1School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
2Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
3Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
4Courant Institute of Mathematical Sciences, New York University, New York, New York 10012
5Center for Neural Science, New York University, New York, New York 10012
6New York University Shanghai, Shanghai 200122, China
7NYU Tandon School of Engineering, New York University, Brooklyn, NY 11201
8Neuroscience Institute of NYU Langone Health, New York University, New York, NY 10016

*Email: david.mclaughlin@nyu.edu, zdz@sjtu.edu.cn, or songting@sjtu.edu.cn
Introduction

To understand the interplay between excitatory (E) and inhibitory (I) inputs in neuronal networks, it is necessary to separate and recover E and I inputs. Somatic recordings are more accessible than those from local dendrites, which poses challenges in recovering input characteristics and distinguishing E from I after dendritic filtering. Somatic voltage clamp methods [1,2] address these issues by assuming an iso-potential neuron. However, as shown in Fig. 1A, this assumption is debated, as the voltage is nonuniform across the neuron due to its complex morphology [3]. This nonuniform voltage, known collectively as the space-clamp effect, leads to inaccurate conductance estimations and can even yield erroneous negative conductances [4].
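For context, here is a sketch of the traditional single-compartment estimate that the space-clamp effect corrupts: assuming I = gE(V - EE) + gI(V - EI) + gL(V - EL), currents measured at two holding potentials give a linear system for gE and gI. The reversal potentials and conductances below are toy values, and recovery is perfect here only because the synthetic data are generated from a truly iso-potential model.

```python
import numpy as np

EE, EI, EL, gL = 0.0, -80.0, -70.0, 10.0     # mV, nS (toy values)
gE_true, gI_true = 4.0, 8.0

def clamp_current(V):
    """Synthetic clamp current of an iso-potential neuron at holding V."""
    return gE_true * (V - EE) + gI_true * (V - EI) + gL * (V - EL)

V_hold = np.array([-70.0, -50.0])            # two holding potentials
I = clamp_current(V_hold)

# Solve the 2x2 linear system for (gE, gI).
A = np.column_stack([V_hold - EE, V_hold - EI])
gE, gI = np.linalg.solve(A, I - gL * (V_hold - EL))
print(f"recovered gE={gE:.1f} nS, gI={gI:.1f} nS")
```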


Methods
We study mathematical models of voltage clamping, beginning with an asymptotic analysis of an idealized cable neuron model with realistic time-varying synaptic inputs, and then extending the analysis to simulations of realistic model neurons with varying types, morphologies, and active ion channels. The asymptotic analysis describes in detail the response of the idealized neuron under somatic clamping, and thus captures the discrepancy between the local synaptic conductance on the dendrite, the effective conductances at the soma and the traditional voltage clamp approximation. This discrepancy arises primarily due to the traditional approach’s oversight of the space clamp effect.

Results
With this detailed quantitative understanding of the neural response, we refine the traditional method to circumvent the space-clamp effect, thus enabling accurate recovery of local and effective conductances from somatic measurements. Specifically, we develop a two-step clamp method that separately recovers the mean and time constants of the local conductance on the dendrite when a neuron receives a single synaptic input. In addition, under in vivo conditions of multiple inputs, we propose an intercept method to extract effective net E and I conductances. Both methods are grounded in perturbation analyses and validated using biologically detailed multi-compartment neuron models with active channels included, as shown in Fig. 1B-1D.

Discussion
Our methods consistently achieve high accuracy in estimating both local and effective conductances in simulations involving various realistic neuron models. Accuracy holds over a broad range of synaptic input strengths, input locations, ionic channels, and receptors. However, two factors can degrade accuracy: large EPSPs and active HCN channels. Large EPSPs, particularly at dendritic tips, require higher-order corrections beyond first-order perturbation theory. HCN channels also reduce accuracy, but blocking them restores precision. Our approach is robust across various neuron types, as demonstrated in simulations of mPFC fast-spiking neurons, cerebellar Purkinje neurons, and hippocampal pyramidal neurons.





Figure 1. Performance of our method for recovering local and effective conductances in a realistic neocortical layer 5 pyramidal neuron model. (A) Voltage distribution across the pyramidal neuron under somatic voltage clamp condition. (B–D) Our methods perform well in estimating local synaptic conductance features—the mean (B) and time constant (C), as well as the effective conductance at the soma (D).
Acknowledgements
This work was supported by Science and Technology Innovation 2030-Brain Science and Brain-Inspired Intelligence Project (No.2021ZD0200204 D.Z., S.L.); Science and Technology Commission of Shanghai Municipality (No.24JS2810400 D.Z.); National Natural Science Foundation of China (No.12225109, 12071287 D.Z.; 12271361, 12250710674 S.L.) and Student Innovation Center at SJTU (Z.W., D.Z. and S.L.).
References
[1] https://doi.org/10.1038/30735
[2] https://doi.org/10.1016/j.neuron.2011.12.013
[3] https://doi.org/10.1038/nrn2286
[4] https://doi.org/10.1038/nn.2137
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P322: Optogenetic inhibition of a hippocampal network model
Tuesday July 8, 2025 17:00 - 19:00 CEST
P322 Optogenetic inhibition of a hippocampal network model



Laila Weyn*1,2, Thomas Tarnaud1,2, Wout Joseph1, Robrecht Raedt2, Emmeric Tanghe1
1WAVES, Department of Information Technology (INTEC), Ghent University/IMEC, Technologiepark 126, 9000 Ghent, Belgium
2 4BRAIN, Department of Neurology, Institute for Neuroscience, Ghent University, Corneel Heymanslaan 10, 9000 Ghent, Belgium
*Email: laila.weyn@ugent.be


Introduction

Optogenetic inhibition of the hippocampus has emerged as a promising approach for suppressing seizures associated with temporal lobe epilepsy (TLE). Given the substantial size of the hippocampus and the inherent challenges of light propagation within the brain, understanding the influence of the volume and nature of the targeted region is crucial. To address these challenges, an in silico approach has been developed, allowing systematic exploration of the impact of different target regions on the effectiveness of optogenetic inhibition of seizure-like activity in the hippocampus.
Methods
The hippocampal model described by Aussel et al. (2022) was modified and implemented in NEURON [1,2]. A photocurrent described by the double two-state opsin model was added to excitatory neurons of the Dentate Gyrus (DG_E) and Cornu Ammonis 1 (CA1_E) [3]. The impact of modelling hippocampal sclerosis (HS) and mossy fiber sprouting (MFS) [1] on excitability was assessed via an I/O curve of the CA1_E response to DG_E stimulation. Uncontrolled, self-sustaining, high-frequency activity was induced in an epileptic network (MFS = 0.9) by reducing the inhibitory component of the EC theta input (see Fig. 1A). The effect of the target region on optogenetic inhibition was studied by varying the number of CA1_E and DG_E cells receiving a light pulse.
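For orientation, a hedged sketch of an opsin photocurrent, using a single two-state (open/closed) approximation for brevity rather than the double two-state model actually used [3]; rate constants, units, and the reversal potential are illustrative.

```python
import numpy as np

def photocurrent(light, dt=0.01, g_max=2.0, E_opsin=-90.0, V=-65.0,
                 k_on=0.5, k_off=0.1):
    """light: per-step irradiance (arbitrary units). Returns current trace."""
    o, I = 0.0, np.empty(light.size)
    for i, L in enumerate(light):
        o += dt * (k_on * L * (1 - o) - k_off * o)  # open/close kinetics
        I[i] = g_max * o * (V - E_opsin)            # outward, hyperpolarizing
    return I

t = np.arange(0.0, 100.0, 0.01)                     # time in ms
light = ((t > 20) & (t < 60)).astype(float)         # a 40 ms light pulse
I = photocurrent(light)
print(f"peak photocurrent: {I.max():.2f} (arb. units)")
```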
Results
The steeper slope of the population response curve suggests that increased MFS correlates with enhanced excitability; for HS, an inverse relationship is observed (Fig. 1B). When 100% of both the DG_E and CA1_E regions are illuminated, all activity within the epileptic network is suppressed (Fig. 1C). Reducing the illumination of DG_E allows the network activity to return to theta activity. Notably, illumination of DG_E alone is insufficient to suppress high-frequency firing. These findings indicate that CA1 serves as the better target region for inhibiting hippocampal activity.
Discussion
The results regarding HS and MFS are in line with those observed by Aussel et al. (2022), though a different type of seizure-like activity is generated. Furthermore, the study shows the importance of selecting the appropriate stimulation region to effectively suppress hippocampal seizures. This preliminary investigation explores the capabilities of the network model but further investigation into the generation of seizure-like activity is necessary. Future work will aim for experimental validation of the model generated seizure-like activity and its response to optogenetic inhibition, with the ultimate aim of optimizing stimulation protocols.





Figure 1. A. Healthy and epileptic network response to EC theta current input and optogenetic modulation of CA1 and DG. B. Population response of CA1_E as a function of DG_E activity after stimulation at varying MFS and HS levels. C. Spike count in CA1_E and DG_E populations during optogenetic modulation (t = 1.75:2.25s) of varying amounts of neurons.
Acknowledgements

This work is supported by BOF project SOFTRESET.


References

[1]https://doi.org/10.1007/s10827-022-00829-5
[2]https://doi.org/10.1017/CBO9780511541612

[3]https://doi.org/10.3389/fncom.2021.688331





Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P323: Modularity and inhibition: the transition from burst-suppression to healthy EEG signals in a microscale model
Tuesday July 8, 2025 17:00 - 19:00 CEST
P323 Modularity and inhibition: the transition from burst-suppression to healthy EEG signals in a microscale model

Guido Wiersma1,*, Michel van Putten1,2, Nina Doorn1
1Department of Clinical Neurophysiology, University of Twente, 7500 AE Enschede, The Netherlands
2Department of Neurology and Clinical Neurophysiology, Medisch Spectrum Twente, 7500 KA Enschede, The Netherlands



* email: wiersmaguido@gmail.com
Introduction

Burst-suppression (BS) is an electroencephalogram (EEG) pattern consisting of high-voltage activity (bursts, >20 µV) alternating with low-voltage or even isoelectric periods (suppression) [1]. It can be categorized into BS with identical bursts, observed in comatose patients after brain ischemia and indicating poor prognosis, and BS with heterogeneous bursts [2]. Whereas past research did not identify the neural origin of BS, recent work showed that the shift from heterogeneous to identical BS is caused by the loss of either inhibition or modularity in the connectivity between neurons [3]. Here, we hypothesize that when both inhibition and modularity are included in a network, the transition from BS to a healthy network state can be modelled.

Methods
To simulate the pathological and healthy states, a network of 2000 adaptive integrate-and-fire (IF) neurons is constructed. Such networks are known to generate both BS and a wide variety of healthy characteristics observed in EEG (e.g., alpha or gamma activity) [4]. The adaptation mechanism of the IF neurons is conductance-based, preventing unrealistically negative membrane voltages during suppression periods, as described in, e.g., [5]. Inspired by Gao et al., simulation-based inference is used to explore the wide variety of dynamics resulting from a broad range of free parameters [4].
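A sketch of the conductance-based adaptation mechanism referenced above, in a single leaky IF neuron (illustrative parameters, not those of the 2000-neuron network): because the adaptation current has a reversal potential Ek, it cannot drive the membrane below Ek during suppression periods.

```python
import numpy as np

def simulate(T=1000.0, dt=0.1, I_ext=1.6):
    C, gL, EL, Vth, Vreset = 1.0, 0.05, -65.0, -50.0, -65.0
    Ek, dg, tau_a = -80.0, 0.02, 200.0        # adaptation reversal & kinetics
    V, g_a, spikes = EL, 0.0, []
    for step in range(int(T / dt)):
        # Conductance-based adaptation: the term -g_a*(V - Ek) vanishes as
        # V approaches Ek, so voltages stay bounded during suppression.
        dV = (-gL * (V - EL) - g_a * (V - Ek) + I_ext) / C
        V += dt * dV
        g_a += dt * (-g_a / tau_a)
        if V >= Vth:
            V = Vreset
            g_a += dg                          # increment adaptation on spike
            spikes.append(step * dt)
    return np.array(spikes)

spikes = simulate()
print(f"{spikes.size} spikes; mean ISI {np.diff(spikes).mean():.1f} ms")
```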

Results
The results show the influence of inhibition and modularity on the simulation of BS and healthy network states. Furthermore, by using one channel EEG data as target observations for the parameter inference, combined with the broad parameter range, we show to what extent the proposed microscale model can simulate these target EEG signals.

Discussion
The roles of inhibition and modularity provide new insights into the mechanisms behind the transition of healthy brain states to BS. This opens potential pathways for treatments in comatose patients after ischemia. Although the model consists of only 2000 neurons, the striking similarity between BS patterns generated in-vitro and those observed in EEG recordings highlights the potential of microscopic models to capture features of large-scale brain activity[3,6,7]. This study demonstrates the potential of these biophysically detailed models to uncover cellular level insights from EEG signals.





Acknowledgements
We thank Maurice van Putten, PhD, for his invaluable support, expertise, and generous provision of the code to implement synaptic parallel computing for dynamic load balancing.
References
[1] https://doi.org/10.1097/01.nrl.0000178756.44055.f6
[2] https://doi.org/10.1016/j.clinph.2013.10.017
[3] https://doi.org/10.12751/nncn.bc2024.146
[4] https://doi.org/10.1101/2024.08.21.608969
[5] https://doi.org/10.1162/neco_a_01342
[6] https://doi.org/10.1152/jn.00316.2002
[7] https://doi.org/10.1109/TBME.2004.827936
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P324: Brain Criticality Trajectories in Aging: From Cognitive Slowing to Hyperexcitability
Tuesday July 8, 2025 17:00 - 19:00 CEST
P324 Brain Criticality Trajectories in Aging: From Cognitive Slowing to Hyperexcitability

Kaichao Wu*1, Leonardo L. Gollo1,2

1Brain Networks and Modelling Laboratory, The Turner Institute for Brain and Mental Health, School of Psychological Sciences, and Monash Biomedical Imaging, Monash University, Victoria 3168, Australia
2Institute for Cross-Disciplinary Physics and Complex Systems, IFISC (UIB-CSIC), Campus Universitat de les Illes Balears, Palma de Mallorca, Spain
*Email: kaichao.wu@monash.edu

Introduction

Brain criticality—the dynamic balance between stability and flexibility in neural activity—is a fundamental property that supports efficient information processing, adaptability, and cognitive function [1-3]. However, how aging influences brain criticality remains a subject of debate, with conflicting findings in the literature [4,5]. Some studies suggest that normal aging shifts neural dynamics toward a subcritical state characterized by reduced neural variability and cognitive slowing [6]. In contrast, others propose that aging may lead to supercritical dynamics, increasing the risk of hyperexcitability and instability [7].
Methods
To reconcile these opposing views, we developed a whole-brain neuronal network model that simulates aging as a combination of two processes: healthy aging, which gradually prunes network connections at a steady rate (Figure 1A), and pathological aging, which introduces random lesions that locally alter regional excitability (Figure 1B). This model enables us to track how the distance to criticality (Figure 1C), estimated from the temporal correlation length (intrinsic timescales), evolves over time. We find that healthy aging drives the system toward subcriticality, while pathological aging progressively pushes the system toward supercriticality due to lesion accumulation and compensatory excitability changes (Figure 1D).
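Schematically, the two trajectories can be caricatured with an effective branching ratio σ ≈ K·p (connectivity times transmission probability): pruning lowers K in both cases, while pathological aging additionally raises local excitability p. The decay and growth rates below are arbitrary assumptions, purely for illustration.

```python
import numpy as np

years = np.arange(0, 40)
K0, p0 = 20.0, 0.05                          # start near criticality: sigma = 1
K = K0 * (1 - 0.005) ** years                # steady pruning of connections

p_healthy = np.full_like(K, p0)              # excitability unchanged
p_patho = p0 * (1 + 0.012) ** years          # compensatory excitability rise

sigma_healthy = K * p_healthy                # drifts subcritical (< 1)
sigma_patho = K * p_patho                    # crosses back above 1
print(f"sigma after 40y: healthy {sigma_healthy[-1]:.2f}, "
      f"pathological {sigma_patho[-1]:.2f}")
```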
Results
Our results reveal two distinct trajectories of criticality in aging. In normal aging, where no major disruptions occur, neural dynamics gradually shift toward subcriticality, aligning with empirical findings of diminished neural variability and cognitive slowing in older adults [5]. Conversely, in pathological aging, an initial decline in criticality due to network degradation is followed by a shift toward supercriticality, potentially contributing to hyperexcitable states observed in neurodegenerative diseases.
Discussion
These findings offer a theoretical framework that reconciles previously conflicting results, demonstrating that normal and pathological aging follow distinct criticality trajectories. By identifying key mechanisms underlying these transitions, our model provides insights into early detection of neurodegenerative diseases and highlights potential interventions aimed at preserving critical neural dynamics in aging populations.





Figure 1. Figure 1. Brain criticality trajectories in Aging. (A) Brain network connectivity (K) reduces with normal aging. (B) For pathological aging, excitability within localized brain regions increases. (C) The neuronal network modeling indicates two distinct trajectories for normal and pathological aging. (D) The relationship between intrinsic timescales and criticality.
Acknowledgements
This work was supported by the Australian Research Council (ARC), Future Fellowship (FT200100942), the Rebecca L. Cooper Foundation (PG2019402), the Ramón y Cajal Fellowship (RYC2022-035106-I) from FSE/Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation, and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M.
References


1. Cocchi, L., et al. (2017). https://doi.org/10.1016/j.pneurobio.2017.07.002
2. Munoz, M. A. (2018). https://doi.org/10.1103/RevModPhys.90.031001
3. O’Byrne, et al. (2022). https://doi.org/10.1016/j.tins.2022.08.007
4. Zimmern, V. (2020). https://doi.org/10.3389/fncir.2020.00054
5. Heiney, K., et al. (2021). https://doi.org/10.3389/fncom.2021.611183
6. Wu, K., et al. (2025). https://doi.org/10.1038/s42003-025-07517-x
7. Fosque, L. J., et al. (2022). https://doi.org/10.3389/fncom.2022.1037550
8. Garrett, D. D., et al. (2013). https://doi.org/10.1093/cercor/bhs055
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P325: Disrupted Temporal Dynamics in Stroke: A Criticality Framework for Intrinsic Timescales
Tuesday July 8, 2025 17:00 - 19:00 CEST
P325 Disrupted Temporal Dynamics in Stroke: A Criticality Framework for Intrinsic Timescales

Kaichao Wu*1, Leonardo L. Gollo1,2

1Brain Networks and Modelling Laboratory, The Turner Institute for Brain and Mental Health, School of Psychological Sciences, and Monash Biomedical Imaging, Monash University, Victoria 3168, Australia
2Institute for Cross-Disciplinary Physics and Complex Systems, IFISC (UIB-CSIC), Campus Universitat de les Illes Balears, Palma de Mallorca, Spain
*Email: kaichao.wu@monash.edu
Introduction

Stroke profoundly disrupts brain function [1-3], yet its impact on temporal dynamics—critical for efficient information processing and recovery—remains poorly understood. Intrinsic neural timescales (INT), which quantify the temporal persistence of neural activity, offer a valuable framework for investigating these dynamic alterations [4,5]. However, the extent to which stroke influences INT and the mechanisms underlying these changes remain unclear.


Methods
This study leverages a longitudinal dataset comprising 15 ischemic stroke patients who underwent resting-state functional MRI at five evenly spaced intervals over six months. INT was computed by estimating the area under the positive autocorrelation function of BOLD signal fluctuations across whole-brain regions [6]. We compared stroke patients' INT values to those of age-matched healthy controls to assess lesion-induced disruptions. Additionally, we analyzed the hierarchical organization of INT across functional networks and examined its relationship with motor recovery, classifying patients into good and poor recovery groups based on clinical assessments. To explore potential mechanisms, we modeled networks of excitable spiking neurons using the Kinouchi & Copelli framework [6,7], investigating the causal relationship between neural excitability and INT within a criticality framework (Fig. 1).
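A minimal sketch of the INT estimator described above, on a synthetic AR(1) signal: sum the autocorrelation over positive lags until it first turns non-positive, and scale by the repetition time (TR, assumed here to be 2 s).

```python
import numpy as np

def intrinsic_timescale(x, tr=2.0):
    """Area under the positive part of the autocorrelation function, in s."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    first_neg = np.argmax(acf <= 0)          # first non-positive lag
    return acf[1:first_neg].sum() * tr

rng = np.random.default_rng(0)
ar = 0.7                                      # AR(1) coefficient -> persistence
x = np.zeros(500)
for i in range(1, x.size):
    x[i] = ar * x[i - 1] + rng.normal()
print(f"INT ~ {intrinsic_timescale(x):.2f} s")
```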
Results
Our findings revealed that stroke patients exhibited significantly prolonged INT compared to healthy controls, a pattern that persisted across all recovery stages. The hierarchical structure of INT, which reflects balanced specialization across brain networks, was markedly disrupted in the early post-stroke phase. By two months post-stroke, differences in INT trajectories emerged between recovery groups, with poor recovery patients displaying abnormally prolonged INT, particularly in the dorsal attention, language, and salience functional networks. These findings align with theoretical predictions from excitable neuron network models, which suggest that stroke lesions may shift the brain’s dynamics toward criticality or even into the supercritical regime (Fig. 1).
Discussion
Our results indicate that stroke-induced INT prolongation reflects increased neural network excitability, pushing the brain toward criticality or even into a supercritical state. The persistent INT abnormalities observed in poorly recovering patients suggest that early-stage INT alterations could serve as prognostic biomarkers for long-term functional outcomes. These findings provide insights into stroke-induced disruptions of brain criticality and highlight the potential of non-invasive neuromodulatory interventions to restore normal INT and facilitate recovery [5]. By advancing our understanding of temporal dynamic changes in stroke, this work sheds light on post-stroke neural reorganization and opens new avenues for targeted rehabilitation strategies using non-invasive brain stimulation.




Figure 1. Figure 1. Stroke lesions prolong intrinsic neural timescales and alter network dynamics, shifting them from a slightly subcritical state (blue) toward criticality (red), with the potential to enter a supercritical state. Near a phase transition, cortical network dynamics can be modeled as a branching process, where intrinsic neural timescales peak at the critical point[6].
Acknowledgements
This work was supported by the Australian Research Council (ARC), Future Fellowship (FT200100942), the Rebecca L. Cooper Foundation (PG2019402), the Ramón y Cajal Fellowship (RYC2022-035106-I) from FSE/Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation, and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M.
References
1. Carrera, E., & Tononi, G. (2014). https://doi.org/10.1093/brain/awu191
2. Park, C.-h., Chang, W. H., Ohn, S. H., et al. (2011). https://doi.org/10.1161/STROKEAHA.110.603846
3. Volz, L. J., Rehme, A. K., Michely, J., et al. (2016). https://doi.org/10.1093/cercor/bhv136
4. Golesorkhi, M., et al. (2021). https://doi.org/10.1038/s41522-021-00447-z
5. Gollo, L. L. (2019). https://doi.org/10.7554/eLife.45089
6. Wu, K., & Gollo, L. L. (2025). https://doi.org/10.1038/s41522-025-00875-2
7. Kinouchi, O., & Copelli, M. (2006). https://doi.org/10.1038/nphys292
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P326: Modeling language evolution with spin glass dynamics
Tuesday July 8, 2025 17:00 - 19:00 CEST
P326 Modeling language evolution with spin glass dynamics

Hediye Yarahmadi*1, Alessandro Treves1

1Cognitive Neuroscience, SISSA, Trieste, Italy

*Email: hediye.yarahmadi@sissa.it

Introduction

Recent advances in phylogenetic linguistics by Longobardi and colleagues [1], based on syntactic parameters, seem to reconstruct language evolution farther in the past than traditional etymological approaches. Combined with quantitative statistics, this Parametric Comparison Method also raises general questions: why does syntax keep changing? Why do languages diversify instead of converging into efficient forms? And why is this change so slow, over centuries? We hypothesize that the fundamental reasons are disorder and frustration: syntactic parameters interact through disordered interactions, subject to weak external drives and, unable to settle into a state fully compatible with all interactions, they evolve slowly with “glassy” dynamics.

Methods
To explore this hypothesis, we model a “language” as a binary vector of the 94 syntactic parameters considered in the Longobardi database, and assume that they interact both through the explicit and asymmetric dependencies that linguists call “implications” (which may lead to rotating changes [2]) and through weak, partly asymmetric interactions, which we assign at random with a relative strength σ and a degree of asymmetry φ ranging from 0° (symmetric) to 90° (fully antisymmetric). Using Glauber dynamics, we simulate the evolution of these parameters, assuming external fields only set the initial conditions. We then introduce a Hopfield-like symmetric component to the interaction term, expected to glassify syntax dynamics further.
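A minimal sketch of this model class, omitting the implicational interactions and the Hopfield term for brevity: random couplings are mixed from a symmetric and an antisymmetric part by the angle φ, and spins flip under Glauber dynamics with P(s_i → +1) = 1/(1 + e^(-2βh_i)). All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, phi = 94, 1.0, np.deg2rad(45)

# Mix a symmetric and an antisymmetric random matrix by the angle phi.
A = rng.normal(size=(N, N)) / np.sqrt(N)
S, K = (A + A.T) / 2, (A - A.T) / 2
J = sigma * (np.cos(phi) * S + np.sin(phi) * K)
np.fill_diagonal(J, 0.0)

s = rng.choice([-1, 1], size=N)               # initial parameter vector
beta = 5.0                                     # low temperature
flips_per_sweep = []
for sweep in range(100):
    flips = 0
    for i in rng.permutation(N):               # Glauber single-spin updates
        h = J[i] @ s
        s_new = 1 if rng.random() < 1 / (1 + np.exp(-2 * beta * h)) else -1
        flips += s_new != s[i]
        s[i] = s_new
    flips_per_sweep.append(flips)
print(f"flips in sweep 100: {flips_per_sweep[-1]}")
```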

Results
Fig. 1a sketches the (φ, σ, γ=0) phase diagram based on simulations of the average number of parameter flips at the 100th time step. Syntactic parameters get trapped in a steady state (one of a disordered multiplicity) for low asymmetry, while they continue to evolve for higher asymmetry. The strength of the random interactions is almost irrelevant, but when they dominate (σ→∞) the transition is sharp at φ=30°. For low σ, dynamics slow down, but at σ ≡ 0 they continue indefinitely: implications alone allow no steady state. Fig. 1b presents the phase diagram in (φ=90°, σ, γ) space, showing a transition from a glassy to a chaotic state. The balance between symmetry and asymmetry is crucial, and a large γ stabilizes the system via the Hopfield term.

Discussion
The sharp transition at φ=30° for σ→∞ and γ→0 aligns with previous studies of asymmetric spin glasses [3] (η=1/2 in their notation), indicating that varying the interaction symmetry induces a phase transition from glassy to chaotic dynamics. This suggests that to understand language evolution in the syntax domain it is essential to include, alongside the implicational structure constraining parameter changes, disordered interactions that have so far eluded linguistic analysis, in part because of their quantitative rather than logical nature. We are now working on integrating the Hopfield-like structure, which brings languages closer to metastable states.





Figure 1. Phase diagrams: (a) At γ=0 in the σ-φ plane, the system freezes with symmetric interactions (up to φ≈30°) and becomes fluid as asymmetry increases for large σ. Similar behavior occurs as σ→0, but with slower fluid dynamics; at σ ≡ 0, the dynamics are chaotic. (b) At φ=90° in the σ-γ plane, freezing occurs for γ/σ > 0.01, becoming fluid as the Hopfield term decreases. Symmetry balance is key.
Acknowledgements
We would like to express our sincere gratitude to G. Longobardi for providing access to the database used in this study.
References
[1] Ceolin A, Guardiano C, Longobardi G, et al. (2021). At the boundaries of syntactic prehistory. Phil Trans Roy Soc B, 376(1824), 20200197. https://doi.org/10.1098/rstb.2020.0197
[2] Crisma P, Fabbris G, Longobardi G & Guardiano C (2025). What are your values? Default and asymmetry in parameter states. J Historical Syntax, 9, 1-26. https://doi.org/10.18148/hs/2025.v9i2-10.182
[3] Nutzel K & Krey U (1993). Subtle dynamic behaviour of finite-size Sherrington-Kirkpatrick spin glasses with nonsymmetric couplings. J Physics A: Math Gen, 26, L591. https://doi.org/10.1088/0305-4470/26/14/011
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P327: Deciphering the Dynamics of Memory Encoding and Recall in the Hippocampus Using Information Theory and Graph Theory
Tuesday July 8, 2025 17:00 - 19:00 CEST
P327 Deciphering the Dynamics of Memory Encoding and Recall in the Hippocampus Using Information Theory and Graph Theory

Jess Yu*1, Hardik Rajpal2, Mary Ann Go1, Simon Schultz1

1Department of Bioengineering and Centre for Neurotechnology, Imperial College London, United Kingdom, SW7 2AZ
2Department of Mathematics and Centre for Complexity Science, Imperial College London, United Kingdom, SW7 2AZ

*Email: jin.yu21@imperial.ac.uk

Introduction
Alzheimer's disease (AD) profoundly impairs spatial navigation, a critical cognitive function dependent on hippocampal processing. While previous studies have documented the deterioration of place cell activity in AD, the mechanisms by which AD disrupts information processing across neural populations remain not fully understood. Traditional analyses focusing on individual neurons fail to capture the collective properties of neural circuits. We hypothesized that AD pathology disrupts not only individual cellular encoding but also the integration and sharing of spatial information across functional neuronal assemblies, leading to compromised spatial navigation.
Methods
We analysed hippocampal CA1 recordings obtained with two-photon calcium imaging in AD and wild-type (WT) mice, both young and old, during spatial navigation tasks in familiar and novel environments. At the single-cell level, we quantified spatial information using the mutual information (MI) between neural spikes and location, and partial information decomposition (PID) [1] for pairs of neurons and location. For population-level analysis, we constructed functional networks using pairwise MI, identified stable functional neuronal assemblies using Markov Stability detection [2], and applied PID to quantify how assemblies collectively encode spatial information through redundancy, synergy, joint mutual information, and the redundancy-synergy index.
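As an illustration of the single-cell measure, here is a plug-in estimate of the mutual information between a binarized event train and a discretized position variable (synthetic data; sampling-bias correction, as typically needed for real recordings, is omitted).

```python
import numpy as np

def mutual_information(spikes, position, n_bins=10):
    """spikes: binary array; position: array of bin indices in [0, n_bins)."""
    joint = np.zeros((2, n_bins))
    for s, p in zip(spikes, position):
        joint[s, p] += 1
    joint /= joint.sum()
    ps, pp = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0                            # skip empty cells in the sum
    return (joint[nz] * np.log2(joint[nz] / (ps @ pp)[nz])).sum()

rng = np.random.default_rng(0)
pos = rng.integers(0, 10, size=5000)
# Toy "place cell": elevated event probability in spatial bin 3.
spikes = (rng.random(5000) < np.where(pos == 3, 0.5, 0.05)).astype(int)
print(f"I(spike; position) = {mutual_information(spikes, pos):.3f} bits")
```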
Results
Our analysis revealed a multi-scale disruption of spatial information processing in AD. At the single-cell level, AD-Old (ADO) mice showed significantly fewer spatially informative neurons and lower spatial information content. At the assembly level, we uncovered profound deficits in information integration: ADO assemblies showed significantly reduced redundancy and synergy compared to WT-Young controls, indicating impaired information sharing. The redundancy-synergy index revealed a significant shift in the balance between redundant and synergistic processing across neural assemblies.
Discussion
These findings provide novel insights into how AD disrupts neural information processing across multiple scales. The parallel degradation of both cellular encoding and assembly-level information integration suggests a compound effect of AD pathology on spatial navigation circuits. The reduced information sharing between assemblies points to a breakdown in coordinated activity necessary for effective spatial navigation. This multi-scale information-theoretic approach reveals that AD impairs not just individual neural responses but the mechanisms by which neural assemblies integrate spatial information, potentially guiding development of assembly-level therapeutic strategies.



Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) through the Physics of Life grant [EP/W024020/1].
References
[1] Williams, P. L., & Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515. https://doi.org/10.48550/arXiv.1004.2515
[2] Delvenne, J.-C., Yaliraki, S. N., & Barahona, M. (2008). Stability of graph communities across time scales. arXiv preprint arXiv:0812.1811. https://doi.org/10.48550/arXiv.0812.1811
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P328: Modelling the impacts of Alzheimer’s Disease and Aging on Self-Location and Spatial Memory
Tuesday July 8, 2025 17:00 - 19:00 CEST
P328 Modelling the impacts of Alzheimer’s Disease and Aging on Self-Location and Spatial Memory

Aleksei Zabolotnii*1, Christian F. Doeller1,2,3, Andrej Bicanski1,3

1Department of Psychology, Max-Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
2Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
3Wilhelm Wundt Institute for Psychology, Leipzig University, Germany

*Email: zabolotnii@cbs.mpg.de
Introduction

Spatial navigation relies on the precise coordination of multiple neural circuits, particularly in the entorhinal cortex (EC) and hippocampus (HPC). Grid cells in the EC play a critical role in path integration, while place cells in the HPC encode specific locations. Dysfunction in these systems is increasingly linked to cognitive decline in aging and Alzheimer’s disease (AD) [1]. Early AD is characterized by EC dysfunction, including impaired neuronal activity and deficits in spatial navigation, even before neurodegeneration becomes evident [2]. Similarly, cognitive decline accompanies aging and affects navigational computations [3]. Here we investigate both kinds of deficits in a mechanistic systems-level model of spatial memory.

Methods
We extend the BB-model of spatial cognition [4] with a biologically plausible variant of the continuous attractor network (CAN) model of grid cells [5] and investigate the effect of perturbations on grid cells and the wider spatial memory system. Specifically, we investigate the stability of the network against synaptic weight variability and neuronal loss, the former (to a first approximation) more akin to age-related neural degradation, and the latter mimicking AD-associated neurodegeneration. To quantify the impact of these perturbations, we analyze the propagation of degraded spatial representations to downstream hippocampal and extra-hippocampal circuits and evaluate changes in the accuracy of self-location decoding from grid cells.
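The perturbation logic can be sketched in a few lines. Below is a minimal 1D ring-attractor toy (not the BB-model or the grid-cell CAN of [5] itself): an activity bump is held by Mexican-hat recurrence, then degraded by multiplicative weight noise (age-like) or by silencing neurons (AD-like), and the bump position is decoded with a population vector. All parameters are illustrative assumptions.

```python
import numpy as np

N = 128
theta = 2 * np.pi * np.arange(N) / N
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # ring distance
W = 1.5 * np.exp(-d**2 / 0.5) - 0.5                           # Mexican hat

def run(W, alive, steps=300, dt=0.1):
    r = np.maximum(0, np.cos(theta)) * alive   # initial bump at angle 0
    for _ in range(steps):
        r += dt * (-r + np.maximum(0, W @ r / N + 0.1))
        r *= alive                             # lesioned neurons stay silent
    return r

def decode(r):
    # Population-vector readout of the bump position (radians)
    return np.angle(np.sum(r * np.exp(1j * theta)))

rng = np.random.default_rng(1)
alive = np.ones(N)
print("intact:      ", decode(run(W, alive)))

# Age-like perturbation: multiplicative synaptic weight variability
W_noisy = W * rng.normal(1.0, 0.3, size=W.shape)
print("weight noise:", decode(run(W_noisy, alive)))

# AD-like perturbation: silence 30% of the neurons
alive_lesion = (rng.random(N) > 0.3).astype(float)
print("neuron loss: ", decode(run(W, alive_lesion)))
```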
Results
We demonstrate that our biologically plausible grid cell model can cope with neural loss and changes in synaptic weights, both of which lead to distortions of the activity pattern on the grid cell sheet. Positional decoding degrades gracefully. We also observe the propagation of distorted spatial representations to downstream areas during the imagery-associated mode of the BB-model, as well as deficits in object-location memory.
Discussion
Our model demonstrates for the first time in a mechanistic model how neurodegenerative processes affect spatial accuracy. As damaged EC populations produce distorted activity, place cells fire imprecisely and the memories formed for locations of novel objects in the environment are distorted. Due to changes in the CAN, population activity vectors can no longer provide a correct and unique code for every location in space, in contrast to the healthy system, linking our model to the spatial behavior of AD patients and aging adults.



Acknowledgements
Aleksei Zabolotnii acknowledges the DoellerLab and the Neural Computation Group. Andrej Bicanski and Christian F. Doeller acknowledge funding from the Max Planck Society.
References
1. https://doi.org/10.1126/science.aac8128
2. https://doi.org/10.1016/j.cub.2023.09.047
3. https://doi.org/10.1016/j.neuron.2017.06.037
4. https://doi.org/10.7554/eLife.33752
5. https://doi.org/10.1371/journal.pcbi.1000291
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P329: The effect of overfitting on spatial perception and flight trajectories in pigeons
Tuesday July 8, 2025 17:00 - 19:00 CEST
P329 The effect of overfitting on spatial perception and flight trajectories in pigeons

Margarita Zaleshina*1, Alexander Zaleshin2

1Moscow Institute of Physics and Technology, Moscow, Russia
2Institute of Higher Nervous Activity and Neurophysiology, Moscow, Russia

*Email: zaleshina@gmail.com

Introduction

Problems of overfitting in trained systems concern not only artificial neural networks, but also living organisms and humans. Pre-trained templates can reduce processing time, but they increase errors in real, dynamic situations. Conventional models often reuse existing templates, with distortion, addition, or prolongation, rather than forming new ones. Due to overfitting, data can be misinterpreted and relevant data can be filtered out [1].

In our work we study overfitting in pigeon flights. These birds often use accumulated knowledge and route-finding algorithms (guided by beacons, long roads, loft-like buildings) [2]. EEG activity in a familiar situation differs from brain activity in new conditions, which can be observed with neurologgers and GPS trackers [3].
Methods
We compared GPS tracks and brain activity of untrained and trained pigeons flying over landscapes with different information loads: near sea coast, over rural or urbanized areas. Source materials were selected from the Dryad Digital Repository and Movebank Data Repository.
We calculated brain activity frequencies and their changes; the standard deviation from the average flight path; the frequency of surveying (loops in trajectories); and the percentage of detectable "points of interest" (Fig. 1).
Spatial analysis of GPS tracks, detection of landscape boundaries, and detection of special points were performed using QGIS.
To identify overfitting, we computed a decrease in the flexibility of individual flights and a decrease in the power of high-frequency EEG.
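Two of these trajectory measures can be illustrated with a short, hedged Python sketch on a projected 2D GPS track; the function names, thresholds, and toy track below are assumptions, not the study's actual processing.

```python
import numpy as np

def path_deviation(track, start, goal):
    """RMS perpendicular deviation from the straight home line, a simple
    stand-in for deviation from the average flight path."""
    line = (goal - start) / np.linalg.norm(goal - start)
    rel = track - start
    perp = rel - np.outer(rel @ line, line)    # component normal to the line
    return float(np.sqrt((np.linalg.norm(perp, axis=1) ** 2).mean()))

def count_loops(track, radius=50.0, min_gap=30):
    """Surveying proxy: revisits of an earlier point after at least
    `min_gap` samples have passed (illustrative thresholds)."""
    loops, i = 0, 0
    while i < len(track):
        past = track[: max(0, i - min_gap)]
        if len(past) and np.linalg.norm(past - track[i], axis=1).min() < radius:
            loops += 1
            i += min_gap                       # skip ahead: one loop counted
        else:
            i += 1
    return loops

# Toy track: 5 km flight with sinusoidal wander plus GPS noise (meters)
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
track = np.c_[t * 5000, 300 * np.sin(4 * np.pi * t)] + rng.normal(0, 20, (500, 2))
print(path_deviation(track, track[0], track[-1]), count_loops(track))
```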
Results
Brain activity was most pronounced near the loft and least pronounced when pigeons flew along known routes over homogeneous terrain or along extended objects. Additionally, high brain activity and surveying were demonstrated by pigeons examining points of interest or moving from one type of landscape to another, even by trained pigeons.
Trained pigeons more often preferred to fly along a known track, even if it differed from the shortest route. In overfitting flights, surveying, standard deviations from the average flight track, and changes in flight direction were minimal. Overfitting flights were observed most often over rural terrain and less often in the coastal zone. In flocks, the frequency of overfitting cases increased.
Discussion
The problem of overfitting is especially significant under modern conditions of accelerated emergence and use of "big" digital data. Excessive templates and strict filters can often lead to errors or significantly limit variability. Using multilayer data sources makes it possible to accommodate and vary different planes of view, or contextual reference points, which helps reduce overfitting.
Studying pigeon flight paths demonstrates the relationship between the external environment, chosen behavior, and the internal settings of trained birds. Surveying increases the ability to navigate in dynamic situations and to find interesting locations.

In future work we plan to continue studying surveying and multilayer data exchange to reduce the overfitting problem.




Figure 1. Typical cases of pigeon flight and pigeon EEG power: untrained pigeon, trained pigeon, pigeon near a point of interest, pigeon after overfitting
Acknowledgements
-
References
1. Zaleshina, M. & Zaleshin, A. (2024). Spatial Learning and Overfitting in Visual Recognition and Route Planning Tasks. IJCCI & NCTA. 1: 576-583.
2. Blaser, N. et al. (2013). Testing Cognitive Navigation in Unknown Territories: Homing Pigeons Choose Different Targets. Journal of Experimental Biology. 216(16):3123–31.
3. Ide, K. & Takahashi, S. (2022). A Review of Neurologgers for Extracellular Recording of Neuronal Activity in the Brain of Freely Behaving Wild Animals. Micromachines.13(9):1529.

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P330: Quantitative Analysis of Artificial Intelligence Integration in Neuroscience
Tuesday July 8, 2025 17:00 - 19:00 CEST
P330 Quantitative Analysis of Artificial Intelligence Integration in Neuroscience

1Cate School, 1960 Cate Mesa Road, Carpinteria, CA, USA
2Department of Computer Science, Missouri State University, Springfield, MO, USA

*Email: trojancz@hotmail.com

Introduction

This study aimed to quantitatively assess the integration of artificial intelligence (AI) into neuroscience. By analyzing ~50,000 sample papers from the OpenAlex database [1], this study captured the breadth of AI applications across diverse disciplines of neuroscience and gauged emerging trends in research.

Methods
A dual-query strategy was applied. One query targeted neuroscience papers (2001-2022) mentioning AI‐related terms (Figure 1), while a control query used only the term “neuroscience.” An automated classification pipeline, built on a prompted GPT‑4o model [2], dynamically processed titles and abstracts and classified the papers into six categories: Behavioral Neuroscience, Cognitive Neuroscience, Computational Neuroscience, Neuroimaging, Neuroinformatics, and Unrelated to Neuroscience. Following classification, papers were aggregated by publication year and normalized via three strategies: division by totals in each discipline, division by annual OpenAlex counts, and a combination of the two. See Figure 1 for the workflow.
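The three normalization strategies can be sketched as follows. This is a hedged toy in Python/pandas with fabricated illustrative counts, and the "combined" variant is our reading of the description above, not necessarily the authors' exact formula.

```python
import pandas as pd

# Toy (year x category) counts of AI-related papers and total annual
# OpenAlex output; all numbers are made up for illustration.
counts = pd.DataFrame(
    {"Computational": [120, 150, 210], "Neuroimaging": [80, 95, 130]},
    index=[2020, 2021, 2022],
)
annual = pd.Series([2.1e6, 2.3e6, 2.5e6], index=counts.index)

by_category = counts / counts.sum(axis=0)      # share of each category's total
by_year = counts.div(annual, axis=0)           # share of all papers that year
combined = by_category.div(annual / annual.mean(), axis=0)  # both corrections

print(by_category.round(3), by_year, combined.round(3), sep="\n\n")
```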
Results
Analysis revealed a dramatic surge from 2015 to 2022 in Computational Neuroscience (12% increase per year), Neuroinformatics (18% increase per year), and Neuroimaging (10% increase per year), whereas Cognitive and Behavioral Neuroscience displayed a plateau from 2013 to 2022 with slight declines afterward (Figure 1).
Discussion
Findings underscore the heterogeneous integration of AI across neuroscience disciplines, suggesting distinct developmental trajectories and new avenues for interdisciplinary research. The surge in AI applications post-2015 appears driven by advances in computational power, algorithmic innovations, and data availability, accelerating research in Computational Neuroscience, Neuroinformatics, and Neuroimaging [3]. Conversely, the plateau in Cognitive and Behavioral Neuroscience after 2013 may reflect shifting priorities or methodological challenges. These results can guide future studies to target underexplored intersections and inform strategic investments in emerging fields.




Figure 1. Data processing and analysis workflow (left); number of publications per year (top right); yearly number of publications normalized by total publications (2001-2022) of each corresponding category.
Acknowledgements
We gratefully acknowledge the resources provided by OpenAlex and OpenAI. Their platforms enabled the data acquisition and automated classification essential to this bibliometric study.
References
[1] OpenAlex. (n.d.). OpenAlex: A comprehensive scholarly database. Retrieved from https://openalex.org
[2] OpenAI. (2024, May 13). GPT‑4o API [Large language model]. Retrieved from https://openai.com/api
[3] Tekin, U., & Dener, M. (2025). A bibliometric analysis of studies on artificial intelligence in neuroscience. Frontiers in Neurology, 16:1474484. https://doi.org/10.3389/fneur.2025.1474484
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P331: Virtual Brain Inference (VBI): A Toolkit for Probabilistic Inference in Virtual Brain Models
Tuesday July 8, 2025 17:00 - 19:00 CEST
P331 Virtual Brain Inference (VBI): A Toolkit for Probabilistic Inference in Virtual Brain Models

Abolfazl Ziaeemehr*¹, Marmaduke Woodman¹, Lia Domide², Spase Petkoski¹, Viktor Jirsa¹, Meysam Hashemi¹

¹ Aix Marseille Univ, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
² Codemart, Cluj-Napoca, Romania

*Email: abolfazl.ziaee-mehr@univmail.com


Introduction

Understanding brain dynamics requires accurate models that integrate neural activity and neuroimaging data. Virtual brain modeling has emerged as a powerful approach to simulate brain signals based on neurobiological mechanisms. However, solving the inverse problem of inferring brain dynamics from observed neuroimaging data remains a challenge. The Virtual Brain Inference (VBI) [1] toolkit addresses this need by offering a probabilistic framework for parameter estimation in large-scale brain models. VBI combines neural mass modeling with simulation-based inference (SBI) [2] to efficiently estimate generative model parameters and uncover underlying neurophysiological mechanisms.

Methods

VBI integrates structural and functional neuroimaging data to build personalized virtual brain models. The toolkit supports various neural mass models, including Wilson-Cowan, Montbrió, Jansen-Rit, Stuart-Landau, Wong-Wang, and Epileptor. Using GPU-accelerated simulations, VBI extracts key statistical features such as functional connectivity (FC), functional connectivity dynamics (FCD), and power spectral density (PSD). Deep neural density estimators, such as Masked Autoregressive Flows (MAFs) and Neural Spline Flows (NSFs), are trained to approximate posterior distributions. This SBI approach allows efficient inference of neural parameters without reliance on traditional sampling-based methods.
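The simulate-train-infer loop that VBI automates can be sketched with the open-source sbi package. The toy simulator and all parameters below are assumptions, and this is a generic SBI illustration rather than VBI's own API; the summary statistics stand in for the FC/FCD/PSD features named above.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta):
    """Toy stand-in for a neural mass model: two parameters -> a signal,
    reduced to three summary statistics. Purely illustrative."""
    t = torch.linspace(0, 10, 200)
    x = torch.exp(-theta[:, :1] * t) * torch.sin(theta[:, 1:2] * 5 * t)
    x = x + 0.05 * torch.randn_like(x)
    return torch.stack([x.mean(1), x.std(1), x.abs().max(1).values], dim=1)

prior = BoxUniform(low=torch.tensor([0.1, 0.1]), high=torch.tensor([1.0, 1.0]))
theta = prior.sample((2000,))
x = simulator(theta)

# Train a Masked Autoregressive Flow as the neural posterior estimator
inference = SNPE(prior=prior, density_estimator="maf")
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

theta_true = torch.tensor([[0.4, 0.7]])
x_obs = simulator(theta_true)
samples = posterior.sample((1000,), x=x_obs[0])
print(samples.mean(0), "vs true", theta_true[0])
```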
Results

We demonstrate VBI’s capability by applying it to simulated and real neuroimaging datasets. The probabilistic inference framework accurately reconstructs neural parameters and identifies inter-individual variability in brain dynamics. Compared to traditional methods like Markov Chain Monte Carlo (MCMC) [3] and Approximate Bayesian Computation (ABC), VBI achieves superior scalability and efficiency. Performance evaluations highlight its robustness across different brain models and noise conditions. The ability to generate personalized inferences makes VBI a valuable tool for both research and clinical applications [4], aiding in the study of neurological disorders and cognitive function. See Fig. 1 for the workflow.
Discussion

VBI provides an efficient and scalable solution for inferring neural parameters from brain signals, addressing a critical gap in computational neuroscience. By leveraging SBI and deep learning, VBI enhances the interpretability and applicability of virtual brain models. This open-source toolkit offers researchers a flexible platform for modeling, simulation, and inference, fostering advancements in neuroscience and neuroimaging research.




Figure 1. Overview of the VBI workflow: (A) A personalized connectome is constructed using diffusion tensor imaging and a brain parcellation atlas. (B) This serves as the foundation for building a virtual brain model, with control parameters sampled from a prior distribution. (C) VBI simulates time series data corresponding to neuroimaging recordings. (D) Summary statistics, including functional connectivity
Acknowledgements
This research was funded by the EU’s Horizon 2020 Programme under Grant Agreements No. 101147319 (EBRAINS 2.0), No. 101137289 (Virtual Brain Twin), No. 101057429 (environMENTAL), and ANR grant ANR-22-PESN-0012 (France 2030). We acknowledge Fenix Infrastructure resources, partially funded by the EU’s Horizon 2020 through the ICEI project (Grant No. 800858).

References
1. https://doi.org/10.1101/2025.01.21.633922
2. https://doi.org/10.1073/pnas.1912789117
3. https://doi.org/10.3150/16-BEJ810
4. https://doi.org/10.1088/2632-2153/ad6230



Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P332: Relating Input Resistance and Sodium Conductance
Tuesday July 8, 2025 17:00 - 19:00 CEST
P332 Relating Input Resistance and Sodium Conductance

Laura Zittlow*1, Erin Munro Krull1, Lucas Swanson1
1Mathematical Sciences, Ripon College, Ripon, WI, US

*E-mail: laurazittlow@gmail.com
Introduction

The sodium conductance density (gNa) determines an axon’s ability to propagate action potentials (APs). APs do not propagate if gNa is too low, while they easily do if gNa is high; therefore, there is a sodium conductance density threshold (gNaT) [1]. Preliminary results suggest the gNaT for axons with simple morphologies linearly predicts gNaT for axons with more complex morphologies [2, 3]. To address axons with very complex morphologies, we decided to compare gNaT to input resistance (Rin). Rin, defined as the ratio of steady-state voltage change to injected current, inherently accounts for the axon’s morphology and electrical properties [5].

Methods
We use NEURON simulations [4] to model Rin and AP propagation from an axon collateral to the end of the main axon. We varied the morphology of an extra side branch to see the effect of axon morphology on Rin and gNaT. For each simulation, we find Rin and gNaT. We evaluate the impact of branch location over distances of 0-6𝜆, several side-branch morphologies with lengths from 0-6𝜆, and the location and length of sub-branches.
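A hedged sketch of this procedure in NEURON's Python interface is shown below: build a main axon with one side branch, estimate Rin from a small current step, and bisect the Hodgkin-Huxley sodium conductance density for the propagation threshold. The geometry, stimulus, and bracket values are illustrative assumptions, not the study's setup.

```python
from neuron import h
h.load_file("stdrun.hoc")

def build(branch_len_um):
    # Main axon with one side branch at its midpoint (toy geometry)
    main = h.Section(name="main"); main.L, main.diam, main.nseg = 2000, 1, 101
    side = h.Section(name="side"); side.L, side.diam, side.nseg = branch_len_um, 1, 11
    side.connect(main(0.5))
    for sec in (main, side):
        sec.insert("hh")
    return main, side

def input_resistance(sec, amp=0.01):           # 0.01 nA subthreshold step
    stim = h.IClamp(sec(0)); stim.delay, stim.dur, stim.amp = 10, 200, amp
    v0 = -65.0
    h.finitialize(v0); h.continuerun(200)
    return (sec(0).v - v0) / amp               # mV / nA = MOhm

def propagates(main, gna):
    for sec in main.wholetree():
        for seg in sec:
            seg.hh.gnabar = gna
    stim = h.IClamp(main(0)); stim.delay, stim.dur, stim.amp = 1, 1, 2
    vend = h.Vector()
    vend.record(main(1)._ref_v)                # watch the distal end
    h.finitialize(-65); h.continuerun(20)
    return vend.max() > 0                      # did an AP arrive?

main, side = build(branch_len_um=500)
print("Rin ~", round(input_resistance(main), 1), "MOhm")
lo, hi = 0.01, 0.2                             # S/cm2 bracket (assumed valid)
for _ in range(20):                            # bisection for gNaT
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if propagates(main, mid) else (mid, hi)
print("gNaT ~", round(hi, 4), "S/cm2")
```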
Results
Our simulations show a 1-1 correspondence between Rin and gNaT under specific morphological changes, modeled as a smooth function. Branch location and length affect Rin and gNaT inversely, with their effects stabilizing as the distances and lengths increase. However, when a short side branch connects at the same point as the simulated branch, an abnormality, "bouncing", occurs. Because shorter side branches are easier to stimulate, the AP can temporarily move into that branch and then bounce out. If only one variable (distance or morphology) changes, the error difference is 10⁻⁴ in gNaT for a given Rin. However, if "bouncing" occurs, then the error difference is on the scale of 10⁻².
Discussion
Our results indicate Rin and gNaT respond monotonically to changes in axonal morphology unless "bouncing" occurs. This suggests Rin could serve as an alternative measure for axonal morphology when predicting gNaT, offering a computationally efficient method for estimating gNaT. However, "bouncing" disrupts the smooth relationship between Rin and gNaT by making AP propagation more likely. Moving forward, we aim to compare Rin across more complex morphologies. Additionally, we plan to curve-fit the Rin-gNaT relationship and test it against the linear estimation method and realistic axonal morphologies.




Acknowledgements
Thank you to my mentor Dr. Erin Munro Krull and the rest of the Ripon College Mathematical Sciences department for the advice and guidance. Also, thank you to Ripon College's Summer Opportunities for Advanced Research (SOAR) program and the many donors who help fund the program.
References
[1] https://doi.org/10.1152/jn.00933.2011
[2] https://doi.org/10.1186/s12868-018-0467-3
[3] https://doi.org/10.1186/s12868-018-0467-3
[4] Carnevale, N. T., & Hines, M. L. (2006). The NEURON book. Cambridge University Press.
[5] Tuckwell, H. C. (1988). Introduction to theoretical neurobiology: Volume 1. Linear cable theory and dendritic structure. Cambridge University Press.
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P333: Synaptic transmission during ischemia and recovery: a biophysical model including the complete glutamate-glutamine cycle
Tuesday July 8, 2025 17:00 - 19:00 CEST
P333 Synaptic transmission during ischemia and recovery: a biophysical model including the complete glutamate-glutamine cycle

Hannah van Susteren1, Christine R. Rose2, Hil G.E. Meijer1, Michel J.A.M. van Putten3,4

1Department of Applied Mathematics, University of Twente, Enschede, the Netherlands
2Institute of Neurobiology, Heinrich Heine University, Düsseldorf, Germany
3Clinical Neurophysiology group, Department of Science and Technology, University of Twente, Enschede, the Netherlands
4Medisch Spectrum Twente, Enschede, the Netherlands

Email: h.vansusteren@utwente.nl
Introduction

Cerebral ischemia is a condition in which blood flow and oxygen supply are restricted. Consequences range from synaptic transmission failure to (ir)reversible neuronal damage [1,2]. However, the interplay of all the different effects of ischemia on synaptic transmission remains unknown. Excitatory synaptic transmission relies on the energy-dependent glutamate-glutamine (GG) cycle, which enables glutamate recycling via the astrocyte. We have constructed a detailed biophysical model that includes the first implementation of the complete GG cycle. Our model enables us to investigate the malfunction of synaptic transmission during ischemia and during recovery.

Methods
We extend the model in [3] and consider a presynaptic neuron and astrocyte in a finite extracellular space (ECS), surrounded by an oxygen bath as a proxy for energy supply (Fig. 1A). We consider sodium, potassium, chloride and calcium ion fluxes with corresponding channels and transporters such as the sodium-potassium ATPase. To model synaptic transmission, we combine calcium-dependent glutamate release with uptake by the excitatory amino acid transporter and the GG cycle. This cycle includes glutamine synthesis, glutamine transport and glutamate synthesis. We simulate ischemia by lowering the oxygen concentration in the bath. Furthermore, we simulate candidate recovery mechanisms involved in the recovery of physiological dynamics.
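One ingredient of such a model, the oxygen dependence of the Na+/K+-ATPase, can be caricatured in a few lines. The sketch below is a deliberately reduced toy with made-up rates and units, not the paper's equations: an oxygen-gated pump opposes a passive Na+ leak, so cutting bath oxygen lets intracellular Na+ climb until the pump is restored.

```python
import numpy as np

dt = 0.01                                    # s
t = np.arange(0.0, 20.0, dt)
o2 = np.where((t > 5) & (t < 10), 0.0, 1.0)  # 5 s of anoxia

na_i, na_o = 15.0, 145.0                     # mM, intra-/extracellular Na+
trace = np.empty_like(t)
for k, ok in enumerate(o2):
    leak = 2.0 * (na_o - na_i) / na_o        # passive inward Na+ flux
    pump = 1.0 * ok * na_i / (na_i + 10.0)   # saturating, O2-gated pump
    na_i += dt * (leak - 3.0 * pump)         # 3 Na+ extruded per pump cycle
    trace[k] = na_i

print(f"baseline {trace[400]:.1f} mM, end of anoxia {trace[999]:.1f} mM, "
      f"after recovery {trace[-1]:.1f} mM")
```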
Results
We simulate severe ischemia by blocking energy supply for five minutes. In this scenario, the neuron enters a depolarization block (Fig. 1B). Repeated glutamate release and changes in ion concentrations result in toxic levels of glutamate in the ECS (Fig. 1C). The GG cycle is impaired due to malfunction of energy-dependent glutamine synthesis. Once energy supply is restored, the neuron remains depolarized and synaptic transmission disrupted. A candidate recovery mechanism is the blockade of the neuronal transient sodium channel. As a result, ion gradients recover, and glutamate clearance is restored. Electrical stimulation generates action potentials and physiological glutamate release, demonstrating full recovery of synaptic transmission.
Discussion
With our computational model that includes the first implementation of the GG cycle, we can simulate neuronal and astrocytic dynamics during ischemia and recovery. An important finding is that extreme glutamate accumulation is caused by ionic imbalances, and not only by excessive glutamate release. Furthermore, the GG cycle is disrupted due to impaired glutamine synthesis. In conclusion, our detailed model provides insight into the causes of excitatory synaptic transmission failure and suggestions for potential recovery mechanisms.





Figure 1. (A) Schematic overview of the model. (B) Membrane potentials and (C) extracellular glutamate during oxygen deprivation (grey area), sodium block (yellow area), and stimulation (dashed line).
Acknowledgements
This study was supported by the funds from the Deutsche Forschungsgemeinschaft (DFG), FOR2795 ‘Synapses under stress’.
References
1. https://doi.org/10.1016/j.neuropharm.2021.108557
2. https://doi.org/10.3389/fncel.2021.637784
3. https://doi.org/10.1371/journal.pcbi.1009019

Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

17:00 CEST

P334: Connectivity-based tau propagation and PET microglial activation in the Alzheimer’s disease spectrum
Tuesday July 8, 2025 17:00 - 19:00 CEST
P334 Connectivity-based tau propagation and PET microglial activation in the Alzheimer’s disease spectrum

Marco Öchsner1*, Matthias Brendel2,3, Nicolai Franzmeier4, Lena Trappmann5, Mirlind Zaganjori5, Ersin Ersoezlue5, Estrella Morenas-Rodriguez5,6, Selim Guersel5,6, Lena Burow5, Carolin Kurz5, Jan Haeckert5,8, Maia Tatò5, Julia Utecht5, Boris Papazov9, Oliver Pogarell5, Daniel Janowitz4, Katharina Buerger4,6, Michael Ewers4, Carla Palleis3,6,10, Endy Weidinger10, Gloria Biechele2, Sebastian Schuster2, Anika Finze2, Florian Eckenweber2, Rainer Rupprecht11, Axel Rominger2,12, Oliver Goldhardt13, Timo Grimmer13, Daniel Keeser1,5,9, Sophia Stoecklein9, Olaf Dietrich9, Peter Bartenstein2,3, Johannes Levin3,6,10, Günter Höglinger6,14, Robert Perneczky1,3,5,6,15,16 and Boris-Stephan Rauchmann1,5,6,16



1Department of Neuroradiology, LMU University Hospital, Ludwig Maximilian University of Munich, Germany
2Department of Nuclear Medicine, University Hospital, Ludwig Maximilian University of Munich, Germany
3Munich Cluster for Systems Neurology, Munich, Germany
4Institute for Stroke and Dementia Research, University Hospital, Ludwig Maximilian University of Munich, Germany
5Department of Psychiatry and Psychotherapy, LMU University Hospital, Ludwig Maximilian University of Munich, Germany
6German Center for Neurodegenerative Diseases, Munich, Germany
7Biomedical Center, Faculty of Medicine, Ludwig Maximilian University of Munich, Germany
8Department of Psychiatry, Psychotherapy, and Psychosomatics, University of Augsburg, Germany
9Department of Radiology, LMU University Hospital, Ludwig Maximilian University of Munich, Germany
10Department of Neurology, University Hospital, Ludwig Maximilian University of Munich, Germany
11Department of Psychiatry and Psychotherapy, University of Regensburg, Germany
12Department of Nuclear Medicine, University of Bern, Inselspital, Bern, Switzerland
13Department of Psychiatry and Psychotherapy, Rechts der Isar Hospital, Technical University of Munich, Germany
14Department of Neurology, Hannover Medical School, Germany
15Ageing Epidemiology Research Unit, School of Public Health, Imperial College London, United Kingdom
16Sheffield Institute for Translational Neuroscience, University of Sheffield, Sheffield

* Email: marco.oechsner@med.lmu.de



Introduction
Microglial activation is increasingly recognized as central to Alzheimer's disease spectrum (ADS) progression, potentially influencing or responding to pathological tau accumulation [1]. Recent evidence suggests microglial activation and tau pathology spread along highly interconnected brain regions, implying connectivity-driven propagation mechanisms [2]. Yet, the impact of changes in microglial activation on tau accumulation remains unclear. We aimed to determine: (a) longitudinal differences in microglial activation between ADS and healthy controls (HC), (b) relationships between microglial activation changes and tau accumulation, and (c) how these changes affect functional connectivity-based relationships between tau and microglial activation.
Methods
As part of the longitudinal ActiGliA prospective cohort study [3], [18F]GE-180 TSPO (microglia) PET, [18F]Flutemetamol (Tau) PET, resting-state fMRI, and structural MRI were acquired in ADS (n=36; defined by CSF Aβ42/Aβ40 ratio or an Aβ PET composite) and HC (n=20; with CDR=0 and no Aβ pathology) at baseline and 18-month follow-up (n=6 each). PET imaging was intensity-normalized to cerebellar gray matter, and SUVR values were extracted based on the Schaefer200 parcellation. fMRI preprocessing (fMRIPrep v1.2.1) was used to derive atlas-based, r-to-z-transformed functional connectivity matrices after filtering, smoothing, and confound regression. Group comparisons and correlations used Cohen’s d, Mann-Whitney U tests, linear regression, and Spearman’s ρ.
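The connectivity step can be sketched as follows. This hedged Python toy builds Fisher r-to-z functional connectivity matrices from simulated parcellated time series and applies the named nonparametric statistics; all data, group sizes, and the chosen ROI are purely illustrative.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

def fc_matrix(ts):                      # ts: (timepoints, n_rois)
    r = np.corrcoef(ts.T)
    np.fill_diagonal(r, 0.0)            # avoid arctanh(1) = inf
    return np.arctanh(r)                # Fisher r-to-z transform

rng = np.random.default_rng(3)
n_rois, n_tp = 200, 300                 # Schaefer200-sized toy data
ads = [fc_matrix(rng.normal(size=(n_tp, n_rois))) for _ in range(36)]
hc = [fc_matrix(rng.normal(size=(n_tp, n_rois))) for _ in range(20)]

# Mean connectivity of one ROI per subject, compared across groups
roi = 17
ads_conn = [m[roi].mean() for m in ads]
hc_conn = [m[roi].mean() for m in hc]
print(mannwhitneyu(ads_conn, hc_conn))

# Spearman correlation between ROI-wise tau SUVR and TSPO change (toy vectors)
tau, dtspo = rng.normal(size=n_rois), rng.normal(size=n_rois)
print(spearmanr(tau, dtspo))
```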
Results
Baseline TSPO was lower (d=-1.05, p<0.01) and tau higher (d=1.69, p<0.01) in ADS vs. HC. TSPO strongly correlated with tau levels in both groups (ADS: ρ=0.69, HC: ρ=0.86, p<0.01). Over 18 months, TSPO SUVRs increased significantly more in ADS compared to HC (d=2.61, p<0.01). Increased TSPO ratios (β=-1.4, ρ=-0.14, p=0.03) and ADS-HC TSPO ratio differences (β=-1.43, ρ=-0.25, p<0.01) correlated negatively with tau levels in ADS, while in HC only HC-ADS ratio differences did (β=0.47, p<0.01, ρ=0.13). In ADS, regions with high TSPO change showed significant negative connectivity-based correlations with tau (β=-2.36, ρ=-0.50, p<0.01), while high-tau regions showed only a weak connectivity-based association with TSPO ratios; these relationships were absent in HC.
Discussion
Our findings indicate a longitudinal increase in microglial activation in ADS, despite initially lower activation compared to HC. Higher baseline microglial activation correlated with tau accumulation, particularly in regions differentiating ADS from HC. However, tau levels negatively correlated with longitudinal TSPO changes, suggesting limited further microglial activation in regions already exhibiting elevated baseline activation. Although TSPO ratio changes varied across individuals, group-level connectivity relationships between regions with high TSPO changes and tau support a connectivity-mediated propagation of tau pathology modulated by microglial activation.



Acknowledgements
This study was supported by the German Center for Neurodegenerative Disorders (Deutsches Zentrum für Neurodegenerative Erkrankungen), Hirnliga (Manfred-Strohscheer Stiftung), and the German Research Foundation (Deutsche Forschungsgemeinschaft) under Germany's Excellence Strategy within the framework of the Munich Cluster for Systems Neurology (EXC 2145 SyNergy, ID 390857198).
References
1. Fan, Z., Brooks, D. J., Okello, A., & Edison, P. (2017). An early and late peak in microglial activation in Alzheimer's disease trajectory. Brain, 140(3), 792–803.
2. Pascoal, T. A., Benedet, A. L., Ashton, N. J., et al. (2021). Microglial activation and tau propagate jointly across Braak stages. Nature Medicine, 27(9), 1592–1599.
3. Rauchmann, B.-S., Brendel, M., Franzmeier, N., et al. (2022). Microglial activation and connectivity in Alzheimer disease and aging. Annals of Neurology, 92(5), 768–781.


Speakers
Tuesday July 8, 2025 17:00 - 19:00 CEST
Passi Perduti

20:10 CEST

Party
Tuesday July 8, 2025 20:10 - 22:00 CEST
Tuesday July 8, 2025 20:10 - 22:00 CEST
TBA
 