Tuesday July 8, 2025, 17:00 - 19:00 CEST, Passi Perduti
P241 Parameter Estimation in Differentiable Whole Brain Networks: Methodological Explorations and Practical Limitations

Marius Pille* ¹ ², Emilius Richter¹ ², Leon Martin¹ ², Dionysios Perdikis¹ ², Michael Schirner¹ ² ³ ⁴ ⁵, Petra Ritter¹ ² ³ ⁴ ⁵

¹ Berlin Institute of Health (BIH) at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
² Department of Neurology with Experimental Neurology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, 10117, Berlin, Germany
³ Bernstein Focus State Dependencies of Learning and Bernstein Center for Computational Neuroscience, 10115, Berlin, Germany
⁴ Einstein Center for Neuroscience Berlin, Charitéplatz 1, 10117, Berlin, Germany
⁵ Einstein Center Digital Future, Wilhelmstraße 67, 10117, Berlin, Germany

*Email: marius.pille@bih-charite.de

Introduction

Connectome-based brain network modelling, facilitated by platforms such as The Virtual Brain (TVB), has significantly advanced computational neuroscience by providing a framework to decipher the intricate dynamics of the brain. However, existing techniques for inferring physiological parameters from neuroimaging data, such as functional magnetic resonance imaging, magnetoencephalography, and electroencephalography, are often constrained by computational cost, developer effort, and limited available data, creating roadblocks to clinical translation [1].

Methods

Differentiable models [2] address these limitations by enabling the application of state-of-the-art parameter estimation techniques from machine learning, in particular the family of stochastic gradient descent optimizers. We reformulated brain network models using highly optimized differentiable libraries, creating generalized, composable building blocks for complex modeling problems. We tested this approach across different types of neural mass models and neuroimaging data types to demonstrate its advantages and limitations; a minimal sketch of the core idea follows.
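
To illustrate the principle, the sketch below (an illustrative assumption, not the authors' implementation: the toy linear-tanh neural mass, all names, and all values are hypothetical) expresses a small brain network model in JAX and recovers a global coupling parameter g from a synthetic time series by gradient descent:

import jax
import jax.numpy as jnp

def simulate(g, W, x0, n_steps, dt=0.01, tau=1.0):
    """Euler-integrate the network ODE dx/dt = (-x + g * W @ tanh(x)) / tau."""
    def step(x, _):
        x_next = x + dt * (-x + g * (W @ jnp.tanh(x))) / tau
        return x_next, x_next
    _, traj = jax.lax.scan(step, x0, None, length=n_steps)
    return traj  # shape (n_steps, n_regions)

def loss(g, W, x0, target):
    # Mean squared mismatch between simulated and "observed" activity.
    traj = simulate(g, W, x0, target.shape[0])
    return jnp.mean((traj - target) ** 2)

n = 8  # number of brain regions
W = jax.random.uniform(jax.random.PRNGKey(0), (n, n)) / n  # connectome stand-in
x0 = jax.random.normal(jax.random.PRNGKey(1), (n,))        # nonzero initial state
target = simulate(1.5, W, x0, 200)  # synthetic observation with true g = 1.5

grad_fn = jax.jit(jax.grad(loss))   # gradient of the loss w.r.t. g via autodiff
g = 0.5                             # initial guess
for _ in range(500):
    g = g - 0.1 * grad_fn(g, W, x0, target)  # plain gradient descent
print("recovered g:", float(g))

Because the Euler integration is written with jax.lax.scan, the entire simulation is differentiable end to end and runs unchanged on CPU or GPU; the same pattern extends to richer neural mass models and to optimizers beyond plain gradient descent.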

Results

Our differentiable framework demonstrates performance improvements of one to two orders of magnitude over classical TVB implementations, with the added benefit of easy parallelization across devices such as GPUs. By leveraging a computational knowledge base for brain simulation [3], our approach preserves flexibility while accommodating diverse neural mass models. To enhance accessibility, we established documented workflows for the most common modeling problems, building from low to high complexity. We also explore a key limitation of differentiable models: near bifurcation points, gradients can become unstable. Potential solutions are proposed, drawing on techniques developed for training classical recurrent neural networks [4], as sketched below.
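
One such technique is global gradient-norm clipping, proposed by Pascanu et al. [4] for recurrent networks. The following sketch (an illustrative assumption, not the authors' pipeline: optax is one common optimizer library for JAX, and the clipping threshold and placeholder loss are arbitrary) shows how clipping slots into a gradient-based estimation loop:

import jax
import jax.numpy as jnp
import optax

def loss_fn(params, target):
    # Hypothetical placeholder objective; in practice, the mismatch
    # between simulated and observed neuroimaging time series.
    return jnp.mean((params["g"] * target - target) ** 2)

# Rescale gradients whose global norm exceeds 1.0, then apply Adam.
optimizer = optax.chain(
    optax.clip_by_global_norm(1.0),
    optax.adam(learning_rate=1e-2),
)

params = {"g": jnp.array(0.5)}   # parameter(s) to estimate
opt_state = optimizer.init(params)
target = jnp.ones(100)           # stand-in for observed data

@jax.jit
def update(params, opt_state):
    grads = jax.grad(loss_fn)(params, target)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state

for _ in range(200):
    params, opt_state = update(params, opt_state)
print("estimated g:", float(params["g"]))

Clipping rescales any gradient whose global norm exceeds the threshold, bounding the size of parameter updates even where the loss surface steepens sharply near a bifurcation.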

Discussion

This work aims to contribute to the translation of brain network models from foundational research to clinical applications by addressing existing roadblocks [1]. By creating reusable, composable components rather than one-off solutions, we provide a versatile framework that can adapt to diverse research questions. The substantial performance improvements enable more complex hypotheses to be tested and may bring computational neuroscience tools closer to practical clinical use.

Acknowledgements
I would like to express my sincere gratitude to my supervisors for their continuous feedback and valuable advice on this work. Special thanks to Petra Ritter for her guidance and for providing all the necessary resources that made this research possible.

References
[1] Fekonja, L. S., et al. (2025). Translational network neuroscience: Nine roadblocks and possible solutions. Network Neuroscience, 1–19. https://doi.org/10.1162/netn_a_00435
[2] Sapienza, F., et al. (2024). Differentiable Programming for Differential Equations: A Review. arXiv. https://doi.org/10.48550/arXiv.2406.09699
[3] Martin, L., et al. (in preparation). The Virtual Brain Ontology: A computational knowledge space generating reproducible models of brain network dynamics.
[4] Pascanu, R., et al. (2013). On the difficulty of training Recurrent Neural Networks. arXiv. https://doi.org/10.48550/arXiv.1211.5063