Monday July 7, 2025 16:20 - 18:20 CEST, Passi Perduti
P218 Using AI Technologies To Boost the Simulation and Personalization of Brain Network Models

Alessandro Fasse1, Chiara Billi*1, Taylor Newton1, Esra Neufeld1

1 Foundation for Research on Information Technologies in Society (IT’IS), Zurich, Switzerland

*Email: billi@zmt.swiss

Introduction

Neural mass models (NMMs) approximate the collective dynamics of neuronal populations through aggregated state variables, facilitating large-scale brain network simulations and the calculation of in silico electrophysiological signals. Although NMMs are useful for exploring emergent network phenomena across brain states, scaling them to whole-brain simulations imposes steep computational costs. To address this, we leveraged the inherent parallelism of NMM computations and their AI-like computational structure to develop a GPU-accelerated framework based on computation graphs.

Methods
Our framework simulates networks of Jansen-Rit (JR)-type NMMs [1] in both region- and surface-based configurations, supporting weight-matrix-based inter-region connectivity, the definition of local coupling, and stochastic integration. All JR model state variables are consolidated into a single PyTorch [2] tensor, enabling efficient batch processing across multiple GPUs with mixed-precision (16-, 32-, and 64-bit) support. Structuring the entire computation as a graph provides access to automatic differentiation, allowing gradient tracking throughout the simulation and gradient-based optimization of model parameters.
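To make the computational pattern concrete, below is a minimal sketch, not the authors' implementation, of how a network of JR nodes can be advanced with stochastic (Euler-Maruyama) integration in PyTorch, with all state variables held in a single (6, N) tensor so that every operation is batched on the GPU and remains differentiable. The parameter values are the standard JR defaults; function and variable names (e.g., jr_step) are illustrative assumptions.

import torch

def jr_step(y, W, p, dt, G=0.1, noise_std=0.0):
    # One stochastic Euler step for N Jansen-Rit nodes.
    # y: (6, N) tensor holding all state variables of the network
    # W: (N, N) structural connectivity (weight) matrix
    # p: (N,) external input to the excitatory interneuron population
    A, B, a, b = 3.25, 22.0, 100.0, 50.0                # synaptic gains and rate constants
    C = 135.0
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C     # intra-column connectivity constants
    e0, v0, r = 2.5, 6.0, 0.56                          # sigmoid parameters

    def S(v):                                           # population firing-rate sigmoid
        return 2.0 * e0 / (1.0 + torch.exp(r * (v0 - v)))

    y0, y1, y2, y3, y4, y5 = y                          # unpack the consolidated state tensor
    psp = y1 - y2                                       # pyramidal membrane potential
    coupling = G * (W @ S(psp))                         # weight-matrix inter-region coupling

    dy = torch.stack([
        y3,
        y4,
        y5,
        A * a * S(psp) - 2.0 * a * y3 - a ** 2 * y0,
        A * a * (p + C2 * S(C1 * y0) + coupling) - 2.0 * a * y4 - a ** 2 * y1,
        B * b * C4 * S(C3 * y0) - 2.0 * b * y5 - b ** 2 * y2,
    ])
    noise = noise_std * torch.randn_like(y) * dt ** 0.5  # simplified additive noise on all states
    return y + dt * dy + noise

# Usage: the whole loop stays on the GPU and, since every operation is a
# torch op, the full simulation is part of one computation graph.
N, dt, steps = 200, 1e-4, 1000
device = "cuda" if torch.cuda.is_available() else "cpu"
W = torch.rand(N, N, device=device)
y = torch.zeros(6, N, device=device)
p = torch.full((N,), 120.0, device=device)
for _ in range(steps):
    y = jr_step(y, W, p, dt, noise_std=1.0)

Because the update is expressed entirely in torch operations, mixed-precision execution and multi-GPU batching could follow from PyTorch's standard autocast and data-parallel mechanisms.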
Results
We compared our framework’s performance to The Virtual Brain (TVB) [3], a widely used library for whole-brain NMM simulations. For modeling 10 seconds of stochastic activity in a 20k-node network, our method completed the task in 20 seconds, compared to 18 minutes with TVB, an approximately 55-fold speedup. The implementation was verified, e.g., by reproducing results from [4] on sharp transitions in network synchronization (measured by the Kuramoto order parameter) as a function of the global coupling coefficient.
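The synchronization measure is the Kuramoto order parameter, R(t) = |(1/N) sum_j exp(i * theta_j(t))|, which approaches 1 for fully phase-locked nodes and 0 for incoherent activity. Below is a brief sketch of how it can be computed from simulated node activity; extracting phases from the analytic (Hilbert) signal is an assumption, as the abstract does not specify that step.

import numpy as np
from scipy.signal import hilbert

def kuramoto_order(signals):
    # signals: (N, T) array of per-node time series; returns R(t) with shape (T,)
    phases = np.angle(hilbert(signals, axis=1))        # instantaneous phase per node
    return np.abs(np.exp(1j * phases).mean(axis=0))    # magnitude of the mean phase vector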
Discussion
Our findings highlight the feasibility of large-scale, high-fidelity neural mass simulations with runtimes suitable for online or iterative workflows. By leveraging computation graphs for parallel processing and automatic differentiation, the framework opens avenues for gradient-based parameter fitting and real-time state estimation. The replication of established emergent phenomena supports the model’s validity and suggests broader applications, including pathological and adaptive networks. Future work will extend these capabilities to other NMM types and explore integration with multi-scale brain models.
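As an illustration of the gradient-based fitting such a computation graph enables, the following hedged sketch treats the global coupling G as a learnable parameter and optimizes it against a recorded target trace with Adam. The target, loss, and optimizer choice are assumptions for illustration; jr_step and the network variables (N, W, p, dt, steps, device) refer to the sketch in the Methods section.

import torch

target = torch.zeros(steps, device=device)                  # placeholder target trace (assumption)
G = torch.tensor(0.05, device=device, requires_grad=True)   # learnable global coupling
opt = torch.optim.Adam([G], lr=1e-3)

for epoch in range(50):
    y = torch.zeros(6, N, device=device)
    trace = []
    for _ in range(steps):                                   # the simulation itself is differentiated
        y = jr_step(y, W, p, dt, G=G)
        trace.append((y[1] - y[2]).mean())                   # mean pyramidal membrane potential
    loss = torch.mean((torch.stack(trace) - target) ** 2)
    opt.zero_grad()
    loss.backward()                                          # gradients flow through every time step
    opt.step()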



Acknowledgements
No acknowledgements.
References
[1] https://doi.org/10.1016/j.neuroimage.2023.119938
[2] Paszke A, Gross S, Chintala S, Chanan G, Yang E, DeVito Z, Lin Z, Desmaison A, Antiga L, Lerer A (2017). Automatic differentiation in PyTorch.
[3] https://doi.org/10.3389/fninf.2013.00010
[4] https://doi.org/10.1371/journal.pone.0292910

