Sunday July 6, 2025 17:20 - 19:20 CEST, Passi Perduti
P015 Dynamic Causal Modelling in Probabilistic Programming Languages

Nina Baldy¹*, Marmaduke Woodman¹, Viktor Jirsa¹, Meysam Hashemi¹


¹Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France

*Email: nina.baldy@univ-amu.fr
Introduction

Dynamic Causal Modeling (DCM) [1] is a key methodology in neuroimaging for understanding the dynamics of brain activity. It provides a statistical framework for inferring causal relationships among brain regions and their responses to experimental manipulations, such as stimulation. In this work, we perform Bayesian inference on a neurobiologically plausible model that simulates the event-related potentials observed in magneto- and electroencephalography (M/EEG) data [2]. This amounts to probabilistic inference over the latent and observed states of a system described by a set of nonlinear ordinary differential equations (ODEs) with potentially correlated parameters.
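For concreteness, below is a minimal sketch of the kind of ODE system involved: a single second-order synaptic kernel driven by a sigmoidal firing-rate transform, in the style of ERP neural mass models [2]. The parameter values, the Gaussian stimulus, and all names are illustrative assumptions, not the model used in this work.

import jax.numpy as jnp
from jax.experimental.ode import odeint

def sigmoid(v, r=0.56):
    # Sigmoidal voltage-to-rate transform, centered so sigmoid(0) = 0
    return 1.0 / (1.0 + jnp.exp(-r * v)) - 0.5

def erp_rhs(state, t, H=3.25, tau=10.0):
    # x: postsynaptic potential, y: its time derivative
    x, y = state
    drive = jnp.exp(-0.5 * ((t - 30.0) / 5.0) ** 2)  # transient stimulus (illustrative)
    dy = (H / tau) * (drive + sigmoid(x)) - (2.0 / tau) * y - x / tau**2
    return jnp.stack([y, dy])

ts = jnp.linspace(0.0, 100.0, 200)                   # time grid, ms
traj = odeint(erp_rhs, jnp.array([0.0, 0.0]), ts)    # shape (200, 2)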
Methods
Central to DCM is Bayesian model inversion, which aims to infer the posterior distribution of model parameters given the priors and the observed data. Variational inference turns this into an optimization problem by approximating the posterior with a fixed-form density [3]. We consider three Gaussian approximations: a mean-field approximation, which neglects correlations between parameters; its full-rank counterpart, which retains them; and the analytical Laplace approximation. We benchmark these against a state-of-the-art Markov chain Monte Carlo (MCMC) method, the No-U-Turn Sampler (NUTS) [4]. Finally, we compare the efficiency of each method as implemented in several Probabilistic Programming Languages (PPLs) [5], measured in effective sample size per unit of computation.
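As a sketch of how these schemes line up within a single PPL, the snippet below expresses a toy nonlinear model in NumPyro and runs NUTS alongside mean-field (AutoNormal) and full-rank (AutoMultivariateNormal) variational fits. The toy model, data, and settings are assumptions for illustration, not the ERP model inversion itself.

import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal, AutoMultivariateNormal

def model(y=None):
    # Toy stand-in for model inversion: priors over parameters,
    # a nonlinear prediction, and Gaussian observation noise.
    theta = numpyro.sample("theta", dist.Normal(0.0, 1.0).expand([2]).to_event(1))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    mu = jnp.tanh(theta[0] * jnp.linspace(0.0, 1.0, 50)) + theta[1]
    numpyro.sample("obs", dist.Normal(mu, sigma), obs=y)

rng = random.PRNGKey(0)
y_obs = jnp.sin(jnp.linspace(0.0, 1.0, 50))  # placeholder data

# Gradient-based MCMC; the summary reports n_eff and r_hat per site
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000,
            num_chains=4, chain_method="sequential")
mcmc.run(rng, y=y_obs)
mcmc.print_summary()

# Mean-field vs full-rank Gaussian variational approximations
for guide in (AutoNormal(model), AutoMultivariateNormal(model)):
    svi = SVI(model, guide, numpyro.optim.Adam(1e-2), Trace_ELBO())
    svi_result = svi.run(rng, 5000, y=y_obs)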

Results
Our investigation shows that model inversion in DCM extends beyond variational approximation frameworks, demonstrating the effectiveness of gradient-based MCMC. We observe close agreement between NUTS and the full-rank variational approximation, both in posterior distributions and in model comparison. Our results show substantial improvements in effective sample size per unit of computation time, with PPL implementations outperforming traditional ones. Additionally, we propose remedies for multi-modality in posterior distributions, such as initializing chains in the tails of the prior distribution and weighted stacking [6] of chains for improved inference.
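The stacking step can be sketched as follows: given pointwise log predictive densities for each chain (computed elsewhere, e.g. via leave-one-out cross-validation), simplex weights are chosen to maximize the log density of the weighted mixture, in the spirit of [6]. The helper below and its placeholder inputs are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def stacking_weights(lpd):
    # lpd: array of shape (n_points, n_chains) holding per-point
    # log predictive densities under each chain's posterior.
    n_chains = lpd.shape[1]

    def neg_score(z):
        w = softmax(z)  # softmax keeps the weights on the simplex
        # negative log predictive density of the weighted mixture
        return -logsumexp(lpd + np.log(w), axis=1).sum()

    res = minimize(neg_score, np.zeros(n_chains), method="BFGS")
    return softmax(res.x)

lpd = np.random.default_rng(0).normal(size=(200, 4))  # placeholder data
print(stacking_weights(lpd))                          # weights sum to 1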

Discussion
Previous work on MCMC methods for Bayesian model inversion in DCM highlighted challenges with both gradient-free and gradient-based approaches [7, 8]. We find, however, that combining probabilistic modeling with high-performance computational tools offers a promising solution to the challenges posed by high-dimensional, nonlinear models in DCM. Future work should extend to whole-brain models and fMRI data, which pose additional challenges for both MCMC and variational methods.

Acknowledgements
This research has received funding from the EU’s Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreements No. 101147319 (EBRAINS 2.0 Project) and No. 101137289 (Virtual Brain Twin Project), and from a government grant managed by the Agence Nationale de la Recherche under reference ANR-22-PESN-0012 (France 2030 programme).

References
[1] https://doi.org/10.1016/S1053-8119(03)00202-7
[2] https://doi.org/10.1016/j.neuroimage.2005.10.045
[3] https://doi.org/10.1080/01621459.2017.1285773
[4] https://doi.org/10.48550/arXiv.1111.4246
[5] https://doi.org/10.1145/2593882.2593900
[6] https://doi.org/10.48550/arXiv.2006.12335
[7] https://doi.org/10.1016/j.neuroimage.2015.03.008
[8] https://doi.org/10.1016/j.neuroimage.2015.07.043

