Sunday July 6, 2025 17:20 - 19:20 CEST
P065 Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics

Michael Deistler*1,2, Kyra L. Kadhim2,3, Matthijs Pals1,2, Jonas Beck2,3, Ziwei Huang2,3, Manuel Gloeckler1,2, Janne K. Lappalainen1,2, Cornelius Schröder1,2, Philipp Berens2,3, Pedro J. Gonçalves1,2,4,5, Jakob H. Macke*1,2,6

1Machine Learning in Science, University of Tübingen, Germany
2Tübingen AI Center, Tübingen, Germany
3Hertie Institute for AI in Brain Health, University of Tübingen, Tübingen, Germany
4VIB-Neuroelectronics Research Flanders (NERF)
5imec, Belgium
6Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany

*Email: michael.deistler@uni-tuebingen.de, jakob.macke@uni-tuebingen.de

Introduction

Biophysical neuron models provide mechanistic insight into empirically observed phenomena. However, optimizing the parameters of biophysical simulations is notoriously difficult, which has prevented fitting these models to physiologically meaningful tasks or datasets. Indeed, current fitting methods for biophysical models are typically limited to a few dozen parameters [1]. At the same time, backpropagation of error (backprop) has enabled deep neural networks to scale to millions of parameters and large datasets. Unfortunately, no current toolbox for biophysical simulation can perform backprop [2], preventing any study of whether backprop could also be used to construct and train large-scale biophysical neuron models.


Methods
We built a new simulation toolbox, Jaxley, which overcomes previous limitations in constructing and fitting biophysical models. Jaxley implements the numerical solvers required for biophysical simulations in the machine learning library JAX. As a result, Jaxley can both simulate biophysical neuron models and compute the gradient of such simulations with backpropagation of error (Fig. 1a). This makes it possible to optimize thousands of parameters of biophysical models with gradient descent. In addition, Jaxley can parallelize simulations on GPUs, which speeds up simulation by at least two orders of magnitude (Fig. 1b).
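
As a minimal sketch of how this looks in practice (the function and argument names below, e.g. `ncomp` and `step_current`, follow the public Jaxley tutorials and may differ between versions):

```python
import jax
import jax.numpy as jnp
import jaxley as jx
from jaxley.channels import HH  # Hodgkin-Huxley sodium/potassium/leak channels

# Build a small morphology: compartments form a branch, branches form a cell.
comp = jx.Compartment()
branch = jx.Branch(comp, ncomp=4)
cell = jx.Cell(branch, parents=[-1, 0, 0])  # a soma branch with two children
cell.insert(HH())

# Stimulate and record the somatic voltage.
dt, t_max = 0.025, 10.0  # ms
stim = jx.step_current(i_delay=1.0, i_dur=2.0, i_amp=0.1, delta_t=dt, t_max=t_max)
cell.branch(0).loc(0.0).stimulate(stim)
cell.branch(0).loc(0.0).record("v")

# Mark the sodium conductance as trainable and differentiate the simulation.
cell.make_trainable("HH_gNa")
params = cell.get_parameters()

def loss(params):
    v = jx.integrate(cell, params=params, delta_t=dt)
    return jnp.mean(v**2)  # placeholder objective

gradient = jax.grad(loss)(params)  # backprop through the solver
```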


Results
We applied Jaxley to a range of datasets and models. First, we applied it to a series of single-neuron tasks and found that it outperforms gradient-free optimization methods (Fig. 1c). Next, we built a simplified biophysical model of the retina (Fig. 1d). We optimized synaptic and channel conductances to match dendritic calcium recordings and found that the trained model exhibits compartmentalized responses, matching experimental recordings [3]. Third, we built a recurrent neural network model with biophysically detailed neurons and trained this network on working-memory tasks. Finally, we trained a network of morphologically detailed neurons with 100k biophysical parameters to solve MNIST (Fig. 1e).
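
Continuing the sketch above, gradient-based fitting to a recording could look as follows; the target trace `v_target` and the choice of the optax Adam optimizer are illustrative assumptions, not the exact setup used for these results:

```python
import optax  # gradient-based optimizers for JAX

# Hypothetical target: a recorded voltage trace to fit (placeholder here).
v_target = jnp.zeros_like(jx.integrate(cell, params=params, delta_t=dt))

def loss(params):
    v = jx.integrate(cell, params=params, delta_t=dt)
    return jnp.mean((v - v_target) ** 2)

optimizer = optax.adam(learning_rate=1e-2)
opt_state = optimizer.init(params)
grad_fn = jax.jit(jax.value_and_grad(loss))  # compile once, reuse every step

for step in range(200):
    value, grads = grad_fn(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
```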


Discussion
Optimizing parameters of biophysically detailed models is challenging, and previous (gradient-free) methods have been limited to a few dozen parameters. We developed Jaxley, which overcomes these limitations. Jaxley implements the numerical solvers required for biophysical simulations [4], it can easily parallelize simulations on GPUs, and it can perform backprop. Together, these features make it possible to construct and optimize large neural systems with thousands of parameters. We designed Jaxley to be easy to use and provide extensive documentation to support adoption by the community. Jaxley bridges systems neuroscience and biophysics and will enable new insights and opportunities for multiscale neuroscience.
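
Because the solvers are written in pure JAX, many simulations can also be batched with `jax.vmap` and dispatched to a GPU. A minimal sketch, continuing the example above (`simulate` and its `scale` argument are hypothetical helpers):

```python
# Run many parameter settings in parallel; on a GPU this is a single batched call.
def simulate(scale):
    scaled = jax.tree_util.tree_map(lambda p: p * scale, params)
    return jx.integrate(cell, params=scaled, delta_t=dt)

batched_simulate = jax.jit(jax.vmap(simulate))
voltages = batched_simulate(jnp.linspace(0.5, 1.5, 128))  # 128 simulations at once
```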





Figure 1. (a) Jaxley can compute gradients with backprop. (b) Jaxley is as accurate as the NEURON simulator and achieves speed-ups with GPU parallelization. (c) Jaxley can identify single-neuron models, sometimes much more efficiently than a genetic algorithm. (d) A biophysical model of the mouse retina predicts dendritic calcium responses. (e) A biophysical network solves the MNIST computer vision task.
Acknowledgements
This work was supported by the German Research Foundation (DFG) through Germany’s Excellence Strategy (EXC 2064 – PN 390727645) and the CRC 1233 "Robust Vision", the German Federal Ministry of Education and Research (FKZ: 01IS18039A), the 'Certification and Foundations of Safe Machine Learning Systems in Healthcare' project, and the European Union (ref. 101089288, ref. 101039115).

References
[1] Van Geit, W., De Schutter, E., & Achard, P. (2008). Automated neuron model optimization techniques: a review. Biological Cybernetics, 99, 241-251.
[2] Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179-1209.
[3] Ran, Y., Huang, Z., Baden, T., Schubert, T., Baayen, H., Berens, P., ... & Euler, T. (2020). Type-specific dendritic integration in mouse retinal ganglion cells. Nature Communications, 11(1), 2101.
[4] Hines, M. (1984). Efficient computation of branched nerve equations. International Journal of Bio-medical Computing, 15(1), 69-76.