Monday July 7, 2025 16:20 - 18:20 CEST, Passi Perduti
P223 Auto-adjoint method for Spiking Neural Networks

Thomas Nowotny*1, James C. Knight1

1School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom

*Email: t.nowotny@sussex.ac.uk


Introduction
Efficient training of spiking neural networks (SNNs) is essential for the success of both neuromorphic computing and computational neuroscience. In 2021, Wunderlich and Pehle published the Eventprop algorithm [1], which is based on the adjoint method for hybrid continuous-discrete systems [2]. Eventprop casts the backward pass, which calculates the gradient of a loss function over an SNN, as a hybrid continuous-discrete system of the same nature as the forward dynamics of the SNN. Eventprop can therefore be implemented efficiently both in existing SNN software simulators [3] and on digital neuromorphic hardware [4].
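
To sketch the structure (schematically, following [1], and omitting the jump terms applied at the recorded spike times): between spikes, a network with state x and free dynamics dx/dt = f(x) has adjoint dynamics

\[ \dot{\lambda} = -\left(\frac{\partial f}{\partial x}\right)^{\top} \lambda , \qquad \frac{\partial \mathcal{L}}{\partial w_{ji}} = -\tau_{\mathrm{syn}} \sum_{t_k \,\in\, \text{spikes of } j} \lambda_{I,i}(t_k) , \]

where the adjoint equations are integrated backwards in time from t = T and, for LIF neurons with exponential synapses, the weight gradients are read off from the adjoint of the synaptic current at the presynaptic spike times.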


Methods

Here, we present new work that takes Eventprop to the next level. The original Eventprop algorithm [1] was derived explicitly for leaky integrate-and-fire (LIF) neurons with "exponential" synapses. The adjoint method for hybrid systems is much more general [2], and a more general set of equations was already presented in [5]. Here, we choose a level of generality that allows us to derive the general adjoint equations in a form explicit enough that the SymPy symbolic mathematics package for Python can automatically generate code to simulate them.
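
As an illustration of the symbolic manipulation involved (a minimal sketch under simplifying assumptions, not the actual mlGeNN code-generation pipeline), the continuous part of the adjoint system can be derived from a user-supplied right-hand side with SymPy as follows; the spike-time jump and gradient-accumulation terms are obtained analogously from the threshold, reset and synaptic-jump functions:

    import sympy as sp

    # State and adjoint variables of a LIF neuron with an exponential synapse
    V, I = sp.symbols("V I", real=True)
    lam_V, lam_I = sp.symbols("lambda_V lambda_I", real=True)
    tau_mem, tau_syn = sp.symbols("tau_mem tau_syn", positive=True)

    x = sp.Matrix([V, I])                    # state vector
    f = sp.Matrix([(-V + I) / tau_mem,       # dV/dt
                   -I / tau_syn])            # dI/dt

    # Free (between-spike) adjoint dynamics, d(lambda)/dt = -(df/dx)^T lambda,
    # to be integrated backwards in time
    lam = sp.Matrix([lam_V, lam_I])
    adjoint_rhs = sp.simplify(-f.jacobian(x).T * lam)
    print(adjoint_rhs)
    # -> Matrix([[lambda_V/tau_mem], [lambda_I/tau_syn - lambda_V/tau_mem]])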

Results
We assume that the neurons of the SNN being trained have internal dynamics described by ordinary differential equations, and that their spiking condition and reset behaviour are described by functions of the neuron variables. Finally, we assume that the action triggered by an incoming spike consists of adding to a neuron variable. Under these general assumptions, we derived a backward pass for adjoint variables, analogous to the original Eventprop, and implemented it in our mlGeNN spike-based machine learning framework [6] using SymPy. For leaky integrate-and-fire neurons with exponential synapses, we observe that the new framework achieves the same performance on popular benchmarks as the previous implementation of standard Eventprop.
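
As an illustration, a model satisfying these assumptions could be stated symbolically along the following lines (a hypothetical, purely illustrative description; the names do not reflect the released mlGeNN interface), from which the matching forward and adjoint code can then be generated:

    import sympy as sp

    # Hypothetical description of an adaptive LIF neuron with exponential synapse:
    # ODE dynamics, a threshold condition, a reset, and a variable to which
    # incoming spikes add their weight (all expressions are SymPy-parsable strings).
    adaptive_lif = {
        "dynamics": {"V": "(-V + I - A) / tau_mem",
                     "A": "-A / tau_adapt",
                     "I": "-I / tau_syn"},
        "threshold": "V - 1.0",              # spike when this crosses zero
        "reset": {"V": "0.0", "A": "A + beta"},
        "jump_var": "I",                     # an incoming spike adds w to I
        "params": {"tau_mem": 20.0, "tau_adapt": 200.0,
                   "tau_syn": 5.0, "beta": 0.1}}

    # Parse with explicit symbols (so that "I" is a state variable,
    # not SymPy's imaginary unit), ready for adjoint derivation as above
    syms = {s: sp.Symbol(s) for s in
            ["V", "A", "I", "tau_mem", "tau_adapt", "tau_syn", "beta"]}
    rhs_V = sp.sympify(adaptive_lif["dynamics"]["V"], locals=syms)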


Discussion

We have created a new version of mlGeNN that, based on the generalised Eventprop method presented here, allows researchers to rapidly train SNNs with virtually any neuron dynamics using gradient descent with exact gradients. This includes more complex dynamics, such as Hodgkin-Huxley conductance-based models, opening new avenues for injecting function into computational neuroscience models. This new capability is akin to the auto-diff functionality of PyTorch, which has been instrumental in the recent AI revolution.



Acknowledgements
This work was partially funded by the EPSRC, grants EP/V052241/1 and EP/S030964/1.

References
[1] Wunderlich, T. C., & Pehle, C. (2021). Scientific Reports, 11(1), 12829.
[2] Galán, S., Feehery, W. F., & Barton, P. I. (1999). Appl. Num. Math., 31(1), 17-47.
[3] Nowotny, T., Turner, J. P., & Knight, J. C. (2025). Neurom. Comput. Eng., 5(1), 014001.
[4] Béna, G., Wunderlich, T., Akl, M., Vogginger, B., Mayr, C., & Gonzalez, H. A. (2024). arXiv preprint arXiv:2412.15021.
[5] Pehle, C. G. (2021). Adjoint equations of spiking neural networks (Doctoral dissertation).
[6] Turner, J. P., Knight, J. C., Subramanian, A., & Nowotny, T. (2022). Neurom. Comput. Eng., 2(2), 024002.
