P144 Event-driven eligibility propagation: combining efficiency with biological realism
Agnes Korcsak-Gorzo*1,2, Jesús A. Espinoza Valverde3, Jonas Stapmanns4, Hans Ekkehard Plesser5,1,6, David Dahmen1, Matthias Bolten3, Sacha J. van Albada1,7, Markus Diesmann1,2,8,9
1Institute for Advanced Simulation (IAS-6), Computational and Systems Neuroscience, Forschungszentrum Jülich, Jülich, Germany
2Fakultät 1, RWTH Aachen University, Aachen, Germany
3Department of Mathematics and Science, University of Wuppertal, Wuppertal, Germany
4Department of Physiology, University of Bern, Bern, Switzerland
5Department of Data Science, Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
6Käte Hamburger Kolleg, RWTH Aachen University, Aachen, Germany
7Institute of Zoology, University of Cologne, Cologne, Germany
8JARA-Institute Brain Structure-Function Relationships (INM-10), Forschungszentrum Jülich, Jülich, Germany
9Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
*Email: a.korcsak-gorzo@fz-juelich.de
Introduction
Understanding the neurobiological computations underlying learning is aided by simulations, which serve as a bridge between experimental findings and theoretical models. Recently, several biologically plausible learning algorithms have been proposed for training spiking recurrent neural networks, achieving performance comparable to backpropagation through time (BPTT) [1]. In this work, we adapt one such learning rule, eligibility propagation (e-prop) [2], to NEST, a spiking neural network simulator optimized for large-scale simulations.
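In the notation of [2], e-prop approximates the gradient of the loss E with respect to a synaptic weight as a sum over time steps of the product of a learning signal and an eligibility trace that is computable locally at the synapse:

\frac{dE}{dW_{ji}} \approx \sum_t L_j^t \, e_{ji}^t ,

where L_j^t is the learning signal delivered to postsynaptic neuron j and e_{ji}^t is the eligibility trace of the synapse from neuron i to neuron j.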
Methods
To improve computational efficiency and enable large-scale simulations, we replace the original time-driven synaptic updates, executed at every time step, with an event-driven approach in which synapses are updated only when activated by a spike (see the sketch below). This requires storing the e-prop history between weight updates; with optimized history management, we significantly reduce the associated memory and computational overhead. Additionally, we replace components inspired by machine learning with biologically plausible mechanisms and extend the model with continuous dynamics, strict locality, sparse connectivity, and approximations that eliminate vanishing terms, further enhancing computational efficiency.
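Because the weight change is a plain sum over time steps, deferring updates to spike events is exact: integrating the archived history segment by segment at spike times yields the same total change as updating at every step. The following minimal Python sketch illustrates this equivalence; the random surrogate histories and all variable names are illustrative assumptions, not the NEST implementation.

import numpy as np

rng = np.random.default_rng(0)
T = 100      # time steps in one learning period
lr = 1e-3    # learning rate

# Surrogate histories a synapse would archive between updates:
# e[t]: eligibility trace of the synapse, L[t]: learning signal
# delivered to the postsynaptic neuron (both random stand-ins here).
e = rng.normal(size=T)
L = rng.normal(size=T)

# Time-driven reference: apply the plasticity rule at every step.
w_time = 0.0
for t in range(T):
    w_time -= lr * L[t] * e[t]

# Event-driven variant: consume the archived history only when a
# presynaptic spike (or the end of the learning period) triggers an update.
spikes = [20, 57, 99]  # update times; the last one closes the period
w_event = 0.0
last = 0
for t_spike in spikes:
    w_event -= lr * np.sum(L[last:t_spike + 1] * e[last:t_spike + 1])
    last = t_spike + 1

assert np.isclose(w_time, w_event)  # identical total weight change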
Results
We demonstrate that our event-driven weight update scheme accurately reproduces the behavior of the original time-driven e-prop model (see Fig. 1) while significantly reducing computational costs, particularly in biologically realistic settings with sparse activity. We validate the approach on various biologically motivated regression and classification tasks, including neuromorphic MNIST [3]. Learning performance and computational efficiency remain comparable to those of the original model despite the incorporation of biologically inspired features. Strong- and weak-scaling experiments confirm the scalability of our implementation, which supports networks of up to millions of neurons.
Discussion
By integrating biologically enhanced e-prop plasticity into an established open-source spiking neural network simulator with a broad and active user base, we aim to facilitate large-scale learning experiments. Additionally, this work provides a foundation for implementing other three-factor learning rules from the extensive literature in an event-driven manner. By bridging AI and computational neuroscience, our approach has the potential to enable large-scale AI networks to leverage energy-efficient biological mechanisms.
Figure 1. Implementation of event-driven e-prop demonstrated on a temporal pattern generation task. Learning occurs through updates to input, recurrent, and output synapses. The upper middle plot illustrates the correspondence between the event-driven and time-driven e-prop models.
Acknowledgements
This work was supported by Joint Lab SMBH; HiRSE_PS; NeuroSys (Clusters4Future, BMBF, 03ZU1106CB); EU Horizon 2020 Framework Programme for Research and Innovation (945539, Human Brain Project SGA3) and Europe Programme (101147319, EBRAINS 2.0); computing time on JURECA (JINB33) via JARA Vergabegremium at FZJ; and Käte Hamburger Kolleg: Cultures of Research (c:o/re), RWTH Aachen (BMBF, 01UK2104).
References
1. Werbos, P. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10), 1550–1560.
2. Bellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., & Maass, W. (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1), 3625.
3. Orchard, G., Jayawant, A., Cohen, G., & Thakor, N. (2015). Converting static image datasets to spiking neuromorphic datasets using saccades. Frontiers in Neuroscience, 9, 437.