P207 From evolution to creation of spiking neural networks using graph-based rules
Yaqoob Muhammad1, Emil Dmitruk1, Volker Steuber1, Shabnam Kadir*1
1Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom
*Email: s.kadir2@herts.ac.uk

Introduction
Predicting dynamics from the synaptic weights of a neural network, and vice versa, is a difficult problem. There has been some success with Hopfield networks [1] and with combinatorial threshold-linear networks (CTLNs), a rate model [2], but not, to our knowledge, with spiking neural networks (SNNs). Usually, weights are obtained by a process of training, e.g. using spike-timing-dependent plasticity (STDP), surrogate gradient descent, or evolutionary algorithms.
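For comparison, the Hopfield case [1] shows what a direct weight-to-function rule can look like: the weights are written down from the desired fixed points via the Hebbian outer-product rule, with no training loop. The short sketch below is purely illustrative, with arbitrarily chosen sizes, and is not part of the present work; it stores random ±1 patterns and checks that one of them is a fixed point of the dynamics.

    import numpy as np

    # Hopfield network [1]: weights follow directly from the stored patterns
    # via the Hebbian outer-product rule W = (1/N) * sum_mu xi_mu xi_mu^T.
    rng = np.random.default_rng(0)
    N, P = 100, 5                              # neurons, stored patterns (arbitrary sizes)
    xi = rng.choice([-1, 1], size=(P, N))      # random +/-1 patterns
    W = (xi.T @ xi) / N                        # weights prescribed, not trained
    np.fill_diagonal(W, 0.0)                   # symmetric, no autapses (classic assumptions)

    def run(state, steps=10):
        """Synchronous sign updates; stored patterns should be (near-)fixed points."""
        for _ in range(steps):
            state = np.sign(W @ state)
            state[state == 0] = 1
        return state

    print("pattern 0 recovered:", np.array_equal(run(xi[0].copy()), xi[0]))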
In contrast, here we look at small spiking neural networks, as initiated in [3], and formulate rules for the direct selection of both network topology and synaptic weights. This can reduce or eliminate the need for training or for using genetic algorithms to derive weights.

Methods
We illustrate our approach on networks of minimal size consisting of adaptive exponential integrate-and-fire (AdEx) [6] neurons, where the aim is for the network to recognise an input pattern of length k consisting of distinct letters, following on from the work in [3]. The network must accept only this single pattern out of the k^k possible patterns. The network has k interneurons and one output neuron. In our initial experiments [4], a genetic algorithm was used to evolve both the topology and the connection weights of SNNs encoded as linear genomes [5]. In [4], for k = 3, 33 out of 100 independent evolutionary runs of 1000 generations each yielded perfect recognisers for a pattern of three signals.

Results
For k > 6 we used patterned matrices of the form shown in Figure 1. The structure can be described as a tree whose leaves are groups of nodes (matrix entries) that are all positive or all negative, together with a few key components that must take either maximal or minimal weights. Using our new method we obtained networks that performed perfectly for up to k = 10 (at the time of submission of this abstract). There appear to be no obstructions to the approach working for arbitrarily large k. With randomly chosen weights and k = 6, evolution using a genetic algorithm took 500 generations before a perfect recogniser was found. In contrast, our approach using handcrafted topologies and weights required either no generations at all or far fewer.

Discussion
Our results are still conjectures based on observation, but they indicate that for SNNs there may be graph-based rules relating synaptic weights to function, and that this relationship is highly deterministic. Unlike previous approaches, we do not impose any restrictions on the form of the connectivity matrix: we do not need it to be symmetric, as is required for stable fixed points of Hopfield networks, and we allow both excitatory and inhibitory connections as well as autapses. This is a first step towards developing a theory of modularity for SNNs, i.e. enabling the glueing of such networks whilst preserving their properties, analogous to what was achieved for a variety of attractor types for CTLNs [2].
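For readers unfamiliar with the neuron model, the sketch below integrates a single AdEx neuron [6] with forward Euler, using the standard parameter set from that paper and an arbitrary step current; the network topology, synapse model and input encoding of the recogniser networks are not reproduced here, so all values are illustrative only.

    import math

    # Forward-Euler integration of a single AdEx neuron [6].
    # Parameters are the standard values from that paper, not those of the
    # recogniser networks; the step current I_ext is an arbitrary choice.
    C, g_L, E_L = 281.0, 30.0, -70.6        # pF, nS, mV
    V_T, Delta_T = -50.4, 2.0               # mV
    a, tau_w, b = 4.0, 144.0, 80.5          # nS, ms, pA
    V_reset, V_cut = -70.6, -30.0           # mV (reset value, numerical spike cutoff)
    dt, T, I_ext = 0.1, 500.0, 1000.0       # ms, ms, pA

    V, w, spikes = E_L, 0.0, []
    for i in range(int(T / dt)):
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
              - w + I_ext) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_cut:                      # spike: reset V, increment adaptation
            V, w = V_reset, w + b
            spikes.append(i * dt)

    print(f"{len(spikes)} spikes in {T:.0f} ms")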
Figure 1. A) Sample connectivity matrix pattern for k = 10. The weights in the connectivity matrix have been ordered, with negative and positive weights given by negative and positive integers respectively. B) Weight distribution (indexed by ordering from -25 to 30) in 10 matrices recognising the same pattern of length 10. C) Network activity for the sequence ABCDEFGHIJ, with 10 interneurons and 1 output neuron.

Acknowledgements
This research received no external funding.

References
[1] https://doi.org/10.1073/pnas.79.8.2554
[2] https://doi.org/10.1137/22M1541666
[3] https://doi.org/10.1101/2023.11.16.567361
[4] https://doi.org/10.1162/isal_a_00121
[5] https://doi.org/10.1007/978-3-319-06944-9_10
[6] https://doi.org/10.1152/jn.00686.2005