P225 Time-to-first-spike encoding in layered networks evokes label-specific synfire chain activity
Jonas Oberste-Frielinghaus1,2, Anno C. Kurth1, Julian Göltz3,4, Laura Kriener5,4, Junji Ito*1, Mihai A. Petrovici4, Sonja Grün1,6,7
1Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
2RWTH Aachen University, Aachen, Germany
3Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
4Department of Physiology, University of Bern, Bern, Switzerland
5Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
6JARA Brain Institute I (INM-10), Jülich Research Centre, Jülich, Germany
7Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
*Email: j.ito@fz-juelich.de
Introduction
While artificial neural networks (ANNs) have achieved remarkable success in various tasks, they lack two major characteristic features of biological neural networks: spiking activity and operation in continuous time. This makes it difficult to leverage knowledge about ANNs to gain insights into the computational principles of real brains. However, training methods for spiking neural networks (SNNs) have recently been developed to create functional SNN models [1]. In this study, we analyze the activity of a multilayer feedforward SNN trained for image classification and uncover the structures in both connectivity and dynamics that underlie its functional performance.
Methods
Our network is composed of an input layer (784 neurons), 4 hidden layers (300 excitatory and 100 inhibitory neurons in each layer), and an output layer (10 neurons). We trained it with backpropagation to classify the MNIST dataset, using time-to-first-spike coding: each neuron encodes information in the timing of its first spike, and the first neuron to spike in the output layer defines the inferred class of the input image [1]. The MNIST input is also provided as spike times: dark pixels spike early, lighter pixels later. From the connection weights after training, we identify, in each layer, the neurons that have strong excitatory effects on each of the output neurons. Note that one neuron can have strong effects on multiple output neurons.
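The input encoding described above can be illustrated with a minimal sketch. The function below assumes a simple linear mapping from pixel darkness to first-spike time (the actual encoding parameters of the trained network are not specified in this abstract); `t_min`, `t_max`, and the silent-pixel convention are illustrative assumptions.

```python
import numpy as np

def ttfs_encode(image, t_min=0.0, t_max=50.0):
    """Hypothetical time-to-first-spike encoding of an image.

    image: pixel darkness values in [0, 1], where 1 = darkest.
    Dark pixels spike early (near t_min), lighter pixels later;
    fully blank pixels (0) never spike (time = inf).
    Returns an array of first-spike times (arbitrary time units).
    """
    image = np.asarray(image, dtype=float).ravel()
    # Linear map: darkness 1 -> t_min, darkness just above 0 -> near t_max
    times = t_max - image * (t_max - t_min)
    times[image == 0.0] = np.inf  # blank pixels stay silent
    return times
```

For example, `ttfs_encode([1.0, 0.5, 0.0])` yields spike times `[0.0, 25.0, inf]` under the default parameters.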
Results
In response to a sample, the input layer generates a volley of spikes, identified as a pulse packet (PP) [2], which propagates through the hidden layers (Fig. 1). In deeper layers, the spikes in a PP become more synchronized, and the neurons contributing spikes to the PP become more specific to the sample label. This leads to a characteristically sparse representation of the sample label in deep layers. The analysis of connection weights reveals that a correct classification is achieved by propagating spikes along a specific pathway across layers, composed of neurons with strong excitatory effects on the correct output neuron. Pathways for different output neurons become more separate in deeper layers, with less overlap of neurons between pathways.
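The layer-wise sharpening of the pulse packet can be quantified with a simple dispersion measure. The sketch below is an illustrative assumption, not the analysis used in the study: it takes the first-spike times of the neurons that fired in each layer and reports their standard deviation, so a decreasing value with depth indicates increasing synchronization.

```python
import numpy as np

def pulse_packet_dispersion(spike_times_per_layer):
    """Spike-time dispersion of a pulse packet in each layer.

    spike_times_per_layer: list of 1-D arrays, one per layer, holding
    the first-spike times of the neurons that fired in that layer.
    Returns one standard deviation per layer; smaller values mean
    a more synchronized pulse packet.
    """
    return [float(np.std(times)) for times in spike_times_per_layer]
```

Applied to recorded activity, a monotonically shrinking dispersion across layers 1-4 would correspond to the tightening spike volleys visible in Fig. 1.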
Discussion
The revealed connectivity structure and the propagation of spikes as a PP agree with the notion of the synfire chain (SFC) [3,4]. To our knowledge, this is the first example of SFC formation through the training of a functional network. In our network, multiple parallel SFCs emerge through the training for MNIST classification, representing each input label by the activation of one particular SFC. Such a representation naturally leads to a sparser encoding of the input label in deeper layers and also increases the linear separability of layer-wise activity. Thus, the use of SFCs for information representation can have multiple advantages for achieving efficient computation, beyond the stable transmission of information through the network.
Figure 1. Network activity in response to six different samples. Dots represent spike times of individual neurons, with colors indicating the luminance of the corresponding pixels in the sample (“input” layer), or spikes of excitatory (red) and inhibitory (blue) neurons (layers 1-4). The first neurons to spike in the “output” layer are indicated by numbers next to the spikes.
Acknowledgements
This research was funded by the European Union’s Horizon 2020 Framework programme for Research and Innovation under Specific Grant Agreements No. 785907 (HBP SGA2), No. 945539 (HBP SGA3) and No. 101147319 (EBRAINS 2.0), the NRW-network 'iBehave' (NW21-049), the Helmholtz Joint Lab SMHB, and the Manfred Stärk Foundation.
References
1. Göltz et al. (2021). Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence, 3(9), 823–835. https://doi.org/10.1038/s42256-021-00388-x
2. Diesmann, Gewaltig, & Aertsen (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402(6761), 529–533. https://doi.org/10.1038/990101
3. Abeles (1982). Local Cortical Circuits: An Electrophysiological Study. Springer-Verlag.
4. Abeles (1991). Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press.