Tuesday July 8, 2025 17:00 - 19:00 CEST
P248 The Neuron as a Self-Supervised Rectified Spectral Unit (ReSU)

Shanshan Qin*1, Joshua Pughe-Sanford1, Alex Genkin1, Pembe Gizem Özdil2, Philip Greengard1, Dmitri B. Chklovskii*1,3

1Center for Computational Neuroscience, Flatiron Institute, New York City, United States
2EDRS Doctoral Program, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
3Neuroscience Institute, New York University Grossman School of Medicine, New York City, United States

*Email: qinss.pku@gmail.com; mitya@flatironinstitute.org


Introduction
Advances in synapse-level connectomics [1, 2, 3] and neuronal population activity imaging [4] necessitate neuronal models capable of integrating effectively with these rich datasets. Ideally, such models should offer greater biological realism and interpretability than rectified linear unit (ReLU) networks trained via error backpropagation [5], while avoiding the complexity and parameter intensity of detailed biophysical models [6]. Here, we propose a self-supervised multi-layer neuronal network that employs identical learning rules across layers and progressively captures more complex and abstract features, mirroring the Drosophila visual system, for which both neuronal responses and connectomics data are available.


Methods
We introduce the Rectified Spectral Unit (ReSU), a neuron model that rectifies the projection of its input onto a singular vector of the whitened covariance matrix between past and future inputs. Representing the singular vectors corresponding to the largest singular values in each layer effectively maximizes predictive information [7, 8]. We construct a two-layer ReSU network trained in a self-supervised manner on translating natural scenes. Inspired by the Drosophila visual system, each first-layer neuron receives input exclusively from one pixel [1], while second-layer neurons may integrate inputs from the entire first-layer population. Post-training, we compare the network's neuronal responses and synaptic weights with empirical results.
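As a concrete illustration, the filter-learning step amounts to a singular value decomposition of the whitened past–future cross-covariance (a CCA-style computation). The NumPy sketch below is our own minimal reading of that step, not the authors' implementation: the function names are illustrative, and details such as stimulus preprocessing and layer stacking are omitted.

```python
import numpy as np

def whitening_matrix(X, eps=1e-8):
    # Symmetric (ZCA) whitening: Cov(X @ W.T) is approximately the identity.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T

def fit_resu_filters(past, future, k=2):
    # SVD of the whitened past-future cross-covariance; for Gaussian inputs,
    # the top singular vectors maximize predictive information [7, 8].
    Wp = whitening_matrix(past)
    Wf = whitening_matrix(future)
    past_c = past - past.mean(axis=0)
    fut_c = future - future.mean(axis=0)
    C = Wf @ (fut_c.T @ past_c / len(past)) @ Wp.T
    _, _, Vt = np.linalg.svd(C)
    # Map the past-side singular vectors back to raw-input coordinates.
    return Vt[:k] @ Wp

def resu(x, w):
    # A ReSU rectifies the projection of its input onto a learned filter.
    return np.maximum(0.0, x @ w)
```

Under this reading, applying resu(x, filters[0]) to single-pixel temporal snippets would correspond to the first-layer units described above, with second-layer filters fit the same way on first-layer outputs.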
Results
First-layer ReSU neurons learned temporal filters closely matching responses observed in Drosophila visual neurons: the first singular vector matched the linear filter of the L3 neuron, while the second singular vector corresponded to the linear filters of the L1 (ON) and L2 (OFF) neurons [9]. These learned filters also adapted their shapes according to the signal-to-noise ratio, consistent with experimental findings [10]. Second-layer ReSUs whose filters aligned with the second singular vector developed motion-selective responses analogous to those of Drosophila T4 cells [11], and the synaptic weights learned by these neurons closely resembled those documented in T4 connectomic data [2] (Fig. 1).
Discussion
ReSU networks exhibit significant advantages, including simplicity, robustness, interpretability, and biological plausibility. Rectification within ReSUs functions as a form of dynamic clustering, enabling transitions between distinct linear dynamical regimes. Our findings indicate that self-supervised multi-layer ReSU networks trained on natural scenes faithfully reproduce critical aspects of biological sensory processing. Consequently, our model provides a promising foundation for large-scale, interpretable simulations of hierarchical sensory processing in biological brains.
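To make the dynamic-clustering point concrete, consider a toy construction of our own (not code from the model): a pair of ReSUs sharing a filter of opposite sign partitions input space into two half-spaces, so exactly one unit operates in its linear regime for any given input, much like ON/OFF pathways.

```python
import numpy as np

def on_off_pair(x, w):
    # Two ReSUs with opposite-sign filters: each input activates exactly
    # one unit, so the pair switches between two distinct linear regimes.
    on = np.maximum(0.0, x @ w)      # linear when x @ w > 0, silent otherwise
    off = np.maximum(0.0, -(x @ w))  # linear when x @ w < 0, silent otherwise
    return on, off
```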



Figure 1. (a) Neurons learn to predict future input and output the (rectified) latent variable. (b) The fly ON motion detection pathway. (c) Responses of neurons to a stepped luminance stimulus. (d) T4 response to a moving grating. (e) Adaptation of temporal filters to the input SNR. (f) The spatial filter obtained by SVD of the L1–L3 outputs approximates the weights of synapses impinging onto T4a in Drosophila (b).
Acknowledgements
We thank Charles Epstein, Anirvan M. Sengupta and Jason Moore for helpful discussion.
References
[1] https://doi.org/10.1016/j.cub.2013.12.012
[2] https://doi.org/10.7554/eLife.24394
[3] https://doi.org/10.1016/j.cub.2023.09.021
[4] https://doi.org/10.7554/eLife.38173
[5] https://doi.org/10.1038/s41586-024-07939-3
[6] https://doi.org/10.1371/journal.pcbi.1006240
[7] https://doi.org/10.1109/ISIT.2006.261867
[8] Chechik, G., Globerson, A., Tishby, N., & Weiss, Y. (2005). Information Bottleneck for Gaussian Variables. Journal of Machine Learning Research, 6, 165–188.
[9] https://doi.org/10.7554/eLife.74937
[10] https://doi.org/10.1098/rspb.1982.0085
[11] https://doi.org/10.1146/annurev-neuro-080422-111929