P105 Reciprocity-Controlled Recurrent Neural Networks: Why More Feedback Isn't Always Better
Fatemeh Hadaeghi¹*, Kayson Fakhar¹,², Claus C. Hilgetag¹,³
1. Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg University, Hamburg Center of Neuroscience, Germany.
2. MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.
3. Department of Health Sciences, Boston University, Boston, MA, USA.
*Email: f.hadaeghi@uke.de
Introduction
Cortical architectures are hierarchically organized and richly reciprocal, yet their connections exhibit microstructural and functional asymmetries: forward connections are primarily driving, whereas backward connections are both driving and modulatory, and both show laminar specificity. Despite this reciprocity, theoretical and experimental studies highlight a systematic avoidance of strong directed loops, an organizational principle captured by the no-strong-loops hypothesis, especially in sensory systems [1]. While such an organization may primarily serve to prevent runaway excitation and maintain stability, its role in neural computation remains unclear. Here, we show that reciprocity fundamentally limits the computational capacity of recurrent neural networks.
Methods
We recently introduced efficient Network Reciprocity Control (NRC) algorithms designed to steer asymmetry and reciprocity in binary and weighted networks while preserving key structural properties [2]. In this work, we apply these algorithms to modulate reciprocity in recurrent neural networks (RNNs) within the reservoir computing (RC) framework [3]. We explore both binary and weighted connectivity in the reservoir layer, spanning random and biologically inspired architectures, including modular and small-world networks. We assess the computational capacity of these models by evaluating memory capacity (MC) and the quality of their internal representations, as measured by the kernel rank (KR) metric [4].
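For concreteness, the sketch below shows how memory capacity and kernel rank can be estimated for an arbitrary reservoir weight matrix, so that reservoirs at different reciprocity levels can be compared. It is a minimal NumPy reconstruction of the standard MC and KR protocols [3, 4], not the implementation used in this study; the reservoir size, input scaling, delay range, and ridge penalty are illustrative assumptions.

```python
import numpy as np

def memory_capacity(W, max_delay=50, T=2000, washout=200,
                    input_scale=0.5, ridge=1e-6, seed=0):
    """Estimate memory capacity (MC): the sum over delays k of the squared
    correlation between the delayed input u(t-k) and a linear (ridge)
    readout trained to reconstruct it from the reservoir state.
    All parameter values are illustrative assumptions."""
    assert washout > max_delay
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    w_in = rng.uniform(-input_scale, input_scale, size=n)  # random input weights
    u = rng.uniform(-1.0, 1.0, size=T)                     # i.i.d. input stream

    x = np.zeros(n)
    states = np.empty((T, n))
    for t in range(T):                                     # drive the reservoir
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x
    X = states[washout:]                                   # discard transient

    G = X.T @ X + ridge * np.eye(n)                        # Gram matrix, shared across delays
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:T - k]                      # u(t - k), aligned with rows of X
        w_out = np.linalg.solve(G, X.T @ target)           # ridge readout for delay k
        y = X @ w_out
        mc += np.corrcoef(y, target)[0, 1] ** 2            # MC_k = squared correlation
    return mc

def kernel_rank(W, n_streams=None, T=200, input_scale=0.5, seed=1):
    """Estimate kernel rank (KR): the numerical rank of the matrix of final
    reservoir states obtained from many distinct i.i.d. input streams."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    n_streams = n if n_streams is None else n_streams
    w_in = rng.uniform(-input_scale, input_scale, size=n)
    finals = np.empty((n_streams, n))
    for s in range(n_streams):
        u = rng.uniform(-1.0, 1.0, size=T)
        x = np.zeros(n)
        for t in range(T):
            x = np.tanh(W @ x + w_in * u[t])
        finals[s] = x
    return np.linalg.matrix_rank(finals)

# Illustrative usage: a sparse random reservoir rescaled to spectral radius 0.9.
rng = np.random.default_rng(42)
n, density = 200, 0.1
W = rng.normal(size=(n, n)) * (rng.random((n, n)) < density)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
print("MC:", memory_capacity(W), "KR:", kernel_rank(W))
```

In such a setup, the input stream and readout protocol are held fixed while only the reservoir connectivity is varied, so that changes in MC or KR can be attributed to the reciprocity manipulation rather than to the evaluation procedure.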
Results
Our results show that increasing feedback via reciprocity degrades key computational properties of recurrent neural networks, including memory capacity and representation diversity. Across all experiments, increasing link reciprocity consistently reduced memory capacity and kernel quality, with particularly pronounced, linear declines in sparse networks. When weights sampled from a log-normal distribution were assigned to binary networks, stronger weights amplified these reciprocity-driven impairments. Furthermore, enforcing “strength” reciprocity (reciprocity in connection weights) caused an exponential degradation of memory and representation quality. These effects were robust across network sizes and connection densities.
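To make the manipulated quantities concrete, the sketch below gives one common way to measure link and strength reciprocity and to assign log-normal weights to a binary topology. The exact definitions, sampling parameters, and the NRC procedure itself [2] are not reproduced here, so these functions and their defaults (mu, sigma) should be read as assumptions for illustration only.

```python
import numpy as np

def link_reciprocity(A):
    """Fraction of directed links that are reciprocated in a binary
    adjacency matrix A (self-loops ignored)."""
    A = (np.asarray(A) != 0).astype(float)
    np.fill_diagonal(A, 0)
    return (A * A.T).sum() / A.sum()

def strength_reciprocity(W):
    """Weighted ('strength') reciprocity: reciprocated weight as a fraction
    of total weight, sum_ij min(|w_ij|, |w_ji|) / sum_ij |w_ij|.
    One common definition; the study's exact measure may differ."""
    W = np.abs(np.asarray(W, dtype=float))
    np.fill_diagonal(W, 0)
    return np.minimum(W, W.T).sum() / W.sum()

def assign_lognormal_weights(A, mu=0.0, sigma=1.0, seed=0):
    """Assign i.i.d. log-normal weights to the existing links of a binary
    adjacency matrix A, leaving the topology itself unchanged."""
    rng = np.random.default_rng(seed)
    W = np.zeros(A.shape, dtype=float)
    rows, cols = np.nonzero(A)
    W[rows, cols] = rng.lognormal(mean=mu, sigma=sigma, size=rows.size)
    return W

# Illustrative usage on a sparse random binary network.
rng = np.random.default_rng(0)
A = (rng.random((200, 200)) < 0.1).astype(int)
np.fill_diagonal(A, 0)
W = assign_lognormal_weights(A)
print("link r:", link_reciprocity(A), "strength r:", strength_reciprocity(W))
```

Sweeping a target reciprocity level at fixed topology and weight statistics, for example with the NRC algorithms [2], and re-evaluating MC and KR at each level yields the kind of trends summarized above.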
Discussion
Our study explores how structural (link) and weighted (strength) reciprocity limit the computational capacity of recurrent neural networks, explaining the underrepresentation of strong reciprocal connections in cortical circuits. Across various network architectures, we show that increasing reciprocity reduces memory capacity and kernel rank, both of which are essential for complex dynamics and internal representations. This effect persists, and often worsens, under log-normal weight heterogeneity. While higher weight variability boosts performance, it does not mitigate the effects of reciprocity. Beyond neuroscience, our findings have implications for the initialization and training of artificial RNNs and for the design of neuromorphic architectures.
Acknowledgements
Funding of this work is gratefully acknowledged. F.H.: DFG TRR169-A2. K.F.: German Research Foundation (DFG) SFB 936-178316478-A1; TRR169-A2; SPP 2041/GO 2888/2-2; and the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under grant TWCF-2022-30510. C.H.: SFB 936-178316478-A1; TRR169-A2; SFB 1461/A4; SPP 1212 2041/HI 1286/7-1; the Human Brain Project, EU (SGA2, SGA3).
References
1. https://doi.org/10.1038/34584
2. https://doi.org/10.1101/2024.11.24.625064
3. https://doi.org/10.1126/science.1091277
4. https://doi.org/10.1016/j.neunet.2007.04.017