Monday July 7, 2025 16:20 - 18:20 CEST
P198 Binary Brains: Excitable Dynamics Simplify Neural Connectomes

Arnaud Messé¹, Marc-Thorsten Hütt², Claus C. Hilgetag¹,³*
¹ Institute of Computational Neuroscience, Hamburg Center of Neuroscience, University Medical Center Eppendorf, Hamburg, Germany
² Computational Systems Biology, School of Science, Constructor University, Bremen, Germany
³ Department of Health Sciences, Boston University, Boston, MA, USA
*Email: c.hilgetag@uke.de


Introduction
Neural connectomes, representing the structural backbone of brain dynamics, are traditionally analyzed as weighted networks. However, the computational cost and methodological challenges of weighted representations hinder their widespread use. Here, we demonstrate that excitable dynamics—a common mechanism in neural and artificial networks—enable a thresholding approach that renders weighted and binary networks functionally equivalent. This finding simplifies network analyses and supports efficient artificial neural network (ANN) design by drastically reducing memory and computational demands.
Methods
We examined excitable network dynamics using a cellular automaton-based excitable model (SER model) and the FitzHugh-Nagumo model. By mapping the local excitation threshold onto global network weights, we identified a threshold at which binarized networks produce activity patterns statistically indistinguishable from those in weighted networks. Simulations were performed on synthetic networks, empirical structural brain connectivity data (MRI-derived), and artificial neural networks trained on the MNIST dataset. Computational efficiency was assessed in terms of memory usage and execution time.
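The SER dynamics described above can be illustrated with a minimal sketch (an assumption for illustration, not the authors' exact implementation): nodes cycle through susceptible (S), excited (E), and refractory (R) states, and a susceptible node becomes excited when at least one neighbour in the binary network is excited.

```python
import numpy as np

# Minimal SER cellular automaton on a binary network (illustrative sketch).
# States: 0 = susceptible, 1 = excited, 2 = refractory.
def ser_step(state, adj):
    """Advance the SER dynamics by one time step.

    state : (n,) int array of node states (0, 1, or 2)
    adj   : (n, n) binary adjacency matrix
    """
    excited_input = adj @ (state == 1)  # number of excited neighbours per node
    new_state = np.zeros_like(state)
    new_state[(state == 0) & (excited_input >= 1)] = 1  # S -> E on input
    new_state[state == 1] = 2                           # E -> R
    new_state[state == 2] = 0                           # R -> S
    return new_state

# Example: an excitation wave propagating around a 4-node ring
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
state = np.array([1, 0, 0, 0])  # node 0 starts excited
for _ in range(3):
    state = ser_step(state, adj)
```

In the weighted variant, `excited_input` would sum the incoming weights from excited neighbours and be compared against the local excitation threshold; mapping that threshold onto the weights is what makes the binarization possible.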
Results & Discussion
Our findings [1] show that, under appropriate thresholding, binarized networks accurately reproduce coactivation patterns and functional connectivity observed in weighted brain networks. This effect holds across diverse network topologies and weight distributions, particularly for log-normal weight distributions found in empirical data. Computationally, binarized networks require significantly less memory and reduce processing times by orders of magnitude. These findings not only simplify empirical network analyses in neuroscience but also suggest a general principle for optimizing computational models in various domains, including machine learning, complex systems, and bio-inspired AI. Particularly in ANNs, thresholding maintains classification accuracy while drastically lowering the number of parameters, making binary networks a promising approach for efficient AI design.
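The thresholding step itself is simple; the sketch below (with an assumed log-normal weight matrix and a hypothetical threshold value, chosen here for illustration) shows the binarization and the resulting memory reduction.

```python
import numpy as np

# Illustrative binarization of a weighted connectome (assumed setup).
rng = np.random.default_rng(0)
n = 200
W = rng.lognormal(mean=0.0, sigma=1.0, size=(n, n))  # log-normal weights,
np.fill_diagonal(W, 0.0)                             # as in empirical data

theta = 1.0                        # hypothetical excitation threshold
A = (W > theta).astype(np.uint8)   # keep only supra-threshold links

# Memory comparison: float64 weights vs. one byte per binary entry
# (bit-packing with np.packbits would shrink the binary matrix a further 8x).
print(W.nbytes, A.nbytes)  # 320000 vs 40000
```

The 8x saving shown here is per-entry storage alone; sparse or bit-packed representations of the binary matrix widen the gap further.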



Acknowledgements
The research was supported by the Deutsche Forschungsgemeinschaft (DFG) - SFB 936 - 178316478 - A1 & Z3, SPP2041 - 313856816 - HI1286/7-1, TRR 169 - 261402652 - A2, and the EU Horizon 2020 Framework Programme (HBP SGA2 & SGA3).
References
[1] https://doi.org/10.1101/2024.06.23.600265

Passi Perduti
