Sunday July 6, 2025 17:20 - 19:20 CEST
P036 Dendrites competing for weight updates facilitate efficient familiarity detection

Fangxu Cai1, Marcus K. Benna*2


1Department of Physics, UC San Diego, La Jolla, USA


2Department of Neurobiology, UC San Diego, La Jolla, USA


*Email: mbenna@ucsd.edu
Introduction

The dendritic tree of a neuron plays an important role in the nonlinear processing of incoming signals. Previous studies [1-3] have suggested that during learning, selecting only a few dendrites to update their weights can enhance the memory capacity of a neuron by reducing interference between memories. Building on this, we examine two strategies for selecting dendrites: one with and one without interaction between dendrites. The interaction between dendrites serves to reduce variability in the number of dendrites updated, potentially arising from competition and the allocation of resources necessary for long-term synaptic plasticity.

Methods
We study a model neuron with multiple dendrites, each performing nonlinear processing of its inputs and connected in parallel to the soma, which sums their contributions [4]. The selection of dendrites to update is based on their activation level — the overlap between their weight and input vectors. Under the non-interacting rule, a dendrite is selected if its activation exceeds a fixed threshold; under the interacting rule, only the n dendrites with the highest activations are chosen. We compare these two learning rules on an online familiarity detection task [1]. In this task, input patterns are streamed sequentially to the neuron, which is required to produce a high response to previously presented inputs while maintaining a low response to unfamiliar ones.
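The two selection rules can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of dendrites, input dimension, threshold, and learning rate are all arbitrary placeholder values, and the Hebbian-style update is a stand-in for whatever plasticity rule the model uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: D dendrites, each receiving a K-dimensional input.
D, K = 10, 100
W = rng.standard_normal((D, K))   # one synaptic weight vector per dendrite
x = rng.standard_normal((D, K))   # the input pattern seen by each dendrite

# Dendritic activation: overlap between weight and input vectors.
activations = np.einsum('dk,dk->d', W, x)

# Non-interacting rule: every dendrite whose activation exceeds a
# threshold (value assumed here) is eligible for a weight update.
theta = 1.0
selected_threshold = np.flatnonzero(activations > theta)

# Interacting (n-winners-take-all) rule: only the top-n dendrites by
# activation are updated, regardless of absolute activation level.
n = 2
selected_topn = np.argsort(activations)[-n:]

# Hebbian-style update restricted to the winners (learning rate assumed).
eta = 0.1
W[selected_topn] += eta * x[selected_topn]
```

Note the key difference: the threshold rule can select anywhere from zero to all D dendrites depending on the input, while the top-n rule always updates exactly n, which is what removes the variability in the number of updated dendrites.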

Results
We observe that the interacting learning rule achieves a significantly higher memory capacity than the non-interacting rule by 1) limiting the variance of the memory response, and 2) decorrelating synaptic weights when input signals are correlated across dendrites. With the interacting rule, the best achievable memory capacity increases as n decreases, reaching its maximum at n = 1. In contrast, this is not the case for the non-interacting rule, where the capacity declines when too few dendrites are updated. We further find that even when inputs are maximally correlated (all dendrites receive identical input), the interacting rule maintains a capacity comparable to the uncorrelated input scenario.
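As a toy illustration of the familiarity readout described in the Methods, the sketch below stores one pattern with the top-n rule and checks that the somatic response to that pattern increases. The ReLU nonlinearity, sizes, and learning rate are assumptions for illustration only, not the model's actual specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder parameters (not the paper's): D dendrites, K inputs each,
# n winners, learning rate eta.
D, K, n, eta = 10, 100, 1, 0.1
W = rng.standard_normal((D, K))

def soma_response(W, x):
    # Soma sums nonlinearly processed dendritic activations
    # (ReLU used here as a stand-in dendritic nonlinearity).
    return np.maximum(np.einsum('dk,dk->d', W, x), 0.0).sum()

x_fam = rng.standard_normal((D, K))        # pattern to be stored
before = soma_response(W, x_fam)

# Store the pattern with the interacting (top-n) rule.
acts = np.einsum('dk,dk->d', W, x_fam)
winners = np.argsort(acts)[-n:]
W[winners] += eta * x_fam[winners]

after = soma_response(W, x_fam)            # response to the now-familiar pattern
```

After the update, the response to the stored pattern is higher than before, while unfamiliar patterns are unaffected in expectation — the contrast that the familiarity detection task exploits.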
Discussion
Our findings show that an n-winners-take-all type interaction among dendrites to determine their eligibility for long-term plasticity can better leverage dendritic nonlinearities for optimizing memory capacity, especially when inputs are correlated among dendrites. While biological neurons may not strictly select a fixed number of dendrites to store each input, our model suggests that reducing the variability in the number of updated dendrites through competition between them can still improve the capacity. Furthermore, our results are robust to variations in model specifics, such as the choice of dendritic activation functions and the presence of input noise, underscoring the generality of the proposed mechanism.




Acknowledgements
M.K.B. was supported by R01NS125298 (NINDS) and the Kavli Institute for Brain and Mind.

References
1. https://doi.org/10.1371/journal.pcbi.1006892
2. https://doi.org/10.1523/JNEUROSCI.5684-10.2011
3. https://doi.org/10.1038/nature14251
4. https://doi.org/10.1109/JPROC.2014.2312671

Passi Perduti