Monday July 7, 2025 16:20 - 18:20 CEST · Passi Perduti
P122 Encoding visual familiarity for navigation in a mushroom body SNN trained on ant-perspective views

Oluwaseyi Oladipupo Jesusanmi1,2,*, Amany Azevedo Amin2, Paul Graham1,2, Thomas Nowotny2
1Sussex Neuroscience, University of Sussex, Brighton, United Kingdom
2Sussex AI, University of Sussex, Brighton, United Kingdom
*Email: o.jesusanmi@sussex.ac.uk

Introduction

Ants can learn long visually guided routes with limited neural resources, mediated by the mushroom body brain region acting as a familiarity detector [1,2]. In the mushroom body, low-dimensional input from sensory regions is projected into a large population of neurons, producing sparse representations of the input. These representations are learned via an anti-Hebbian process modulated by dopaminergic learning signals. During navigation, the mushroom bodies guide ants to seek views similar to those previously learned along a foraging route. Here, we further investigate the role of the mushroom bodies in ants' visual navigation with a spiking neural network (SNN) model and 1:1 virtual recreations of ant visual experiences.
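The learning principle can be illustrated with a minimal, rate-based NumPy sketch (it is not the spiking GeNN implementation): KC-to-MBON synapses that are active while a dopaminergic learning signal is on are depressed, so previously learned views later drive the MBON only weakly and thereby register as familiar. The KC count is taken from the model described below; the learning rate, sparseness level, and binary KC codes are illustrative assumptions.

```python
# Minimal NumPy sketch (not the GeNN model) of dopamine-gated, anti-Hebbian
# plasticity on KC->MBON weights: synapses of KCs active during the learning
# signal are depressed, so learned views later evoke low MBON drive.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_KC = 20_000                      # Kenyon Cell count from the abstract
w_kc_mbon = np.ones(N_KC)          # KC->MBON weights start saturated
LEARN_RATE = 0.5                   # assumed depression step per pairing

def learn_view(kc_activity, dopamine=1.0):
    """Depress KC->MBON weights of KCs active during a learning signal."""
    w_kc_mbon[:] = np.clip(w_kc_mbon - LEARN_RATE * dopamine * kc_activity, 0.0, 1.0)

def familiarity(kc_activity):
    """Lower MBON drive means more familiar; return negated drive."""
    return -float(kc_activity @ w_kc_mbon)

# Toy sparse KC codes (~2% of KCs active) for a learned and a novel view
learned_view = (rng.random(N_KC) < 0.02).astype(float)
novel_view = (rng.random(N_KC) < 0.02).astype(float)

learn_view(learned_view)
print(familiarity(learned_view) > familiarity(novel_view))  # True: learned view is more familiar
```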
Methods
We implemented the SNN model in GeNN [3]. It has 320 Visual Projection Neurons (VPNs), 20,000 Kenyon Cells (KCs), one Inhibitory Feedback Neuron (IFN) and one Mushroom Body Output Neuron (MBON). We used DeepLabCut to track ant trajectories in behavioural experiments, and phone-camera footage as input to Neural Radiance Field (NeRF) and photogrammetry software for environment reconstruction. We used Isaac Sim and NVIDIA Omniverse to recreate views along the ants' movement trajectories from the perspective of the ants. We trained the SNN and comparator models (perfect memory and Infomax [4]) on these recreations, and ran inference across all traversable areas of the environment to test each model's ability to encode navigational information.
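The feed-forward expansion can be sketched in rate-based NumPy form as follows (again, not the spiking GeNN code). The population sizes are those given above; the sparse VPN-to-KC fan-in of 10 inputs per KC and the ~2% KC activity level, enforced here by a k-winners-take-all step standing in for the inhibitory feedback neuron, are assumptions.

```python
# Rate-based sketch of the 320 VPN -> 20,000 KC -> 1 MBON expansion.
# Fan-in per KC and the KC sparseness target are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

N_VPN, N_KC = 320, 20_000
INPUTS_PER_KC = 10                 # assumed VPN fan-in per Kenyon Cell
KC_ACTIVE_FRACTION = 0.02          # assumed sparseness target

# Each KC samples a random subset of VPNs with unit weights
kc_inputs = np.stack([rng.choice(N_VPN, INPUTS_PER_KC, replace=False)
                      for _ in range(N_KC)])

def kc_code(view):
    """Map a 320-element view (VPN rates) to a sparse binary KC code."""
    drive = view[kc_inputs].sum(axis=1)        # summed VPN drive per KC
    k = int(KC_ACTIVE_FRACTION * N_KC)         # global k-winners-take-all,
    threshold = np.partition(drive, -k)[-k]    # standing in for the IFN
    return (drive >= threshold).astype(float)

view = rng.random(N_VPN)                       # placeholder ant-eye view
code = kc_code(view)
print(code.sum() / N_KC)                       # ~0.02 of KCs active
```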
Results
We produced familiarity landscapes for our SNN mushroom body model and the comparator models, revealing differences in how they encode off-route (unlearned) locations. The mushroom body model produced navigation accuracy comparable to the other models, and its activity was able to explain trajectory data in trials where ants reached the target location. We found that some views producing high familiarity did not appear in the training set; these views share image statistics with training images even though they come from different places in the environment. We also found that ant routes with higher rates of oscillation improved learning, "filling in" more of the familiarity landscape.
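Conceptually, a familiarity landscape is assembled by querying a trained model with views rendered at every traversable location over a set of headings and keeping the best score per location. The sketch below shows this loop; render_view is a hypothetical stand-in for the Isaac Sim / Omniverse rendering step, and familiarity for any of the trained models (mushroom body, perfect memory, Infomax).

```python
# Sketch of familiarity-landscape construction over a grid of locations.
# `render_view` and `familiarity` are hypothetical callables supplied by the
# rendering pipeline and the trained model, respectively.
import numpy as np

def familiarity_landscape(grid_xy, headings, render_view, familiarity):
    """Return, per location, the highest familiarity over candidate headings."""
    landscape = np.full(len(grid_xy), -np.inf)
    for i, (x, y) in enumerate(grid_xy):
        for heading in headings:
            view = render_view(x, y, heading)          # ant-perspective image
            landscape[i] = max(landscape[i], familiarity(view))
    return landscape
```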
Discussion
How the mushroom body would respond across all locations in a traversable environment is not known and is normally not feasible to study: neural recording in ants remains difficult, and there are few methods for having an ant systematically experience an entire experimental arena. We addressed this by simulating biologically plausible neural activity while exactly controlling what the model sees. Visual navigation models have been compared with mushroom body models in terms of navigation accuracy, but the familiarity landscapes the different models produce have not been compared. Our investigation provides insight into how the encoding of familiarity differs between models and how it leads to accurate navigation.



Acknowledgements
References
1. https://doi.org/10.1016/J.CUB.2020.07.013
2. https://doi.org/10.1016/J.CUB.2020.06.030
3. https://doi.org/10.1038/srep18854
4. https://doi.org/10.1162/isal_a_00645

