Sunday July 6, 2025 17:20 - 19:20 CEST
P011 A method to assess individual photoreceptor contributions to cortical computations driving visual perception in mice

David D. Au1, Joshua B. Melander2, Javier C. Weddington2, and Stephen A. Baccus1
1Department of Neurobiology, Stanford University, Palo Alto, USA

2Neurosciences PhD Program, Stanford University, Palo Alto, USA
Email: dau2@stanford.edu
Introduction

Vision is one of our most important sensory systems, driving our evolution and adaptation to survive in different environments. Studies of the visual system have focused on how rod and cone inputs encode simple, artificial visual stimuli in the retina and primary visual cortex (V1). Yet the complex retinal and cortical computations that encode natural scenes receive contributions from multiplexed photoreceptive inputs, including melanopsin-expressing intrinsically photosensitive retinal ganglion cells [1–2], whose effects remain poorly understood. Understanding how melanopsin responses converge with other inputs under natural scenes is therefore important for understanding how visual inputs are encoded and decoded in the early visual system with ethological relevance.


Methods
We record melanopsin-specific responses in V1 using in vivo Neuropixels probes in head-fixed mice viewing natural scenes, modified to achieve photoreceptor silent substitution. This method isolates melanopsin activation by spectrum-selective manipulation of one photoreceptor class (melanopsin) while holding the activation of the others (s- and m-cones) constant. A low-melanopsin condition (M-) removes, in each pixel, the color component vector projected onto the melanopsin spectral tuning curve, and a melanopsin-only condition (M*) removes or reduces the components along the s- and m-cone tuning curves. Stimuli are presented at light levels between low (8×10¹² photons/cm²/s) and high (8×10¹⁴ photons/cm²/s) conditions, which we assume saturate the rods.
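The linear-algebra core of silent substitution can be sketched as a null-space problem: find a display-primary modulation that drives the target photoreceptor while producing zero net change in the silenced ones. The sketch below is illustrative only; the sensitivity matrix `A` is a made-up placeholder, not the authors' calibration — real entries would come from measured photoreceptor spectral sensitivities integrated against the display primaries' spectra.

```python
import numpy as np

# Hypothetical receptor-by-primary sensitivity matrix (rows: s-cone,
# m-cone, melanopsin; columns: four display primaries). A[i, j] is the
# activation of receptor i per unit intensity of primary j. These numbers
# are placeholders for illustration, not measured sensitivities.
A = np.array([
    [0.90, 0.10, 0.05, 0.02],   # s-cone
    [0.10, 0.80, 0.30, 0.05],   # m-cone
    [0.05, 0.20, 0.60, 0.40],   # melanopsin
])

def silent_substitution_direction(A, target_row):
    """Return a primary-modulation vector d that gives unit activation of
    the target receptor (A[target_row] @ d == 1) while leaving every other
    receptor unchanged (A_silenced @ d == 0), i.e. d lies in the null
    space of the silenced receptors' sensitivities."""
    silenced = np.delete(A, target_row, axis=0)
    # Orthonormal basis for the null space of the silenced rows, via SVD.
    _, s, vt = np.linalg.svd(silenced)
    rank = int(np.sum(s > 1e-10))
    null_basis = vt[rank:]                  # (n_primaries - rank, n_primaries)
    # Within the null space, take the direction with maximal target drive.
    target_gain = null_basis @ A[target_row]
    d = null_basis.T @ target_gain
    d /= A[target_row] @ d                  # normalize to unit target activation
    return d

# Melanopsin-isolating modulation: cones silenced, melanopsin driven.
d = silent_substitution_direction(A, target_row=2)
```

With this direction in hand, an M- stimulus corresponds to subtracting the melanopsin-aligned component from each pixel, and an M* stimulus to retaining only it; at least as many independent primaries as receptor classes are needed for the null space to be non-trivial.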

Results
We find that mouse V1 responses to natural scene stimuli are complex and vary widely across laminar structures, suggesting that specific neuronal subpopulations modulate computations for distinct visual features. These responses, however, arise from a combination of photoreceptor inputs, and we are working to understand how individual photoreceptors contribute to visual encoding and decoding. Our implementation of in vivo Neuropixels electrophysiology with a naturalistic virtual reality recording environment and silent-substitution-rendered stimuli reveals distinct neural responses that we attribute to melanopsin activation. Silencing melanopsin activation likewise alters V1 activity under natural scenes.

Discussion
Our preliminary results indicate that melanopsin activation contributes to complex computations that encode and decode natural scene stimuli in mouse V1. Computational models fit to these responses also point to specialized neurons tuned to particular visual features, such as locomotion and color. However, additional experiments and deeper analyses are required to probe this phenomenon. Using electrophysiology and cutting-edge computational modeling, this work helps establish how multiplexed inputs that depart from the classical image-forming system improve image representation and stimulus discriminability under natural visual scenes.





Acknowledgements
This work was supported by grants from the National Institutes of Health's National Eye Institute (NEI), R01EY022933, R01EY025087, and P30EY026877 (awarded to SAB), F32EY036275, and a private Stanford fellowship 1246913-100-AABKS (awarded to DDA).
References
1. Allen AE & Lucas RJ. (2014). Melanopsin-Driven Light Adaptation in Mouse Vision. Curr Biol. 24(21):2481–2490. https://doi.org/10.1016/j.cub.2014.09.015
2. Davis KE & Lucas RJ. (2015). Melanopsin-Derived Visual Responses under Light Adapted Conditions in the Mouse dLGN. PLOS ONE. 10(3):e0123424. https://doi.org/10.1371/journal.pone.0123424


Passi Perduti