P018 Orientation Bias and Abstraction in Working Memory: Evidence from Vision Models and Behaviour
Fabio Bauer*¹, Or Yizhar¹,², Bernhard Spitzer¹,²
¹Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
²Technische Universität Dresden, Dresden, Germany
*bauer@mpib-berlin.mpg.de

Introduction
Working memory (WM) for visual orientations shows a characteristic behavioral bias: remembered orientations are repelled from the cardinal axes. These canonical biases are well documented for grating stimuli in 180° orientation space [1-4], and WM maintenance of orientation information has been shown to involve lower-level visual processing [5-9]. However, in recent work we showed that orientation biases also occur with real-world objects in 360° space, which points to a high level of abstraction [10]. Can such abstraction and bias be explained by visual processing alone? Here, we examine whether orientation biases for real-world objects emerge in computer vision models of the ventral visual stream [11,12] and compare them with behavioral reports in a WM task.

Methods
We compared activations from a range of neural network models: brain-inspired CNNs [13], established feedforward CNNs [14-16], and vision transformers [17,18]. Each model was shown 144 natural objects, each with a distinct principal axis (i.e., not rotationally symmetric), rotated to 16 orientations spanning 360°. We used representational similarity analysis (RSA) to compare the models' layer activations to idealized representations of bias in 180° and 360° orientation space. Results were compared with human behavioral reports from orientation WM tasks using the same kind of stimuli.

Results
Neural networks showed orientation biases in 180° space, which grew stronger in deeper layers that have been proposed to model higher visual areas. In contrast, when we analyzed the full 360° orientation space with natural objects, the same models showed no orientation bias at any layer. This failure across architectures reveals a fundamental limitation: while the models capture orientation relationships in simple symmetric stimuli, they fail to recognize that differently shaped objects (e.g., a horizontal table and a vertical tower) can share the same orientation.
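The RSA comparison described in Methods can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual pipeline: it assumes a simple cardinal-repulsion warping as the idealized bias model, uses random activations in place of real network layers, and all names and constants are illustrative.

```python
import numpy as np

# Hypothetical sketch of RSA against idealized 180° / 360° bias predictors.
# The warping constant k and the bias model are illustrative assumptions.

n_orients = 16
angles = np.linspace(0.0, 360.0, n_orients, endpoint=False)  # 22.5° steps

def circ_dist(a, b, period):
    """Shortest circular distance between angles for a given period (deg)."""
    d = np.abs(a - b) % period
    return np.minimum(d, period - d)

def cardinal_repulsion(theta, k=5.0):
    """Toy bias model: orientations are warped away from the cardinal
    axes (multiples of 90°); k sets the repulsion strength in degrees."""
    return theta + k * np.sin(np.deg2rad(4.0 * theta))

def idealized_rdm(theta, period):
    """Idealized representational dissimilarity matrix: dissimilarity is
    the circular distance between bias-warped orientations."""
    w = cardinal_repulsion(theta)
    return circ_dist(w[:, None], w[None, :], period)

rdm_180 = idealized_rdm(angles, 180.0)  # grating-like, 180°-periodic space
rdm_360 = idealized_rdm(angles, 360.0)  # full object-orientation space

def spearman(x, y):
    """Spearman correlation via rank transform (ties ignored for brevity)."""
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

def rsa_score(layer_acts, model_rdm):
    """Correlate a layer's empirical RDM (1 - Pearson r across the 16
    orientation conditions) with an idealized bias RDM."""
    emp = 1.0 - np.corrcoef(layer_acts)   # (16, 16) empirical RDM
    iu = np.triu_indices(n_orients, k=1)  # unique condition pairs only
    return spearman(emp[iu], model_rdm[iu])

# Usage with random stand-in activations (16 orientations x 512 features):
acts = np.random.default_rng(0).normal(size=(n_orients, 512))
score_180 = rsa_score(acts, rdm_180)
score_360 = rsa_score(acts, rdm_360)
```

Under this scheme, a layer that represents orientation in grating-like 180° space yields a high score against `rdm_180` but not `rdm_360`; note that 0° and 180° are identical in the 180° predictor but maximally distant in the 360° predictor.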
Our parallel human behavioral experiment showed that, unlike these models, people exhibit orientation biases in working memory across the full 360° spectrum with similar natural objects.

Discussion
We found no evidence for a biased representation in 360° space in any layer of the vision models we tested. In contrast, human behavioral reports and eye-gaze patterns from WM experiments showed a clear 360° bias. This indicates that the bias in our task emerges at the level of an abstracted stimulus feature (the object's orientation relative to its real-life upright position) rather than at the level of low-level visual features. Our findings also suggest that for real-world objects requiring such abstraction, 360° orientation information is not represented in these current models of visual processing. Future work should focus on validating these exploratory findings experimentally.
Acknowledgements
We acknowledge the Max Planck Institute for Human Development for providing computing resources and facilities. We also thank the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research (IMPRS COMP2PSYCH) for funding support. Additionally, we appreciate helpful discussions and comments from Felix Broehl and Ines Pont Sanchis.

References