Tuesday July 8, 2025 17:00 - 19:00 CEST
P238 Identifying cortical learning algorithms using brain-machine interfaces

Sofia Pereira da Silva1,2, Denis Alevi1, Friedrich Schuessler*1,3, Henning Sprekeler*1,2,3


1 Modelling of Cognitive Processes, Technische Universität Berlin, Berlin, Germany
2 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
3 Science of Intelligence, Technische Universität Berlin, Berlin, Germany

Email: sofia@bccn-berlin.de
Introduction

By causally mapping neural activity to behavior [1], brain-machine interfaces (BMIs) offer a means to study the dynamics of sensorimotor learning. Here, we investigate the neural learning algorithm monkeys use to adapt to a changed output mapping in a center-out reaching task [2]. We exploit the fact that adapting the mapping from neural space (ca. 100 dimensions) to the 2D cursor position is an underconstrained credit assignment problem [3]: changes along the many output-null dimensions do not influence the behavioral output. We hypothesized that different but equally well-performing learning algorithms can be distinguished by the changes they generate along output-null dimensions.
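The split into output-potent and output-null dimensions can be made concrete with a toy sketch (not from the original work; the decoder matrix here is random, purely for illustration). For a linear decoder, the output-null space is simply the orthogonal complement of the decoder's row space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear BMI decoder mapping ~100-dim neural activity to a 2D cursor.
n_neurons, n_out = 100, 2
D = rng.standard_normal((n_out, n_neurons))  # decoder matrix (assumed for illustration)

# The row space of D spans the 2 output-potent dimensions; its orthogonal
# complement spans the 98 output-null dimensions.
_, _, Vt = np.linalg.svd(D)
potent_basis = Vt[:n_out]   # shape (2, 100)
null_basis = Vt[n_out:]     # shape (98, 100)

# Any activity change confined to the null space leaves the cursor unchanged:
delta = null_basis.T @ rng.standard_normal(n_neurons - n_out)
assert np.allclose(D @ delta, 0)
```

Because 98 of the 100 dimensions are behaviorally silent, very different learning algorithms can reach identical cursor performance while leaving distinct fingerprints in the null space.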

Methods
We combine computational modeling and data analysis to study the neural algorithms underlying learning in the BMI center-out task. We implement networks with three different learning rules (gradient descent, model-based feedback alignment, and reinforcement learning) and three distinct learning strategies (direct, re-aiming [4], remodeling [5]) in feedforward and recurrent architectures. The models' initial conditions are constrained using publicly available data from BMI experiments [6, 7, 8]. We train the models in cursor space and use linear regression to compare the resulting changes in neural space to the data.
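To illustrate why distinct learning rules can solve the same cursor task yet differ in neural space, here is a minimal linear sketch (assumed setup, not the authors' models): a fixed decoder D reads the cursor from neural activity, and upstream weights W are adapted either by exact gradient descent or by a feedback-alignment-style rule whose feedback matrix is a noisy internal model of the decoder:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_neurons, n_out, n_trials = 50, 100, 2, 400

# Toy linear setting (assumed for illustration): a fixed BMI decoder D reads a
# 2D cursor from neural activity h = W @ s; learning adapts upstream weights W.
S = rng.standard_normal((n_trials, n_in))                         # upstream inputs
D = rng.standard_normal((n_out, n_neurons)) / np.sqrt(n_neurons)  # fixed decoder
W_star = rng.standard_normal((n_neurons, n_in)) / np.sqrt(n_in)   # defines targets
Y = S @ W_star.T @ D.T                                            # desired cursor output

def train(rule, lr=0.2, epochs=500):
    W = np.zeros((n_neurons, n_in))
    # "Model-based" feedback: a noisy internal model of the decoder transpose.
    B = D.T + 0.1 * rng.standard_normal((n_neurons, n_out))
    for _ in range(epochs):
        E = S @ W.T @ D.T - Y                  # cursor-space error
        fb = D.T if rule == "gradient" else B  # credit assignment differs here
        W -= lr * fb @ E.T @ S / n_trials
    return W, np.mean((S @ W.T @ D.T - Y) ** 2)

W_gd, loss_gd = train("gradient")
W_fa, loss_fa = train("feedback_alignment")
# Both rules drive the cursor-space error to zero, but the weight updates live in
# different subspaces (columns of D.T vs. columns of B), so the learned changes
# differ along output-null dimensions.
```

In this sketch both rules reach the same input-output mapping in cursor space, while the resulting weight changes are distinguishable, which is the logic behind comparing model-induced changes in neural space to the data.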


Results
We first verify that all implemented algorithms learn the task in cursor space. In terms of neural activity, we find that different combinations of rules and architectures lead to changes in different low-dimensional subspaces. For instance, re-aiming is by definition constrained to a lower-dimensional subspace, so neural activity changes are more similar across algorithms within this strategy than within the other strategies. Comparing the changes in neural activity and their subspaces with available data from BMI experiments points to learning as a combination of different algorithms. However, the algorithms do not explain all of the variance, indicating additional changes outside the modeled subspaces.
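A standard way to quantify how similar two low-dimensional subspaces of activity changes are is via principal angles. A small self-contained sketch (toy data, assumed for illustration; the original analysis uses linear regression on experimental data):

```python
import numpy as np

rng = np.random.default_rng(2)

def principal_angles(A, B):
    """Principal angles (degrees) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))

# Toy example (assumed): two models whose activity changes in a 100-dim neural
# space share two directions but otherwise occupy unrelated subspaces.
shared = rng.standard_normal((100, 2))
model_1 = np.hstack([shared, rng.standard_normal((100, 2))])
model_2 = np.hstack([shared, rng.standard_normal((100, 2))])

angles = principal_angles(model_1, model_2)
# The two smallest angles are near 0 (shared directions); the rest are large,
# reflecting the unshared, model-specific dimensions.
```

Algorithms within the same strategy (e.g., re-aiming) would show small principal angles between their change subspaces, while algorithms from different strategies would not.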

Discussion
Bridging BMI experiments and population dynamics analyses creates a framework for studying how learning unfolds in the brain. Our results suggest that monkeys employ a combination of previously proposed strategies to learn BMI tasks, involving both model-based and model-free learning. Future work should further explore recurrent architectures to better capture biological dynamics. Moreover, methods that characterize learning manifolds and trial-to-trial variability could offer additional leverage for comparing models and data. Finally, comparison with longitudinal datasets that track learning over time would help clarify how the learning dynamics progress.





Acknowledgements

References
1. https://doi.org/10.1016/j.conb.2015.12.005
2. https://proceedings.neurips.cc/paper_files/paper/2022/hash/a6d94c38506f16fb50894a5b555f2c9a-Abstract-Conference.html
3. https://doi.org/10.1371/journal.pcbi.1008621
4. https://doi.org/10.1101/2024.04.18.589952
5. https://doi.org/10.7554/eLife.10015
6. https://doi.org/10.1038/s41593-018-0095-3
7. https://doi.org/10.1038/s41593-021-00822-8
8. https://doi.org/10.7554/eLife.36774
Passi Perduti
