Saturday July 5, 2025 13:00 - 16:00 CEST
Recent advances in machine learning make it possible to train recurrent neural networks (RNNs) to perform highly complex tasks. One task of particular interest to neuroscientists is to reproduce experimentally recorded neural activity in RNNs and then study the dynamics of the trained networks to investigate the underlying neural mechanisms.

Here we showcase how a widely used training method based on recursive least squares (known as FORCE) can be adapted to train spiking RNNs to reproduce spike recordings of cortical neurons. We first give an overview of the original FORCE learning, which trains the outputs of rate-based RNNs to perform tasks, and show how it can be modified to generate arbitrarily complex activity patterns in spiking RNNs. Using this method, we show that a subset of neurons embedded in a network of randomly connected excitatory and inhibitory spiking neurons can be trained to reproduce recorded cortical activity.
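
A minimal, self-contained sketch of the original rate-based FORCE/RLS update (Sussillo & Abbott, 2009) is given below. The network size, time constants, feedback architecture, and the sinusoidal target are illustrative placeholders rather than settings from the tutorial; the spiking variants (Kim & Chow, 2018, 2021) apply the same rank-one RLS update to recurrent synaptic weights so that synaptic currents follow target activity.

    import numpy as np

    # Hypothetical toy setup: a random rate RNN whose scalar readout is trained
    # with recursive least squares (FORCE) to follow a target signal.
    rng = np.random.default_rng(0)
    N, dt, tau, g = 500, 1e-3, 1e-2, 1.5   # neurons, step (s), time constant (s), gain
    steps = int(2.0 / dt)                  # simulate 2 s of training

    J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed random recurrent weights
    w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed random feedback weights
    w = np.zeros(N)                                    # trainable readout weights
    P = np.eye(N)                                      # running inverse-correlation estimate
    x = 0.5 * rng.standard_normal(N)                   # network state

    def target(t):
        return np.sin(2 * np.pi * 2.0 * t)             # placeholder 2 Hz target signal

    for k in range(steps):
        t = k * dt
        r = np.tanh(x)                                 # firing rates
        z = w @ r                                      # readout
        x += dt / tau * (-x + J @ r + w_fb * z)        # rate dynamics with readout feedback

        if k % 2 == 0:                                 # apply an RLS update every other step
            Pr = P @ r
            gain = Pr / (1.0 + r @ Pr)                 # RLS gain vector
            P -= np.outer(gain, Pr)                    # rank-one update of P
            w -= (z - target(t)) * gain                # reduce the instantaneous readout error

Feeding the readout back through fixed random weights keeps the network dynamics coupled to the trained output, which is what allows the rapid RLS updates to stabilize an otherwise chaotic network.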

References:
  • Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4), 544-557.
  • Kim, C. M., & Chow, C. C. (2018). Learning recurrent dynamics in spiking networks. eLife, 7, e37124.
  • Kim, C. M., & Chow, C. C. (2021). Training spiking neural networks in the strong coupling regime. Neural Computation, 33(5), 1199-1233.
  • Kim, C. M., Finkelstein, A., Chow, C. C., Svoboda, K., & Darshan, R. (2023). Distributing task-related neural activity across a cortical network through task-independent connections. Nature Communications, 14(1), 2851.

Speakers
Christopher Kim

Assistant Professor, Howard University
Room 101
