Sunday July 6, 2025 17:20 - 19:20 CEST
P081 Unsupervised Dynamical Learning in Recurrent Neural Networks

Luca Falorsi*1,2, Maurizio Mattia2, Cristiano Capone2

1PhD program in Mathematics, Sapienza Univ. of Rome, Piazzale Aldo Moro 5, Rome, Italy
2Natl. Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, Viale Regina Elena 299, Rome, Italy

*Email: luca.falorsi@gmail.com


Introduction
Humans and other animals rapidly adapt their behavior, indicating that the brain can dynamically reconfigure its internal representations in response to changing contexts. We introduce a framework, grounded in predictive coding theory [1], that integrates reservoir computing [2] with latent variable models: a recurrent neural network learns to reproduce sequences while structuring a latent state space without direct contextual labels, unlike standard approaches that rely on explicit context vectors [3]. We achieve this by redefining the readout of an echo state network (ESN) [2] as a latent variable model that adapts via gain modulation to track and reproduce the ongoing, in-context sequence.
Methods
An ESN processes sequence examples from a related set of tasks, extracting high-dimensional, nonlinear temporal features. In the first learning phase, we train an encoder network that acquires a low-dimensional latent space from the reservoir activity elicited by varying inputs. The synaptic weights W are optimized offline to map reservoir responses into this latent space; one simple and effective choice is principal component analysis (PCA).
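A minimal sketch of this offline phase is given below (Python; the leaky-tanh reservoir dynamics are the standard ESN form, while the hyperparameters, dimensions, variable names, and placeholder training data are our illustrative assumptions, not the authors' exact setup):

```python
# Minimal sketch of the offline encoder-training phase.
import numpy as np

rng = np.random.default_rng(0)
N, D_in, K = 500, 2, 3              # reservoir size, input dim, latent dim (assumed)

# Random, fixed reservoir: a standard leaky echo state network.
W_rec = rng.normal(0, 1, (N, N))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))   # spectral radius < 1
W_in = rng.uniform(-1, 1, (N, D_in))
leak = 0.1

def run_reservoir(u_seq, x0=None):
    """Drive the reservoir with an input sequence; return the state trajectory."""
    x = np.zeros(N) if x0 is None else x0
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_rec @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Placeholder task examples; the paper uses triangle trajectories at three orientations.
training_sequences = [rng.normal(size=(200, D_in)) for _ in range(3)]

# Collect reservoir responses to the example sequences, then fit the encoder
# weights W by PCA on the pooled activity.
X = np.vstack([run_reservoir(seq) for seq in training_sequences])   # (T_total, N)
X_mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
W = Vt[:K]                          # rows = top-K principal directions

def encode(x):
    """Project a reservoir state into the K-dimensional latent space."""
    return W @ (x - X_mean)
```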
When the network is presented with a novel sequence associated with a new context, the latent projections are linearly recombined through gain variables g. These gains represent latent features of the current context and adapt dynamically to minimize a time-discounted prediction error.
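One way to make this concrete (the notation and the specific gradient form below are our assumptions; the abstract states only that the gains minimize a time-discounted prediction error): with reservoir state x(t), latent projection z(t) = W x(t), and gains G recombining the latents into the prediction, gradient descent on the discounted error gives

\[
\hat{y}(t) = G\, z(t), \qquad
E(t) = \sum_{s \le t} \lambda^{\,t-s}\, \lVert y(s) - G\, z(s) \rVert^2 ,
\]
\[
\Delta G \;\propto\; -\frac{\partial E}{\partial G}
\;=\; 2 \sum_{s \le t} \lambda^{\,t-s}\, \big( y(s) - \hat{y}(s) \big)\, z(s)^{\top} ,
\]

where λ ∈ (0, 1) sets the discount horizon; in an online form, the sum over past errors is replaced by a leaky integration of the instantaneous error.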
Results
We evaluate the architecture on datasets of periodic trajectories, including the task of tracing triangles at different orientations (Fig. 1). The encoder is trained offline with PCA on three predefined orientations and tested on previously unseen ones. The network generalizes well across the task family, accurately reproducing unseen sequences. When presented with a novel sequence, the readout adapts in context, adjusting its gain parameters to optimally recombine the principal components based on prediction-error feedback (nudging phase). Once the gain parameters stabilize, the feedback is gradually removed and the network reproduces the sequence autonomously (closed-loop phase).
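A sketch of this two-phase protocol, continuing the Methods sketch above (the delta-rule gain update, hyperparameters, and placeholder target are our assumptions; in the experiments feedback is removed gradually rather than switched off abruptly as here):

```python
# Test-time adaptation on a novel sequence (continues the Methods sketch).
eta, lam = 0.05, 0.99               # learning rate and error discount (assumed)
D_out = D_in                        # the network predicts the input trajectory
G = np.zeros((D_out, K))            # gains recombining latents into the output

t_axis = np.linspace(0, 4 * np.pi, 400)
novel_sequence = np.stack([np.cos(t_axis), np.sin(t_axis)], axis=1)  # placeholder target

x = np.zeros(N)
y_hat = np.zeros(D_out)
err_trace = np.zeros(D_out)

# --- Nudging phase: the target drives the reservoir; gains track the error.
for y in novel_sequence:
    x = (1 - leak) * x + leak * np.tanh(W_rec @ x + W_in @ y)
    z = encode(x)
    y_hat = G @ z
    err_trace = lam * err_trace + (y - y_hat)     # time-discounted error
    G += eta * np.outer(err_trace, z)             # online gain update

# --- Closed-loop phase: feedback removed, the output is fed back as input.
outputs = []
for _ in range(len(novel_sequence)):
    x = (1 - leak) * x + leak * np.tanh(W_rec @ x + W_in @ y_hat)
    y_hat = G @ encode(x)
    outputs.append(y_hat)
mse = np.mean((np.asarray(outputs) - novel_sequence) ** 2)   # cf. Fig. 1C
```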
Discussion
The proposed framework decomposes the readout mechanism of a recurrent neural network into fixed synaptic components shared across a task family and a dynamic component that adapts in response to contextual feedback. During online adaptation, the network behaves as a gain-modulated reservoir, with gain variables adjusting in response to prediction errors [4]. This aligns with biological evidence that top-down dendritic inputs modulate neuronal gain, shaping context-dependent responses [5]. Our approach offers insights into motor control, suggesting that gain modulation enables the flexible recombination of movement primitives [6], much as muscle synergies organize motor behaviors through structured activation patterns [7].



Figure 1. A: Network output trajectories during the dynamical adaptation phase on novel trajectories. B: Principal components (PCs) of the learned gain parameters g. The architecture infers the underlying latent task geometry, correctly representing the 120° rotational symmetry. C: Mean squared reconstruction error (MSE) during the closed-loop phase. Dashed lines show the standard deviation over 10 trials.
Acknowledgements
LF acknowledges support from ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, and from Sapienza University of Rome (AR12419078A2D6F9).
MM and CC acknowledge support from the Italian National Recovery and Resilience Plan (PNRR), M4C2, funded by the European Union–NextGenerationEU (Project IR0000011, CUP B51E22000150006, "EBRAINS-Italy").
References
1. https://doi.org/10.1098/rstb.2008.0300
2. https://doi.org/10.1126/science.1091277
3. https://doi.org/10.1103/PhysRevLett.125.088103
4. https://doi.org/10.48550/arXiv.2404.07150
5. https://doi.org/10.1093/cercor/bhh065
6. https://doi.org/10.1038/s41593-018-0276-0
7. https://doi.org/10.1038/nn1010