P057 Biologically Interpretable Machine Learning Approaches for Analyzing Neural Data
Madelyn Esther C. Cruz*1,2, Daniel B. Forger1,2,3
1Department of Mathematics, University of Michigan, Ann Arbor, MI, USA 2Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA 3Michigan Center for Interdisciplinary and Applied Mathematics, University of Michigan, Ann Arbor, MI, USA
*Email: mccruz@umich.edu

Introduction
Deep neural networks (DNNs) often achieve impressive classification performance, but they operate as "black boxes," making them challenging to interpret [1]. They may also struggle to capture the dynamics of time-series data, such as electroencephalograms (EEGs), because they handle temporal information only indirectly. To address these challenges, we explore the use of Biological Neural Networks (BNNs), machine learning models inspired by the brain's physiology, on neuronal data. By building on biophysical neuron models, BNNs closely follow neural dynamics, offering better interpretability and insight into how biological systems generate complex behavior.

Methods
This study applies backpropagation to networks of biophysically accurate mathematical neuron models to develop a BNN model. Specifically, we define a BNN architecture using modified versions of the Hodgkin–Huxley model [2] and integrate it within traditional neural network algorithms. These BNNs are then used to classify both EEG and non-EEG signals, generate EEG signals to predict brain states, and analyze EEG neurophysiology through model-derived parameters. We also compare the performance of our BNN architecture to that of traditional neural networks.

Results
Our BNNs demonstrate strong performance in classifying handwritten digits from the MNIST Digits Dataset, learning faster than traditional neural networks. The same BNN architecture also excels on time-series neuronal datasets, effectively distinguishing EEG recordings and power spectral densities associated with alertness versus fatigue, varying consciousness levels, and different workloads. Additionally, we trained our BNNs to exhibit different frequencies observed in EEG recordings and found that the variability of synaptic weights and applied currents increased with the target frequency range.
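To make the building block concrete, the following is a minimal sketch (our illustration, not the authors' code) of the kind of biophysical unit such a BNN could be assembled from: a single Hodgkin–Huxley neuron integrated with forward Euler. The parameter values are the classic squid-axon constants; the applied current `i_app` and the step size are illustrative assumptions.

```python
import math

def simulate_hh(i_app, t_max=50.0, dt=0.01):
    """Return the membrane-voltage trace (mV) of one Hodgkin-Huxley neuron
    driven by a constant applied current i_app (uA/cm^2)."""
    # Classic Hodgkin-Huxley constants (mS/cm^2, mV, uF/cm^2).
    c_m = 1.0
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    v, m, h, n = -65.0, 0.0529, 0.5961, 0.3177  # approximate resting state
    trace = [v]
    for _ in range(int(t_max / dt)):
        # Voltage-dependent rate functions for the gating variables.
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        # Ionic currents.
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        # Forward-Euler update of the four state variables.
        v += dt * (i_app - i_na - i_k - i_l) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace

# With sufficient current the neuron spikes (V crosses 0 mV);
# with no input it stays near the -65 mV resting potential.
spiking = simulate_hh(i_app=10.0)
quiet = simulate_hh(i_app=0.0)
```

Because every state update above is a differentiable function of the conductances and applied current, gradients of a downstream loss can in principle be backpropagated through the unrolled integration, which is what allows model-derived parameters to be fit to data.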
Discussion
Analyzing gradients from backpropagation in BNNs reveals similarities between their learning mechanisms and Hebbian learning in the brain, in terms of how synaptic weight changes affect the loss function and how changing the weights at specific time intervals impacts learning. In particular, synaptic weight updates occur only when presynaptic or postsynaptic neurons fire [3]. This results in fewer parameter changes during training compared to DNNs while still capturing temporal dynamics, leading to improved learning efficiency and interpretability. Overall, applying backpropagation to accurate ordinary differential equation models enhances neuronal data classification and interpretability while providing insights into how the brain learns.
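The spike-gated update rule described above can be sketched as follows. This is our hypothetical illustration of the general idea, not the authors' implementation: per-time-bin gradients are applied only in bins where the presynaptic or postsynaptic neuron fired, so silent periods contribute no weight change.

```python
def gated_updates(pre_spikes, post_spikes, grads):
    """Hebbian-like gating: keep a gradient step only in time bins where
    the pre- or postsynaptic neuron fired; zero it elsewhere.

    pre_spikes, post_spikes: lists of 0/1 spike indicators per time bin.
    grads: list of per-bin gradient contributions for one synaptic weight.
    """
    updates = []
    for pre, post, g in zip(pre_spikes, post_spikes, grads):
        if pre or post:
            updates.append(g)    # weight changes only around spike events
        else:
            updates.append(0.0)  # silent bins leave the weight untouched
    return updates

# Example: spikes occur in bins 1 and 3, so only those gradients survive.
pre = [0, 1, 0, 0, 0]
post = [0, 0, 0, 1, 0]
grads = [0.2, 0.5, -0.1, 0.3, 0.4]
gated = gated_updates(pre, post, grads)  # [0.0, 0.5, 0.0, 0.3, 0.0]
```

Under this gating, the number of nonzero parameter changes scales with spiking activity rather than with the total number of time steps, which is one way to read the claim of fewer parameter changes than in DNNs.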
Acknowledgements
We acknowledge the following funding: ARO MURI W911NF-22-1-0223 to DBF.

References
1. http://doi.org/10.1038/nature14539
2. https://doi.org/10.1007/s00422-008-0263-8
3. https://doi.org/10.1016/j.neucom.2014.11.022