P235 Beyond the Response: How Post-Response EEG Signals Improve Lie Detection
Hanbeot Park¹, Hoon-hee Kim*²
¹ Department of Data Engineering, Pukyong National University, Busan, Korea
²Department of Computer Engineering and Artificial Intelligence, Pukyong National University, Busan, Korea
*Email: h2kim@pknu.ac.kr
Introduction
In modern society, lies, intentional or not, are widespread and impose cognitive burdens and neurophysiological changes. Lying produces psychological tension, extra cognitive processing, and emotional strain, which are reflected in distinct neural activity patterns. While earlier lie detection studies focused on EEG signals recorded during the response itself, recent research suggests that post-response activity, which captures further evaluation and lingering tension, provides critical information for distinguishing deception from truth [1]. In this study, EEG from 12 subjects was recorded during responses and for 15 seconds post-response. Extracted features were classified using a sliding-window machine learning approach, with post-response features enhancing classification performance.
Methods
Using a uniform 64-channel EEG system, this study investigated deception by recording EEG from 12 subjects who answered six questions under lie or truth conditions. Data were recorded during the response period and for 15 seconds post-response. Preprocessing steps included bandpass filtering, notch filtering, artifact removal, average referencing, and downsampling. To capture both local and long-term patterns, a multi-layer model (Fig. 1) was built by combining the SSM-based Mamba [2] with the MoE [3] technique. Statistical and neural features were extracted. EEG data were segmented into 0.5-second windows with a 0.025-second overlap, and question-level cross-validation identified the most informative time interval for lie detection.
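The preprocessing chain and windowing scheme described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the sampling rate, filter band, notch frequency, and downsampling factor are assumptions chosen for the example, and artifact removal is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, decimate

def preprocess(eeg, fs=500, band=(0.5, 45.0), notch_hz=60.0, down=2):
    """Sketch of the preprocessing steps named in the abstract.
    All parameter values are illustrative assumptions, not the paper's settings.
    eeg: array of shape (channels, samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, eeg, axis=-1)              # bandpass filtering
    bn, an = iirnotch(notch_hz, Q=30.0, fs=fs)
    x = filtfilt(bn, an, x, axis=-1)              # notch filtering (line noise)
    x = x - x.mean(axis=0, keepdims=True)         # average referencing
    return decimate(x, down, axis=-1, zero_phase=True)  # downsampling

def sliding_windows(x, fs, win_s=0.5, overlap_s=0.025):
    """Segment into 0.5 s windows overlapping by 0.025 s, as in the abstract.
    Returns an array of shape (n_windows, channels, window_samples)."""
    win = int(win_s * fs)
    step = int((win_s - overlap_s) * fs)
    starts = range(0, x.shape[-1] - win + 1, step)
    return np.stack([x[..., s:s + win] for s in starts])
```

On 10 seconds of 4-channel data at 500 Hz, this yields a (4, 2500) array after 2x downsampling, which `sliding_windows` then cuts into 125-sample windows.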
Results
The classification model evaluation confirmed that EEG features from various time intervals significantly differentiate lies from truth, as shown by question-level cross-validation. Features from the post-response interval significantly outperformed those from the pre-response interval (P < 0.005), with the effective features achieving a performance improvement of 0.150 ± 0.007. Moreover, intervals covering the entire post-response period yielded the best results. Notably, skewness, kurtosis, zero crossing, and sample entropy effectively capture the non-linear, dynamic EEG changes associated with additional cognitive processing after answering, underscoring their potential as key neurophysiological indicators for lie detection.
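The four informative features named above (skewness, kurtosis, zero crossings, sample entropy) can be computed per channel per window as sketched below. The sample-entropy parameters `m=2` and `r = 0.2·std` are common defaults assumed for illustration, not values reported in the abstract.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def zero_crossings(x):
    """Count sign changes in a 1-D signal."""
    s = np.signbit(x).astype(np.int8)
    return int(np.count_nonzero(np.diff(s)))

def sample_entropy(x, m=2, r_factor=0.2):
    """Plain O(n^2) sample entropy; m and r_factor are common defaults,
    assumed here rather than taken from the paper."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)
    def matches(mm):
        # Pairwise Chebyshev distances between all length-mm templates
        t = np.array([x[i:i + mm] for i in range(n - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return np.sum(d <= r) - len(t)  # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else np.inf

def window_features(win):
    """Feature vector per channel for one (channels x samples) window."""
    return np.array([[skew(ch), kurtosis(ch), zero_crossings(ch),
                      sample_entropy(ch)] for ch in win])
```

Each window thus maps to a (channels, 4) feature matrix that can be flattened and fed to a classifier.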
Discussion
Using question-level CV, this study confirmed that several statistical and neurophysiological EEG features from the post-response interval significantly enhanced lie detection performance compared to those from the pre-response interval (P < 0.005). These findings suggest that subjects sustain tension and engage in extra cognitive processing after responding, producing distinct neural patterns of deception. Although the small sample size and use of question-level CV may limit generalizability, post-response EEG data provided more stable and reliable neural patterns. Future studies should use subject-level CV and further explore the optimal duration of the post-response interval.
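The question-level CV scheme discussed above, in which all windows from a held-out question are excluded from training, can be sketched with scikit-learn's `LeaveOneGroupOut`. The logistic-regression classifier is a stand-in for illustration only, not the paper's Mamba+MoE model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def question_level_cv(features, labels, question_ids):
    """Question-level cross-validation: every fold holds out all windows
    belonging to one question, so the model never trains on windows from
    the test question. The classifier here is an illustrative stand-in."""
    return cross_val_score(LogisticRegression(max_iter=1000),
                           features, labels,
                           groups=question_ids, cv=LeaveOneGroupOut())
```

With six questions this produces six fold scores, one per held-out question; a subject-level variant would simply pass subject IDs as the `groups` argument instead.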
Figure 1. Overall Structure of Lie Detection. This architecture employs data processing and feature extraction, followed by a multi-layer model that leverages Mamba and MoE.
Acknowledgements
This study was supported by the National Police Agency and the Ministry of Science, ICT & Future Planning (2024-SCPO-B-0130), by the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00242528), and by the National Program for Excellence in SW, supervised by the IITP (Institute of Information & communications Technology Planning & Evaluation) in 2025 (2024-0-00018).
References
[1] J. Gao et al., "Brain Fingerprinting and Lie Detection: A Study of Dynamic Functional Connectivity Patterns of Deception Using EEG Phase Synchrony Analysis," IEEE J Biomed Health Inform, vol. 26, no. 2, pp. 600–613, Feb. 2022, doi: 10.1109/JBHI.2021.3095415.
[2] A. Gu and T. Dao, "Mamba: Linear-Time Sequence Modeling with Selective State Spaces," Dec. 2023, [Online]. Available: http://arxiv.org/abs/2312.00752
[3] N. Shazeer et al., "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer," Jan. 2017, [Online]. Available: http://arxiv.org/abs/1701.06538