Decoding Emotion Dimensions Arousal and Valence Elicited on EEG Responses to Videos and Images: A Comparative Evaluation.

BI (2023)

Abstract
This study compares the automatic classification of emotions, based on the self-reported level of arousal and valence collected with the Self-Assessment Manikin (SAM), when subjects were exposed to videos versus images. Classification is performed on electroencephalographic (EEG) signals from the public DEAP dataset and from a dataset collected at the University of Tsukuba, Japan. The experiments were defined to classify low versus high arousal/valence using a Convolutional Neural Network (CNN). The results show higher performance when subjects were exposed to videos: on the DEAP dataset, we obtained an area under the receiver operating characteristic curve (AUROC) of 0.844 ± 0.008 and 0.836 ± 0.009 for classifying low versus high arousal and valence, respectively. In contrast, when subjects were stimulated with images, the performance was 0.621 ± 0.007 for both arousal and valence classification. This difference was confirmed by repeating the experiments with a method based on the Discrete Wavelet Transform (DWT) for feature extraction and a random forest for classification. Image-based stimulation may help to better understand low and high arousal/valence when analyzing event-related potentials (ERPs); however, according to these results, classification performance is higher with video-based stimulation.
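As a minimal sketch of the DWT-plus-random-forest baseline mentioned above (not the authors' exact pipeline): each EEG channel is decomposed with a discrete wavelet transform, the sub-band coefficients are summarized with simple statistics, and a random forest classifies low versus high arousal (or valence), reporting AUROC. The wavelet family ('db4'), decomposition level, summary statistics, and the synthetic data shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def dwt_features(trial, wavelet="db4", level=4):
    """Per-channel DWT sub-band statistics for one EEG trial (channels x samples)."""
    feats = []
    for channel in trial:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        for band in coeffs:  # approximation + detail coefficients
            feats.extend([band.mean(), band.std(), np.sum(band ** 2)])
    return np.asarray(feats)

# Synthetic stand-in for preprocessed EEG trials and binary (low/high) labels.
rng = np.random.default_rng(0)
trials = rng.standard_normal((200, 32, 512))   # 200 trials, 32 channels, 512 samples
labels = rng.integers(0, 2, size=200)          # 0 = low, 1 = high arousal/valence

X = np.stack([dwt_features(t) for t in trials])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

With real data, the random labels and trials would be replaced by preprocessed DEAP (or Tsukuba) recordings and their binarized SAM ratings; the same AUROC metric applies to the CNN variant reported in the abstract.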
Keywords
emotion dimensions arousal, EEG responses, valence elicited, evaluation