Free Moving Gaze-related Electroencephalography in Mobile Virtual Environments

Journal of Vision (2023)

Abstract
In realistic contexts involving natural, unconstrained head and body movement, methods for parsing eye movements and using these segmentations to analyze other modalities of synchronously recorded physiological data are still in their infancy. In this study, we recorded eye and head movements, along with simultaneous electroencephalography (EEG), during a 3D visual oddball task in a virtual environment. Healthy adults evaluated standards and deviants presented to their near or far peripheral field of view either by moving their eyes only (low head movement, HM) or by turning their heads (high HM). Compensatory eye movements, likely related to the vestibulo-ocular reflex, were found to accompany high HM, suggesting that algorithms based purely on the angular velocity of pupil movement may be inadequate for parsing fixations and saccades during head movement, since visual processing likely begins before compensatory eye movement attenuates. To assess the validity of a velocity-based parsing approach, we compared three approaches to computing fixation-related potentials (FRPs) during either low or high HM: stimulus-locking, gaze-related fixation-locking, and simple gaze-locking. Under low HM conditions, both gaze-related fixation-locking and simple gaze-locking yielded classic oddball effects within the expected time window of the P300. Further, evidence of sensitivity to the relative frequency of standards versus deviants was detectable from about 300 ms before fixation onset. On high HM trials, gaze-related fixation-locking yielded a more robust P300 effect than simple gaze-locking. This outcome reveals that, despite uncertainty in fixation onset times during head turns, fixation-locking approaches incorporating gaze information are viable in paradigms involving head movement.
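For context on the parsing approach the abstract questions: a minimal sketch of velocity-threshold (I-VT) classification, the kind of purely velocity-based fixation/saccade parser discussed above. The function name, threshold value, and data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ivt_parse(gaze_deg, timestamps, velocity_threshold=30.0):
    """Label each gaze sample as saccade (True) or fixation (False)
    with a simple velocity-threshold (I-VT) rule.

    gaze_deg           : (N, 2) gaze angles in degrees (azimuth, elevation)
    timestamps         : (N,) sample times in seconds
    velocity_threshold : deg/s; faster samples are labeled saccade
    """
    deltas = np.diff(gaze_deg, axis=0)            # angular step per sample
    dt = np.diff(timestamps)                      # time per step
    speed = np.linalg.norm(deltas, axis=1) / dt   # angular speed, deg/s
    # First sample has no preceding step; treat it as fixation.
    return np.concatenate([[False], speed > velocity_threshold])

# Example at 100 Hz: still eye, fast jump, still again.
t = np.arange(6) / 100.0
gaze = np.array([[0, 0], [0, 0], [5, 0], [10, 0], [10, 0], [10, 0]], float)
labels = ivt_parse(gaze, t)  # -> [False, False, True, True, False, False]
```

During a head turn, compensatory (vestibulo-ocular) eye rotation keeps pupil velocity high even while gaze is stable in the world, which is why a threshold on pupil velocity alone, as above, can misclassify such periods; the fixation-locking approaches compared in the study incorporate gaze (world-referenced) information instead.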
Keywords
electroencephalography, virtual, gaze-related