Concurrent Multimodal Data Acquisition During Brain Scanning is within Reach

bioRxiv (2021)

Abstract
Background: Previous brain-scanning research exploring the neural mechanisms underpinning visuomotor planning and control has mostly been done without simultaneous motion tracking and eye tracking. Employing concurrent methodologies would enhance understanding of the brain mechanisms underlying visuomotor integration of the cognitive, visual, ocular, and motor aspects of reaching and grasping behaviours. This work therefore presents the methods and validation for a high-speed, multimodal, synchronized system to holistically examine the neural processes involved in visually guided movement.

Methods: The multimodal methods comprised high-speed 3D motion tracking (Qualisys), 2D eye tracking (SR Research), and magnetoencephalography (MEG; Elekta), synchronized to millisecond precision. Previously acquired MRIs provided improved spatial localization. The methods section describes the system layout and acquisition parameters used to achieve multimodal synchronization. The pilot results presented here are preliminary data from a larger study of 29 participants. Using a pincer grip, five people (3 male, 2 female, ages 30-32) reached for and grasped a translucent dowel 50 times after it was pseudorandomly illuminated; the object illumination served as the Go cue. Seven discrete time points (events) throughout the task were chosen for investigating simultaneous brain, hand, and eye activity associated with specific visual (Go cue), oculomotor (first saccade after Go), motor (Reaction Time, RT; Maximum Velocity, MV; Maximum Grip Width, MGW), or cognitive (Ready, End) mechanisms. Time-frequency analyses were performed on MEG data sourced from the left precentral gyrus to explore task-related changes time-locked to these events.

Pilot results: Basic kinematic parameters, including RT, MV, MGW, Movement Time, and Total Time, were similar to the seminal research of [Castiello, Paulignan and Jeannerod (1991)][1], which used a similar task. Although no gaze instructions were given, eye-tracking results indicated that volunteers mostly gazed at or near the target object at the Ready event (72%) and rarely looked away at the subsequent events sampled here (92%-98%). At the End event, when lifting the dowel, participants on average gazed at or near the target object 100% of the time. Saccades occurring > 100 ms after Go but before RT were made in roughly a quarter of trials (M = 13, SD = 6); a mixed model (REML) indicated that their latency after Go was significantly associated with RT on those trials (F = 13.376, p = .001; AIC = 724, R²m = 0.407, R²c = 0.420). Beta-band neural activity relative to baseline was desynchronized throughout the visually guided reach, beginning before Go and remaining sustained beyond End, after the grasp and lift were executed.

Conclusion: This study presents the layout, acquisition parameters, and validation of a multimodal, synchronized system designed to record data from the hand, eye, and brain simultaneously, with millisecond precision, during an ecologically valid prehension task with physical 3D objects. The pilot results align with previous research based on unimodal or bimodal recordings. This multimodal method enables full-brain modelling that can holistically map the precise location and timing of neural activity involved in the visual, oculomotor, motor, and cognitive aspects of reach-to-grasp planning and control.
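To make the millisecond-precision synchronization requirement concrete, one generic way to bring eye- and motion-tracking timestamps onto the MEG clock is to fit a linear map through a shared trigger-pulse train. This is a minimal sketch of that idea, not the authors' actual trigger wiring; the function and variable names are hypothetical:

```python
import numpy as np

def align_to_meg(event_times_local, pulse_local, pulse_meg):
    """Map event times (s) from one device's clock onto the MEG clock.

    pulse_local / pulse_meg: timestamps of the same shared TTL pulses as
    recorded by each device. A least-squares line absorbs both the fixed
    offset and any slow clock drift between the two recordings.
    (Hypothetical helper; the paper's actual trigger scheme may differ.)
    """
    slope, intercept = np.polyfit(pulse_local, pulse_meg, deg=1)
    return slope * np.asarray(event_times_local) + intercept
```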
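Similarly, the motor events named in the abstract (RT, MV, MGW) can be derived from 3D marker trajectories roughly as follows. A minimal sketch assuming wrist, thumb, and index markers from the motion-capture stream; the sampling rate and velocity threshold are illustrative placeholders, not the authors' criteria:

```python
import numpy as np

FS = 300.0  # Hz; placeholder motion-capture sampling rate


def kinematic_events(wrist, thumb, index, go_idx, speed_thresh=0.05):
    """Derive RT, Maximum Velocity and Maximum Grip Width from 3D marker
    trajectories (n_samples x 3 arrays, metres). Names and the 0.05 m/s
    movement-onset threshold are illustrative assumptions.
    """
    # Tangential wrist speed (m/s) via finite differences.
    speed = np.linalg.norm(np.diff(wrist, axis=0), axis=1) * FS

    # Reaction Time: first post-Go sample exceeding the speed threshold.
    moving = np.nonzero(speed[go_idx:] > speed_thresh)[0]
    rt_ms = moving[0] / FS * 1000.0 if moving.size else np.nan

    # Maximum Velocity: peak wrist speed after the Go cue.
    mv = speed[go_idx:].max()

    # Grip aperture: thumb-index distance; MGW is its post-Go peak.
    aperture = np.linalg.norm(thumb - index, axis=1)
    mgw = aperture[go_idx:].max()

    return rt_ms, mv, mgw
```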
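The reported association between pre-movement saccade latency and RT corresponds to a standard linear mixed model. A sketch using statsmodels, assuming a hypothetical per-trial table with subject, saccade_latency_ms, and rt_ms columns:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per trial containing a saccade
# > 100 ms after Go and before RT. File and column names are assumptions.
trials = pd.read_csv("saccade_trials.csv")

# Linear mixed model fit by REML: fixed effect of saccade latency on RT,
# with a random intercept per participant.
model = smf.mixedlm("rt_ms ~ saccade_latency_ms", trials,
                    groups=trials["subject"])
result = model.fit(reml=True)
print(result.summary())
# statsmodels reports Wald z-tests per coefficient; the abstract's F test
# and Nakagawa-style R²m/R²c would come from other software
# (e.g. lme4/MuMIn in R).
```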
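Finally, the beta-band desynchronization finding reflects a standard baseline-relative time-frequency analysis. A sketch with MNE-Python, a common toolchain for Elekta MEG data, though the authors' exact pipeline is not stated; the filename, baseline window, and 13-30 Hz band are assumptions:

```python
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Hypothetical epochs time-locked to one of the seven events (e.g. Go),
# from sensors or sources over the left precentral gyrus.
epochs = mne.read_epochs("sub-01_go-epo.fif")  # placeholder filename

# Morlet-wavelet power across the beta band (13-30 Hz is a common choice;
# the abstract does not state the exact frequency bins).
freqs = np.arange(13.0, 31.0, 1.0)
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                   return_itc=False, average=True)

# Express power as log-ratio change relative to a pre-event baseline, so
# negative values correspond to the reported beta desynchronization.
power.apply_baseline(baseline=(-1.0, -0.5), mode="logratio")
power.plot(picks=[0])
```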
### Competing Interest Statement

The authors have declared no competing interest.

[1]: #ref-19
Keywords
concurrent multimodal data acquisition, brain scanning