Testing the perceptual equivalence hypothesis in mental rotation of 3D stimuli with visual and tactile input

Experimental Brain Research (2018)

Abstract
Previous studies on mental rotation (i.e., the ability to imagine objects undergoing rotation; MR) have mainly focused on visual input, with comparatively little information about tactile input. In this study, we examined whether the processes underlying MR of 3D stimuli with both input modalities are perceptually equivalent (i.e., whether learning within modalities equals transfer-of-learning between modalities). We compared participants’ performance across two consecutive task sessions in either no-switch conditions (Visual→Visual or Tactile→Tactile) or switch conditions (Visual→Tactile or Tactile→Visual). Across both task sessions, we observed MR response differences between visual and tactile inputs, as well as limited transfer-of-learning. In no-switch conditions, participants showed significant improvements on all dependent measures. In switch conditions, however, we observed only significant improvements in response speeds with tactile input (RTs, intercepts, slopes: Visual→Tactile) and a near-significant improvement in response accuracy with visual input (Tactile→Visual). Model fit analyses (of the rotation angle effect on RTs) also suggested different patterns of learning with tactile and visual input. In “Session 1”, the RTs fitted the rotation angles similarly well for both types of perceptual responses. In “Session 2”, however, trend lines in the fitting analyses changed markedly in the switch and tactile no-switch conditions. These results suggest that MR with 3D objects is not necessarily a perceptually equivalent process. Specialization (and priming) in the exploration strategies (i.e., speed-accuracy trade-offs) might, however, be the main factor at play in these results, rather than MR differences in and of themselves.
Keywords
Mental rotation, Vision, Touch, Learning, Transfer-of-learning