Human pointing errors suggest a flattened, task-dependent representation of space

bioRxiv (2019)

Abstract
People are able to keep track of objects as they navigate through space, even when objects are out of sight. This requires some kind of representation of the scene and of the observer's location. We tested the accuracy and reliability of observers' estimates of the visual direction of previously-viewed targets. Participants viewed 4 objects from one location, with binocular vision and small head movements giving information about the 3D locations of the objects. Without any further sight of the targets, participants walked to another location and pointed towards them. All the conditions were tested in an immersive virtual environment and some were also carried out in a real scene. Participants made large, consistent pointing errors that are poorly explained by any single 3D representation. Instead, a flattened representation of space that is dependent on the structure of the environment at the time of pointing provides a good account of participants' errors. This suggests that the mechanisms for updating visual direction of unseen targets are not based on a stable 3D model of the scene, even a distorted one.