Attentional guidance and match decisions rely on different template information during visual search

crossref(2020)

Abstract
When searching for a target object (e.g., a friend at a party), we engage in a continuous "look-identify" cycle in which we use known features (e.g., hair color) to guide attention and eye gaze toward potential targets and then to decide whether each is indeed the target. Theories of attention refer to the information about the target held in memory as the "target" or "attentional" template and typically characterize it as a single, fixed source of information. However, this notion is challenged by a recent debate over how the target template is adjusted in response to linearly separable distractors (e.g., all distractors are "yellower" than an orange target). While there is agreement that the target representation is shifted away from distractors, some have argued that the shift is "relational" (Becker, 2010), while others have argued it is "optimal" (Navalpakkam & Itti, 2007; Yu & Geng, 2019). Here, we propose a novel resolution to this debate based on evidence that the initial guidance of attention uses a coarse code based on "relational" information, whereas subsequent decisions use an "optimal" representation that maximizes target-to-distractor distinctiveness. We suggest that template information differs in precision when guiding sensory selection and when making identity decisions during visual search (Wolfe, 2020a, 2020b).