Predicting visual memory across images and within individuals

COGNITION (2022)

Abstract
We only remember a fraction of what we see, including images that are highly memorable and those that we encounter during highly attentive states. However, most models of human memory disregard both an image's memorability and an individual's fluctuating attentional states. Here, we build the first model of memory synthesizing these two disparate factors to predict subsequent image recognition. We combine memorability scores of 1100 images (Experiment 1, n = 706) and attentional state indexed by response time on a continuous performance task (Experiments 2 and 3, n = 57 total). Image memorability and sustained attentional state explained significant variance in image memory, and a joint model of memory including both factors outperformed models including either factor alone. Furthermore, models including both factors successfully predicted memory in an out-of-sample group. Thus, building models based on individual- and image-specific factors allows for directed forecasting of our memories.

Significance statement: Although memory is a fundamental cognitive process, much of the time memory failures cannot be predicted until it is too late. However, in this study, we show that much of memory is surprisingly predetermined ahead of time, by factors shared across the population and highly specific to each individual. Specifically, we build a new multidimensional model that predicts memory based just on the images a person sees and when they see them. This research synthesizes findings from disparate domains spanning computer vision, attention, and memory into a predictive model. These findings have resounding implications for domains such as education, business, and marketing, where it is a top priority to predict (and even manipulate) what information people will remember.
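To make the joint-model idea concrete, the sketch below fits single-factor and two-factor logistic regressions predicting trial-level recognition from an image's memorability score and an attention index derived from continuous performance task response time. This is only an illustration of the general approach described in the abstract, not the authors' analysis: the data are synthetic and the variable names (memorability, rt_z, remembered) are hypothetical.

```python
# Illustrative sketch (not the authors' code): compare a joint recognition-memory
# model combining image memorability and attentional state (indexed by response
# time on a continuous performance task) against single-factor models.
# All data are synthetic; variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_trials = 1100

# Synthetic predictors: per-image memorability score and per-trial response time
memorability = rng.uniform(0.3, 0.95, n_trials)   # memorability score per image
rt = rng.normal(0.55, 0.10, n_trials)             # CPT response time in seconds
rt_z = (rt - rt.mean()) / rt.std()                # standardized attention index

# Synthetic recognition outcome in which both factors contribute (illustration only)
logit = -1.0 + 3.0 * memorability - 0.5 * rt_z
remembered = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit_logit(predictors):
    """Fit a logistic regression of recognition on the given predictor(s)."""
    return sm.Logit(remembered, sm.add_constant(predictors)).fit(disp=0)

m_mem = fit_logit(memorability)                              # memorability only
m_att = fit_logit(rt_z)                                      # attentional state only
m_joint = fit_logit(np.column_stack([memorability, rt_z]))   # joint model

# Compare model fit (higher pseudo R^2, lower AIC = better)
for name, model in [("memorability", m_mem), ("attention", m_att), ("joint", m_joint)]:
    print(f"{name:>12}: pseudo R2 = {model.prsquared:.3f}, AIC = {model.aic:.1f}")
```

Under this framing, the paper's central claim corresponds to the joint model fitting better than either single-factor model and generalizing to held-out participants; the exact modeling procedure used in the study is described in the full paper, not here.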
Keywords
Memorability, Sustained attention, Recognition memory