Engram Size Varies With Learning And Reflects Memory Content And Precision

JOURNAL OF NEUROSCIENCE (2021)

Abstract
Memories are rarely acquired under ideal conditions, rendering them vulnerable to profound omissions, errors, and ambiguities. Consistent with this, recent work using context fear conditioning has shown that memories formed after inadequate learning time display a variety of maladaptive properties, including overgeneralization to similar contexts. However, the neuronal basis of such poor learning and memory imprecision remains unknown. Using c-fos to track neuronal activity in male mice, we examined how these learning-dependent changes in context fear memory precision are encoded in hippocampal ensembles. We found that the total number of c-fos-encoding cells did not correspond with learning history but instead more closely reflected the length of the session immediately preceding c-fos measurement. However, using a c-fos-driven tagging method (TRAP2 mouse line), we found that the degree of learning and memory specificity corresponded with neuronal activity in a subset of dentate gyrus cells that were active during both learning and recall. Comprehensive memories acquired after longer learning intervals were associated with more double-labeled cells. These were preferentially reactivated in the conditioning context compared with a similar context, paralleling behavioral discrimination. Conversely, impoverished memories acquired after shorter learning intervals were associated with fewer double-labeled cells. These were reactivated equally in both contexts, corresponding with overgeneralization. Together, these findings provide two surprising conclusions. First, engram size varies with learning. Second, larger engrams support better neuronal and behavioral discrimination. These findings are incorporated into a model that describes how neuronal activity is influenced by previous learning and present experience, thus driving behavior.
Keywords
c-fos, context, engram, fear conditioning, hippocampus, memory