Adaptive behavior with stable synapses
arXiv (2024)
Abstract
Behavioral changes in animals and humans, in response to an error or a
verbal instruction, can be extremely rapid. In machine learning and
reinforcement learning, improvements in behavioral performance are usually
attributed to synaptic plasticity and, more generally, to changes and
optimization of network parameters. However, such rapid changes are not
consistent with the timescales of synaptic plasticity, suggesting that the
mechanism responsible could instead be a dynamical network reconfiguration.
In the last few years, similar capabilities have been observed in
transformers, a foundational architecture in machine learning that is widely
used in applications such as natural language and image processing.
Transformers are capable of in-context learning: the ability to adapt and
acquire new information dynamically within the context of the task or
environment they are currently engaged in, without significant changes to
their underlying parameters. Building on the idea that some property unique
to transformers enables the emergence of in-context learning, we claim that
it could also be supported by input segregation and dendritic amplification,
features extensively observed in biological networks. We propose an
architecture composed of gain-modulated recurrent networks that excels at
in-context learning, showing abilities inaccessible to standard networks. We
argue that such a framework can describe the psychometrics of
context-dependent tasks in humans and other species, resolving the
inconsistency with plasticity timescales. When the context changes, the
network is dynamically reconfigured, and the predicted output undergoes
dynamic updates until it aligns with the information embedded in the context.
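The core idea — adaptation through dynamic reconfiguration rather than weight
changes — can be illustrated with a minimal sketch. The snippet below is not
the paper's model; it only shows, under assumed fixed random weights, how a
context-dependent gain vector multiplying a recurrent unit's net input steers
the same network (same weights, same inputs) into different dynamical regimes.
The two "contexts" here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_in = 16, 4
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # recurrent weights, never updated
W_in = rng.normal(size=(n, n_in))                    # input weights, never updated

def run_rnn(x_seq, gain):
    """Iterate h <- tanh(gain * (W h + W_in x)); gain plays the role of a
    context signal that reconfigures the dynamics without touching W."""
    h = np.zeros(n)
    for x in x_seq:
        h = np.tanh(gain * (W @ h + W_in @ x))
    return h

x_seq = rng.normal(size=(10, n_in))
gain_ctx_a = np.ones(n)        # hypothetical context A: baseline gain
gain_ctx_b = 2.0 * np.ones(n)  # hypothetical context B: amplified gain

h_a = run_rnn(x_seq, gain_ctx_a)
h_b = run_rnn(x_seq, gain_ctx_b)
# Identical inputs and weights, different gains -> different trajectories.
print(np.linalg.norm(h_a - h_b))
```

Because only the gain vector differs between runs, any behavioral change it
produces is instantaneous at the network level, consistent with the abstract's
point that no slow synaptic update is required.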