Is attention required for ICL? Exploring the Relationship Between Model Architecture and In-Context Learning Ability
arXiv (2023)
Abstract
What is the relationship between model architecture and the ability to perform in-context learning? In this empirical study, we take the first steps toward answering this question. We evaluate thirteen model architectures capable of causal language modeling across a suite of synthetic in-context learning tasks. These selected architectures represent a broad range of paradigms, including recurrent and convolution-based neural networks, transformers, state space model-inspired architectures, and other emerging attention alternatives. We discover that all the considered architectures can perform in-context learning under a wider range of conditions than previously documented. Additionally, we observe stark differences in statistical efficiency and consistency when varying the number of in-context examples and task difficulty. We also measure each architecture's predisposition towards in-context learning when presented with the option to memorize rather than leverage in-context examples. Finally, and somewhat surprisingly, we find that several attention alternatives are sometimes competitive with, or even better, in-context learners than transformers. However, no single architecture demonstrates consistency across all tasks, with performance either plateauing or declining when confronted with a significantly larger number of in-context examples than those encountered during gradient-based training.
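The abstract does not spell out the task format. As a rough illustration only, synthetic in-context learning suites of this kind (e.g., in-context linear regression) are commonly built by sampling a fresh function per episode and asking the model to predict the label of a query input from the preceding example pairs. The sketch below is an assumption about that general setup, not the paper's exact protocol; `make_linear_regression_episode` is a hypothetical helper.

```python
import numpy as np

def make_linear_regression_episode(n_examples: int, dim: int, rng: np.random.Generator):
    """Build one synthetic in-context linear-regression episode.

    The model would see the sequence (x_1, y_1, ..., x_k, y_k, x_query) and be
    asked to predict y_query = w . x_query, where w is drawn fresh per episode.
    """
    w = rng.standard_normal(dim)                     # task-specific weights, resampled each episode
    xs = rng.standard_normal((n_examples + 1, dim))  # last row serves as the query input
    ys = xs @ w
    context_x, context_y = xs[:-1], ys[:-1]
    query_x, query_y = xs[-1], ys[-1]
    return context_x, context_y, query_x, query_y

# Example: a 10-shot episode in 5 dimensions.
rng = np.random.default_rng(0)
cx, cy, qx, qy = make_linear_regression_episode(n_examples=10, dim=5, rng=rng)
print(cx.shape, cy.shape, qx.shape, float(qy))
```

Varying `n_examples` and `dim` corresponds to the abstract's manipulation of the number of in-context examples and task difficulty, under the stated assumption about the task family.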