Adaptive Memory Replay for Continual Learning
arXiv (2024)
Abstract
Foundation models (FMs) have become the hallmark of modern AI; however, they
are trained on massive data, making training financially expensive. Updating
FMs as new data becomes available is important, but it can lead to
'catastrophic forgetting', where models underperform on tasks related to data
sub-populations observed too long ago. This continual learning (CL) phenomenon
has been extensively studied, but primarily in a setting where only a small
amount of past data can be stored. We advocate for the paradigm where memory is
abundant, allowing us to keep all previous data, but computational resources
are limited. In this setting, traditional replay-based CL approaches are
outperformed by a simple baseline that replays past data selected uniformly at
random, indicating that this setting necessitates a new approach. We address
this by introducing a framework of adaptive memory replay for continual
learning, where sampling of past data is phrased as a multi-armed bandit
problem. We utilize Boltzmann sampling to derive a method that dynamically
selects past data for training conditioned on the current task, assuming full
data access and emphasizing training efficiency. Through extensive evaluations
on both vision and language pre-training tasks, we demonstrate the
effectiveness of our approach, which maintains high performance while reducing
forgetting by up to 10%.