ZipIt! Merging Models from Different Tasks without Training
ICLR 2024
Abstract
Typical deep visual recognition models are capable of performing the one task
they were trained on. In this paper, we tackle the extremely difficult problem
of combining distinct models with different initializations, each solving a
separate task, into one multi-task model without any additional training. Prior
work in model merging permutes one model to the space of the other then
averages them together. While this works for models trained on the same task,
we find that this fails to account for the differences in models trained on
disjoint tasks. Thus, we introduce "ZipIt!", a general method for merging two
arbitrary models of the same architecture that incorporates two simple
strategies. First, in order to account for features that aren't shared between
models, we expand the model merging problem to allow for merging features
within each model by defining a general "zip" operation. Second, we add support
for partially zipping the models up until a specified layer, naturally creating
a multi-head model. We find that these two changes combined account for a 20-60%
improvement over prior work, making it more feasible to merge models trained on
disjoint tasks without retraining.
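The "zip" operation described above generalizes permutation-based merging: rather than only pairing each feature of model A with one of model B, features from both models are pooled, and the most correlated pairs (possibly two features from the same model) are averaged together. A minimal sketch of this greedy correlation-based pairing is below; all names and the greedy strategy are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def zip_features(feats_a, feats_b):
    """Illustrative 'zip'-style merge of two models' feature activations.

    feats_a, feats_b: (num_samples, num_features) activation matrices.
    Concatenates the 2d features from both models, greedily pairs the
    most correlated unused features (pairs may land within one model),
    and averages each pair, halving 2d features back down to d.
    """
    feats = np.concatenate([feats_a, feats_b], axis=1)  # (n, 2d)
    corr = np.corrcoef(feats, rowvar=False)             # (2d, 2d)
    np.fill_diagonal(corr, -np.inf)                     # forbid self-pairs
    d = feats_a.shape[1]

    pairs, used = [], set()
    # Walk correlations from highest to lowest, taking a pair whenever
    # both features are still unused, until d pairs are formed.
    for idx in np.argsort(corr, axis=None)[::-1]:
        i, j = np.unravel_index(idx, corr.shape)
        if i == j or i in used or j in used:
            continue
        pairs.append((int(i), int(j)))
        used.update((i, j))
        if len(pairs) == d:
            break

    merged = np.stack(
        [(feats[:, i] + feats[:, j]) / 2 for i, j in pairs], axis=1
    )
    return merged, pairs
```

When model B's features are near-copies of model A's, the greedy step recovers the cross-model pairing that permutation-based methods assume; when a feature has no good counterpart in the other model, it can instead merge with a redundant feature from its own model, which is the case permutation-only merging cannot express.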
Keywords
Model Merging, Mode Connectivity, Classification, Deep Learning