FuseChat: Knowledge Fusion of Chat Models
CoRR (2024)
Abstract
While training large language models (LLMs) from scratch can indeed lead to
models with distinct capabilities and strengths, this approach incurs
substantial costs and may result in redundant capabilities. An
alternative strategy is to combine existing LLMs into a more robust LLM,
thereby diminishing the necessity for expensive pre-training. However, due to
the diverse architectures of LLMs, direct parameter blending proves
infeasible. Recently, FuseLLM introduced the concept of knowledge
fusion to transfer the collective knowledge of multiple structurally varied
LLMs into a target LLM through lightweight continual training. In this report,
we extend the scalability and flexibility of the FuseLLM framework to
realize the fusion of chat LLMs, resulting in FuseChat.
FuseChat comprises two main stages. Firstly, we undertake knowledge
fusion for structurally and scale-varied source LLMs to derive multiple target
LLMs of identical structure and size via lightweight fine-tuning. Then, these
target LLMs are merged within the parameter space, wherein we propose a novel
method for determining the merging weights based on the variation ratio of
parameter matrices before and after fine-tuning. We validate our approach using
three prominent chat LLMs with diverse architectures and scales, namely
NH2-Mixtral-8x7B, NH2-Solar-10.7B, and
OpenChat-3.5-7B. Experimental results spanning various chat domains
demonstrate the superiority of FuseChat-7B across a broad
spectrum of chat LLMs at 7B and 34B scales, even surpassing GPT-3.5 (March) and
approaching Mixtral-8x7B-Instruct. Our code, model weights, and data are openly
accessible.
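
For intuition on the second stage, the following is a minimal sketch of parameter-space merging in which each target LLM's merging weight is derived from how much its parameter matrices changed during fine-tuning. The exact ratio definition and the helper names (`variation_ratio`, `merge_models`) are illustrative assumptions, not the paper's released implementation.

```python
import torch

def variation_ratio(base: torch.Tensor, tuned: torch.Tensor) -> float:
    """Relative magnitude of the fine-tuning update to one parameter matrix
    (illustrative definition: mean squared change normalized by the base norm)."""
    return ((tuned - base) ** 2).mean().item() / ((base ** 2).mean().item() + 1e-12)

def merge_models(base_state: dict, tuned_states: list) -> dict:
    """Merge several same-architecture fine-tuned models, matrix by matrix,
    weighting each model by its normalized variation ratio."""
    merged = {}
    for name, base_param in base_state.items():
        ratios = torch.tensor(
            [variation_ratio(base_param, s[name]) for s in tuned_states]
        )
        if ratios.sum() > 0:
            weights = ratios / ratios.sum()
        else:
            # Fall back to uniform averaging if no matrix changed at all.
            weights = torch.full_like(ratios, 1.0 / len(tuned_states))
        merged[name] = sum(w * s[name] for w, s in zip(weights, tuned_states))
    return merged

# Hypothetical usage: merge target LLMs fine-tuned from a shared pivot model.
# merged_state = merge_models(pivot.state_dict(),
#                             [m.state_dict() for m in target_models])
```

Per-matrix weights (rather than a single global coefficient) let parameters that were updated more heavily during fusion fine-tuning contribute more to the merged model.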