More Than Catastrophic Forgetting: Integrating General Capabilities For Domain-Specific LLMs
CoRR (2024)
Abstract
The performance of Large Language Models (LLMs) on general tasks decreases after
they are fine-tuned on domain-specific tasks, a phenomenon known as
Catastrophic Forgetting (CF). However, this paper presents a further challenge
for the real-world application of domain-specific LLMs beyond CF, called General
Capabilities Integration (GCI), which necessitates the integration of both the
general capabilities and domain knowledge within a single instance. The
objective of GCI is not merely to retain previously acquired general
capabilities alongside new domain knowledge, but to harmonize and utilize both
sets of skills in a cohesive manner to enhance performance on domain-specific
tasks. Taking the legal domain as an example, we carefully design three groups of
practical training and testing tasks, and construct the
corresponding datasets. To better incorporate general capabilities across
domain-specific scenarios, we introduce ALoRA, which adds a multi-head
attention module on top of LoRA, enabling direct information transfer from
preceding tokens to the current one. This enhancement allows the
representation to dynamically switch between domain-specific knowledge and
general capabilities according to the attention weights. Extensive experiments
on the proposed tasks demonstrate the significance of our setting and the
effectiveness of our method.
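
The abstract only sketches ALoRA at a high level (a multi-head attention module layered on top of LoRA so the current token can draw information directly from preceding tokens). The following is a minimal PyTorch sketch of one plausible reading of that description, not the authors' implementation: the module name ALoRALinear, the dimensions, and the way the attention output is merged with the LoRA branch are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): a LoRA-style adapter whose output is
# refined by causal multi-head attention over preceding token representations.
import torch
import torch.nn as nn


class ALoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, num_heads=4, alpha=16.0):
        super().__init__()
        # Frozen pretrained projection.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False
        # Standard LoRA branch: W x + (alpha / r) * B A x, with B initialized to zero.
        self.lora_A = nn.Linear(in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)
        self.scaling = alpha / rank
        # Multi-head attention so the current position can draw directly on
        # representations of preceding tokens (assumed merge strategy).
        self.attn = nn.MultiheadAttention(out_features, num_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, in_features)
        base_out = self.base(x)
        lora_out = self.lora_B(self.lora_A(x)) * self.scaling
        h = base_out + lora_out
        # Causal mask: True entries are blocked, so each token attends only to
        # itself and earlier positions.
        seq_len = x.size(1)
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
            diagonal=1,
        )
        attn_out, _ = self.attn(h, h, h, attn_mask=causal_mask)
        # Residual combination lets the representation shift between the
        # domain-adapted signal and information carried by earlier tokens.
        return h + attn_out


# Usage with hypothetical sizes.
layer = ALoRALinear(in_features=768, out_features=768)
hidden = torch.randn(2, 16, 768)
print(layer(hidden).shape)  # torch.Size([2, 16, 768])
```

This sketch keeps the pretrained weights frozen and trains only the LoRA matrices and the attention module, which matches the general LoRA setup the abstract builds on; how the paper actually gates between the two branches is not specified here.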