PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of LLMs
CoRR (2024)
Abstract
While large language models (LLMs) have demonstrated considerable
capabilities across various natural language tasks, they often fall short of
the performance achieved by domain-specific state-of-the-art models. One
potential approach to enhance domain-specific capabilities of LLMs involves
fine-tuning them using corresponding datasets. However, this method can be both
resource and time-intensive, and not applicable to closed-source commercial
LLMs. In this paper, we propose Preference Adaptation for Enhancing
Domain-specific Abilities of LLMs (PANDA), a method designed to augment the
domain-specific capabilities of LLMs by leveraging insights from the response
preference of expert models without requiring fine-tuning. Our experimental
results reveal that PANDA significantly enhances the domain-specific ability of
LLMs on text classification and interactive decision tasks. Moreover, the LLM
with PANDA even outperforms the expert model it learns from on 4 tasks of
ScienceWorld. This finding highlights the potential of exploring tuning-free
approaches to achieve weak-to-strong generalization.
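The abstract does not detail PANDA's procedure; as a purely illustrative assumption, tuning-free adaptation to an expert's response preference might look like ranking a frozen LLM's candidate responses by an expert preference signal. All names and the scoring logic below are hypothetical, not the paper's method:

```python
# Hypothetical sketch of tuning-free preference adaptation. The actual
# PANDA procedure is not described in this abstract; every function here
# is an illustrative stand-in.

def expert_preference_score(response: str) -> float:
    # Stand-in for an expert model's preference signal over responses;
    # here we simply prefer shorter, more direct answers.
    return -len(response)

def generate_candidates(prompt: str) -> list[str]:
    # Stand-in for sampling multiple candidate responses from an LLM.
    return [f"{prompt} -> answer A", f"{prompt} -> longer detailed answer B"]

def select_response(prompt: str) -> str:
    # Rank the frozen LLM's candidates by the expert's preference and
    # return the top one; no gradient update touches either model.
    candidates = generate_candidates(prompt)
    return max(candidates, key=expert_preference_score)
```

The key property this sketch shares with the abstract's claim is that neither model is fine-tuned: the expert only supplies a preference signal at inference time.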