Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment
CoRR (2024)
Abstract
Aligning language models (LMs) based on human-annotated preference data is a
crucial step in obtaining practical and performant LM-based systems. However,
multilingual human preference data are difficult to obtain at scale, making it
challenging to extend this framework to diverse languages. In this work, we
evaluate a simple approach for zero-shot cross-lingual alignment, where a
reward model is trained on preference data in one source language and directly
applied to other target languages. On summarization and open-ended dialog
generation, we show that this method is consistently successful under
comprehensive evaluation settings, including human evaluation: cross-lingually
aligned models are preferred by humans over unaligned models on up to >70% of
evaluation instances. We moreover find that a different-language reward model
sometimes yields better aligned models than a same-language reward model. We
also identify best practices when there is no language-specific data for even
supervised finetuning, another component in alignment.
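
Below is a minimal sketch of one way to apply a transferred reward model: best-of-n reranking, where a reward model trained on source-language (e.g., English) preference data scores candidate generations in a target language. It assumes a HuggingFace-style sequence-classification reward model with a single scalar reward head; the model name, prompt, and candidates are hypothetical placeholders, not the paper's checkpoints or data.

```python
# Sketch: zero-shot cross-lingual reward model transfer via best-of-n
# reranking. Assumes a reward model trained on source-language (e.g.,
# English) preference data, loaded as a sequence-classification model
# with a single scalar output. Model name is a hypothetical placeholder.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

RM_NAME = "my-org/english-reward-model"  # hypothetical English-trained RM

tokenizer = AutoTokenizer.from_pretrained(RM_NAME)
reward_model = AutoModelForSequenceClassification.from_pretrained(RM_NAME)
reward_model.eval()


def score(prompt: str, response: str) -> float:
    """Score a (prompt, response) pair with the source-language reward model."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = reward_model(**inputs).logits
    return logits.squeeze().item()  # assumes a single-scalar reward head


def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Zero-shot transfer: rerank target-language candidates with the
    source-language reward model and return the highest-scoring one."""
    return max(candidates, key=lambda c: score(prompt, c))


# Usage: a German prompt and German candidate summaries, scored by the
# English-trained reward model (placeholder text, for illustration only).
prompt_de = "Fasse den folgenden Artikel zusammen: ..."
candidates_de = ["Zusammenfassung A ...", "Zusammenfassung B ..."]
print(best_of_n(prompt_de, candidates_de))
```

The same transferred reward signal could also drive reinforcement-learning-based alignment instead of reranking; this sketch only illustrates the core idea of scoring target-language outputs with a source-language reward model.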