Flames: Benchmarking Value Alignment of LLMs in Chinese
arXiv (2023)
Abstract
The widespread adoption of large language models (LLMs) across various
regions underscores the urgent need to evaluate their alignment with human
values. Current benchmarks, however, fall short of effectively uncovering
safety vulnerabilities in LLMs. Although numerous models achieve high scores
and 'top the chart' in these evaluations, a significant gap remains between
such results and LLMs' deeper alignment with human values and genuine
harmlessness.
To this end, this paper proposes a value alignment benchmark named Flames,
which encompasses both common harmlessness principles and a unique morality
dimension that integrates specific Chinese values such as harmony. Accordingly,
we carefully design adversarial prompts that incorporate complex scenarios and
jailbreaking methods, mostly with implicit malice. By prompting 17 mainstream
LLMs, we obtain model responses and rigorously annotate them for detailed
dimensions. We also develop a lightweight specified scorer capable of scoring
LLMs across multiple dimensions to efficiently evaluate new models on the
benchmark. The complexity of Flames has far exceeded existing benchmarks,
setting a new challenge for contemporary LLMs and highlighting the need for
further alignment of LLMs. Our benchmark is publicly available at
https://github.com/AIFlames/Flames.