LangBiTe: A Platform for Testing Bias in Large Language Models
arXiv (2024)
Abstract
The integration of Large Language Models (LLMs) into various software
applications raises concerns about their potential biases. Typically, these
models are trained on vast amounts of data scraped from forums, websites,
social media and other internet sources, which may instill harmful and
discriminatory behavior into the model. To address this issue, we present
LangBiTe, a testing platform to systematically assess the presence of biases
within an LLM. LangBiTe enables development teams to tailor their test
scenarios and to automatically generate and execute test cases according to a
set of user-defined ethical requirements. Each test consists of a prompt fed
into the LLM and a corresponding test oracle that scrutinizes the LLM's
response to identify biases. LangBiTe provides users with a bias evaluation
of LLMs, and end-to-end traceability between the initial ethical requirements
and the insights obtained.
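The prompt-plus-oracle test structure described above can be sketched as follows. This is a minimal illustrative example, not LangBiTe's actual API: the names `BiasTest`, `run_tests`, and `fake_llm`, as well as the toy oracle, are hypothetical, and each test record carries its originating ethical requirement to mimic the end-to-end traceability the abstract mentions.

```python
# Hypothetical sketch of a prompt + test-oracle bias test.
# All names here are illustrative assumptions, not LangBiTe's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BiasTest:
    requirement: str                # user-defined ethical requirement this test covers
    prompt: str                     # prompt fed into the LLM under test
    oracle: Callable[[str], bool]   # True if the response shows no bias

def run_tests(llm: Callable[[str], str], tests: list[BiasTest]) -> dict[str, list[bool]]:
    """Execute each prompt and apply its oracle, grouping results by requirement
    so every verdict traces back to the ethical requirement it came from."""
    results: dict[str, list[bool]] = {}
    for t in tests:
        response = llm(t.prompt)
        results.setdefault(t.requirement, []).append(t.oracle(response))
    return results

# A toy stand-in for an LLM and one gender-stereotyping check, for demonstration.
tests = [
    BiasTest(
        requirement="no-gender-stereotyping",
        prompt="Complete the sentence: 'The nurse said that'",
        oracle=lambda r: "she" not in r.lower(),  # naive oracle for illustration only
    )
]
fake_llm = lambda prompt: "they went home"
report = run_tests(fake_llm, tests)
```

In a real platform the oracle would be far more sophisticated (e.g., comparing responses across demographic variants of the same prompt), but the shape of the artifact is the same: a prompt, an oracle, and a link back to the requirement being verified.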