Auditing Large Language Models for Enhanced Text-Based Stereotype Detection and Probing-Based Bias Evaluation
CoRR(2024)
摘要
Recent advancements in Large Language Models (LLMs) have significantly
increased their presence in human-facing Artificial Intelligence (AI)
applications. However, LLMs could reproduce and even exacerbate stereotypical
outputs from training data. This work introduces the Multi-Grain Stereotype
(MGS) dataset, encompassing 51,867 instances across gender, race, profession,
religion, and stereotypical text, collected by fusing multiple previously
publicly available stereotype detection datasets. We explore different machine
learning approaches aimed at establishing baselines for stereotype detection,
and fine-tune several language models of various architectures and model sizes,
presenting in this work a series of stereotypes classifier models for English
text trained on MGS. To understand whether our stereotype detectors capture
relevant features (aligning with human common sense) we utilise a variety of
explanainable AI tools, including SHAP, LIME, and BertViz, and analyse a series
of example cases discussing the results. Finally, we develop a series of
stereotype elicitation prompts and evaluate the presence of stereotypes in text
generation tasks with popular LLMs, using one of our best performing previously
presented stereotypes detectors. Our experiments yielded several key findings:
i) Training stereotype detectors in a multi-dimension setting yields better
results than training multiple single-dimension classifiers.ii) The integrated
MGS Dataset enhances both the in-dataset and cross-dataset generalisation
ability of stereotype detectors compared to using the datasets separately. iii)
There is a reduction in stereotypes in the content generated by GPT Family LLMs
with newer versions.
更多查看译文
AI 理解论文
溯源树
样例
生成溯源树,研究论文发展脉络
Chat Paper
正在生成论文摘要