What do Bias Measures Measure?

Sunipa Dev, Emily Sheng, Jieyu Zhao, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Nanyun Peng, Kai-Wei Chang

arXiv (2021)

Abstract
Natural Language Processing (NLP) models propagate social biases about protected attributes such as gender, race, and nationality. To create interventions and mitigate these biases and associated harms, it is vital to be able to detect and measure such biases. While many existing works propose bias evaluation methodologies for different tasks, there remains a need to cohesively understand what biases and normative harms each of these measures captures and how different measures compare. To address this gap, this work presents a comprehensive survey of existing bias measures in NLP as a function of the associated NLP tasks, metrics, datasets, and social biases and corresponding harms. This survey also organizes metrics into different categories to present advantages and disadvantages. Finally, we propose a documentation standard for bias measures to aid their development, categorization, and appropriate usage.
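To make concrete what a bias measure can look like in practice, here is a minimal, illustrative sketch (not from the paper) of a common template-based approach: instantiate sentence templates with terms from two demographic groups and compare a model's mean scores. The `score` function below is a hypothetical stand-in; a real measure would query an NLP model such as a sentiment or toxicity classifier.

```python
# Toy template-based bias measure (illustrative only; not the paper's method).
from statistics import mean

def score(sentence: str) -> float:
    # Hypothetical stand-in scorer: in practice, replace with a model's
    # prediction (e.g., sentiment probability). Here: frequency of "great".
    words = sentence.lower().split()
    return words.count("great") / len(words)

def template_gap(templates, group_a, group_b):
    """Mean score difference between sentences instantiated with
    group_a terms vs. group_b terms; 0.0 indicates no measured gap."""
    a_scores = [score(t.format(term)) for t in templates for term in group_a]
    b_scores = [score(t.format(term)) for t in templates for term in group_b]
    return mean(a_scores) - mean(b_scores)

templates = ["The {} engineer did a great job.", "The {} nurse was here."]
gap = template_gap(templates, ["male"], ["female"])
print(f"measured gap: {gap:.3f}")
```

A gap of zero under this toy scorer reflects only that the substituted terms do not change the surface score; with a learned model, a nonzero gap would flag a group-dependent difference, which is exactly the kind of signal the surveyed measures aim to capture and whose normative interpretation the paper argues must be documented.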