Using Item Response Theory to Measure Gender and Racial Bias of a BERT-based Automated English Speech Assessment System

Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 2022

Abstract
Recent advances in natural language processing and transformer-based models have made it easier to implement accurate, automated English speech assessments. Yet, without careful examination, applications of these models may exacerbate social prejudices based on gender and race. This study addresses the need to examine potential biases of transformer-based models in the context of automated English speech assessment. For this purpose, we developed a BERT-based automated speech assessment system and investigated gender and racial bias in examinees’ automated scores. Gender and racial bias were measured by examining differential item functioning (DIF) within an item response theory framework. Preliminary results, which focused on a single verbal-response item, showed no statistically significant DIF based on gender or race for automated scores.
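The abstract names DIF analysis under an IRT framework but gives no procedural detail. As an illustration only, the sketch below screens a single dichotomous item for DIF using the logistic-regression method of Swaminathan and Rogers (1990), a common stand-in for IRT-based DIF testing; the data, the column names (total, group, resp), and the simulation are all hypothetical and are not drawn from the paper.

```python
# Hypothetical sketch of DIF screening via logistic regression
# (Swaminathan & Rogers, 1990) -- an illustration only, not the
# authors' IRT-based procedure. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "total": rng.normal(0, 1, n),    # matching criterion (e.g., total score)
    "group": rng.integers(0, 2, n),  # focal (1) vs. reference (0) group
})
# Simulate a DIF-free dichotomous item: response depends on ability only.
p = 1 / (1 + np.exp(-(0.8 + 1.2 * df["total"])))
df["resp"] = rng.binomial(1, p)

# Nested models: ability only vs. ability + group + interaction.
m0 = smf.logit("resp ~ total", data=df).fit(disp=0)
m1 = smf.logit("resp ~ total + group + total:group", data=df).fit(disp=0)

# Likelihood-ratio test with 2 df; a small p-value would flag
# uniform (group) or nonuniform (total:group) DIF for this item.
lr = 2 * (m1.llf - m0.llf)
print(f"LR chi2(2) = {lr:.2f}, p = {chi2.sf(lr, df=2):.3f}")
```

A nonsignificant likelihood-ratio test, as in the paper's preliminary single-item result, indicates that group membership adds no explanatory power beyond the matching criterion.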
Keywords
item response theory, gender, racial bias, assessment, BERT-based