UNQOVERing Stereotyping Biases via Underspecified Questions

Findings of EMNLP, pp. 3475-3489, 2020.


Abstract:

While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework to probe and quantify biases through underspecified questions. We show that a naive use of model scores can lead to incorrect bias estimates ...
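The abstract's point about naive use of model scores can be illustrated with a small sketch: before comparing how strongly a QA model prefers one subject over another for a stereotyped attribute, one can average over the two subject orderings in the context and over a question/negated-question pair, so that positional preferences and subject-level priors cancel. The template, question wording, and `score` interface below are hypothetical stand-ins, not the paper's actual implementation.

```python
def comparative_bias(score, subj1, subj2, question, negated_question, template):
    """Compare a model's preference for subj1 vs. subj2 on an underspecified
    question, averaging out two confounds a naive score comparison ignores.

    `score(context, question, subject)` is a hypothetical callable returning
    the model's answer score for `subject`; `template` has `{a}`/`{b}` slots.
    """
    def avg_score(subject, q):
        # Average over both subject orderings to cancel positional preference.
        ctx_ab = template.format(a=subj1, b=subj2)
        ctx_ba = template.format(a=subj2, b=subj1)
        return 0.5 * (score(ctx_ab, q, subject) + score(ctx_ba, q, subject))

    def bias_toward(subject):
        # Subtract the negated question to cancel attribute-independent
        # priors for the subject (e.g. a name the model simply likes).
        return 0.5 * (avg_score(subject, question)
                      - avg_score(subject, negated_question))

    # Positive: the model leans toward subj1 for the attribute; negative: subj2.
    return 0.5 * (bias_toward(subj1) - bias_toward(subj2))
```

By construction the measure is antisymmetric (swapping the two subjects flips its sign) and a score function indifferent to the subject yields exactly zero, which is what makes it usable as a bias estimate.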
