Multilingual large language models leak human stereotypes across language boundaries
arXiv (2023)
Abstract
Multilingual large language models have become increasingly popular for their
proficiency in processing and generating text across various languages.
Previous research has shown that the presence of stereotypes and biases in
monolingual large language models can be attributed to the nature of their
training data, which is collected from humans and reflects societal biases.
Multilingual language models undergo the same training procedure as monolingual
ones, albeit with training data sourced from various languages. This raises the
question: do stereotypes present in one social context leak across languages
within the model? In our work, we first define the term “stereotype leakage”
and propose a framework for its measurement. With this framework, we
investigate how stereotypical associations leak across four languages: English,
Russian, Chinese, and Hindi. To quantify the stereotype leakage, we employ an
approach from social psychology, measuring stereotypes via group-trait
associations. We evaluate human stereotypes and stereotypical associations
manifested in multilingual large language models such as mBERT, mT5, and
GPT-3.5. Our findings show a noticeable leakage of positive, negative, and
non-polar associations across all languages. Notably, Hindi within multilingual
models appears to be the most susceptible to influence from other languages,
while Chinese is the least. Additionally, GPT-3.5 exhibits closer alignment
with human stereotype scores than the other models. WARNING: This paper
contains model outputs which could be offensive in nature.
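The abstract mentions measuring stereotypes via group-trait associations, a common approach in social psychology. As a minimal illustrative sketch (not the paper's actual metric), one way to quantify such an association from a language model is to compare the probability a model assigns to a trait word in a group context against a neutral baseline context; the function name and probability values below are hypothetical:

```python
from math import log

def association_score(p_trait_given_group: float, p_trait_baseline: float) -> float:
    """Log-odds-style association: how much more likely the model is to
    produce a trait word in a group context than in a neutral context.
    Positive values suggest a stereotypical group-trait association."""
    return log(p_trait_given_group / p_trait_baseline)

# Hypothetical model probabilities, e.g. from a masked-LM fill such as
# P("warm" | "The nurses are [MASK].") vs. P("warm" | "The people are [MASK].")
score = association_score(0.08, 0.02)
print(score > 0)  # a positive score indicates the trait is favored in the group context
```

Comparing such scores for the same group-trait pair across a model's different input languages would be one way to probe whether an association "leaks" from one language to another.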