OpenBias: Open-set Bias Detection in Text-to-Image Generative Models
CoRR (2024)
Abstract
Text-to-image generative models are becoming increasingly popular and
accessible to the general public. As these models see large-scale deployments,
it is necessary to investigate their safety and fairness in depth so that they
do not disseminate or perpetuate biases. However, existing works focus on
detecting closed sets of biases defined a priori, limiting the studies to
well-known concepts. In this paper, we tackle the challenge of open-set bias
detection in text-to-image generative models, presenting OpenBias, a new
pipeline that identifies and quantifies the severity of biases agnostically,
without access to any precompiled set. OpenBias has three stages. In the first
phase, we leverage a Large Language Model (LLM) to propose biases given a set
of captions. Secondly, the target generative model produces images using the
same set of captions. Lastly, a Vision Question Answering model recognizes the
presence and extent of the previously proposed biases. We study the behavior of
Stable Diffusion 1.5, 2, and XL, highlighting new biases never investigated
before.
before. Via quantitative experiments, we demonstrate that OpenBias agrees with
current closed-set bias detection methods and human judgement.
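The three-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: every function here is a hypothetical stub standing in for the real components (an LLM for bias proposal, a text-to-image model such as Stable Diffusion for generation, and a VQA model for assessment), and all names and data shapes are assumptions.

```python
# Hypothetical sketch of the OpenBias three-stage pipeline.
# Real implementations would call an LLM, a diffusion model, and a VQA model;
# here each stage is a deterministic stub so the control flow is clear.

def propose_biases(captions):
    # Stage 1 (stub): an LLM would read each caption and propose candidate
    # biases as (bias name, probing question, candidate answer classes).
    proposals = []
    for caption in captions:
        if "doctor" in caption:
            proposals.append({
                "caption": caption,
                "bias": "gender",
                "question": "What is the gender of the doctor?",
                "classes": ["male", "female"],
            })
    return proposals

def generate_images(captions, n_per_caption=2):
    # Stage 2 (stub): the target generative model would synthesize images
    # from the same captions; we return placeholder identifiers.
    return {c: [f"image::{c}::{i}" for i in range(n_per_caption)]
            for c in captions}

def answer_vqa(image, question, classes):
    # Stage 3 (stub): a VQA model would pick one candidate class per image;
    # this stub always returns the first class.
    return classes[0]

def quantify_bias(captions):
    # Run all three stages and tally the VQA answers per proposed bias:
    # a skewed class distribution indicates the bias is present.
    proposals = propose_biases(captions)
    images = generate_images(captions)
    counts = {}
    for p in proposals:
        tally = {cls: 0 for cls in p["classes"]}
        for img in images[p["caption"]]:
            tally[answer_vqa(img, p["question"], p["classes"])] += 1
        counts[(p["caption"], p["bias"])] = tally
    return counts
```

Because the pipeline only consumes captions and treats the three models as black boxes, no precompiled list of biases is needed: whatever the proposal stage surfaces is what gets measured downstream.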