Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation
arXiv (2024)
Abstract
Conditional independence (CI) constraints are critical for defining and
evaluating fairness in machine learning, as well as for learning unconfounded
or causal representations. Traditional methods for ensuring fairness either
blindly learn invariant features with respect to a protected variable (e.g.,
race when classifying sex from face images) or enforce CI relative to the
protected attribute only on the model output (e.g., the sex label). Neither of
these methods is effective in enforcing CI in high-dimensional feature spaces.
In this paper, we focus on a nascent approach characterizing the CI constraint
in terms of two Jensen-Shannon divergence terms, and we extend it to
high-dimensional feature spaces using a novel dynamic sampling strategy. In
doing so, we introduce a new training paradigm that can be applied to any
encoder architecture. We are able to enforce conditional independence of the
diffusion autoencoder latent representation with respect to any protected
attribute under the equalized odds constraint and show that this approach
enables causal image generation with controllable latent spaces. Our
experimental results demonstrate that our approach can achieve high accuracy on
downstream tasks while upholding equality of odds.
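The abstract characterizes the CI constraint through two Jensen-Shannon divergence terms. As a purely illustrative sketch (not the authors' implementation), the snippet below computes the JS divergence between two discrete distributions, the symmetric, bounded quantity such a penalty would be built from:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2.
    Symmetric and bounded above by log(2) in nats.
    """
    p = np.asarray(p, dtype=float) + eps  # avoid log(0)
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()  # renormalize after smoothing
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

In a fairness penalty of this kind, `p` and `q` would be (estimates of) the representation's distribution conditioned on different values of the protected attribute; driving the divergence to zero pushes the two conditionals to match.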