Fair Wasserstein Coresets
CoRR (2023)
Abstract
Data distillation and coresets have emerged as popular approaches to generate
a smaller representative set of samples for downstream learning tasks to handle
large-scale datasets. At the same time, machine learning is being increasingly
applied to decision-making processes at a societal level, making it imperative
for modelers to address inherent biases towards subgroups present in the data.
While current approaches focus on creating fair synthetic representative
samples by optimizing local properties relative to the original samples, their
impact on downstream learning processes has yet to be explored. In this work,
we present fair Wasserstein coresets (FWC), a novel coreset approach which
generates fair synthetic representative samples along with sample-level weights
to be used in downstream learning tasks. FWC uses an efficient majority
minimization algorithm to minimize the Wasserstein distance between the
original dataset and the weighted synthetic samples while enforcing demographic
parity. We show that an unconstrained version of FWC is equivalent to Lloyd's
algorithm for k-medians and k-means clustering. Experiments conducted on both
synthetic and real datasets show that FWC: (i) achieves a competitive
fairness-utility tradeoff in downstream models compared to existing approaches,
(ii) improves downstream fairness when added to the existing training data, and
(iii) can be used to reduce biases in predictions from large language models
(GPT-3.5 and GPT-4).
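The abstract notes that an unconstrained version of FWC is equivalent to Lloyd's algorithm for k-means and k-medians clustering. A minimal sketch of that unconstrained special case, assuming a plain NumPy implementation of Lloyd's iterations (not the authors' code): the centroids play the role of the synthetic representative samples, and the fraction of points assigned to each centroid gives the sample-level weights. The function name `lloyd_coreset` and all parameters are illustrative, not from the paper.

```python
import numpy as np

def lloyd_coreset(X, k, iters=50, seed=0):
    """Weighted coreset via Lloyd's algorithm (the unconstrained FWC case
    mentioned in the abstract): centroids serve as synthetic samples, and
    weights are the fraction of original points assigned to each centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared-Euclidean -> k-means).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Update each centroid to the mean of its assigned points.
        for j in range(k):
            pts = X[assign == j]
            if len(pts) > 0:
                centers[j] = pts.mean(axis=0)
    weights = np.bincount(assign, minlength=k) / len(X)
    return centers, weights

# Toy data: two well-separated Gaussian clusters of 100 points each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(8.0, 1.0, (100, 2))])
centers, weights = lloyd_coreset(X, k=2)
```

The full FWC method additionally enforces a demographic-parity constraint while minimizing the Wasserstein distance between the weighted synthetic samples and the original data; this sketch covers only the unconstrained clustering reduction.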