
Distribution-aware Fairness Test Generation

Journal of Systems and Software (2024)

Abstract
This work addresses how to validate group fairness in image recognition software. We propose a distribution-aware fairness testing approach (called DistroFair) that systematically exposes class-level fairness violations in image classifiers via a synergistic combination of out-of-distribution (OOD) testing and semantic-preserving image mutation. DistroFair automatically learns the distribution (e.g., number/orientation) of objects in a set of images and systematically mutates objects in the images to become OOD using three semantic-preserving image mutations: object deletion, object insertion, and object rotation. We evaluate DistroFair with two well-known datasets (CityScapes and MS-COCO) and three commercial image recognition systems (namely, Amazon Rekognition, Google Cloud Vision, and Azure Computer Vision) and find that at least 21% of the images generated by DistroFair result in class-level fairness violations. DistroFair is up to 2.3x more effective than the baseline (generation of images within the observed distribution). Finally, we evaluated the semantic validity of our approach via a user study with 81 participants, using 30 real images and 30 corresponding mutated images generated by DistroFair, and found that the generated images are 80% as realistic as the original images.
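As a rough illustration of the distribution-learning step described above, the Python sketch below learns the observed per-class object-count range from a set of annotated images and flags a mutated image as out-of-distribution when a count leaves that range. All function names and the annotation format here are hypothetical simplifications, not DistroFair's actual API; the paper also learns properties such as object orientation, which this sketch omits.

from collections import defaultdict

def learn_count_distribution(per_image_counts):
    # per_image_counts: list of {class_name: instance_count} dicts,
    # one per image (hypothetical input format).
    counts = defaultdict(list)
    for image in per_image_counts:
        for cls, n in image.items():
            counts[cls].append(n)
    # Here the learned "distribution" is simply the min/max count
    # observed per class.
    return {cls: (min(ns), max(ns)) for cls, ns in counts.items()}

def is_out_of_distribution(image_counts, dist):
    # A mutated image is OOD if any per-class count falls outside the
    # observed range, e.g., after object deletion or insertion.
    for cls, n in image_counts.items():
        lo, hi = dist.get(cls, (0, 0))
        if n < lo or n > hi:
            return True
    return False

dist = learn_count_distribution([
    {"car": 2, "person": 1},
    {"car": 3},
    {"car": 1, "person": 4},
])
# Deleting every car yields a count (0) never observed, hence OOD.
print(is_out_of_distribution({"car": 0, "person": 2}, dist))  # True
print(is_out_of_distribution({"car": 2, "person": 2}, dist))  # False

Such OOD images are the test inputs: in the paper's setting, a class-level fairness violation is exposed when the image recognition service's behavior on a mutated image differs across object classes.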
Keywords
Software testing, Fairness testing, Computer vision