
Fairness Concerns in App Reviews: A Study on AI-based Mobile Apps

CoRR (2024)

Abstract
Fairness is one of the socio-technical concerns that must be addressed in software systems. Considering the popularity of mobile software applications (apps) among a wide range of individuals worldwide, mobile apps with unfair behaviors and outcomes can affect a significant proportion of the global population, potentially more than any other type of software system. Users express a wide range of socio-technical concerns in mobile app reviews. This research aims to investigate fairness concerns raised in mobile app reviews. Our research focuses on AI-based mobile app reviews as the chance of unfair behaviors and outcomes in AI-based mobile apps may be higher than in non-AI-based apps. To this end, we first manually constructed a ground-truth dataset, including 1,132 fairness and 1,473 non-fairness reviews. Leveraging the ground-truth dataset, we developed and evaluated a set of machine learning and deep learning models that distinguish fairness reviews from non-fairness reviews. Our experiments show that our best-performing model can detect fairness reviews with a precision of 94%. We then applied the best-performing model on approximately 9.5M reviews collected from 108 AI-based apps and identified around 92K fairness reviews. Next, applying the K-means clustering technique to the 92K fairness reviews, followed by manual analysis, led to the identification of six distinct types of fairness concerns (e.g., ‘receiving different quality of features and services in different platforms and devices’ and ‘lack of transparency and fairness in dealing with user-generated content’). Finally, the manual analysis of 2,248 app owners’ responses to the fairness reviews identified six root causes (e.g., ‘copyright issues’) that app owners report to justify fairness concerns.
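The abstract describes a two-stage pipeline: a supervised classifier trained on the labeled dataset to detect fairness reviews, then K-means clustering over the detected reviews to surface concern types. The specific models are not named here, so the following is a minimal illustrative sketch using a TF-IDF + logistic-regression classifier and scikit-learn's KMeans; all review texts, label values, and the choice of k are toy stand-ins, not the paper's data or configuration.

```python
# Hedged sketch of the described pipeline (not the authors' implementation):
# classify reviews as fairness vs. non-fairness, then cluster fairness reviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy stand-ins for the paper's ground-truth dataset (1 = fairness review).
train_texts = [
    "the app treats free users unfairly compared to paid users",
    "premium features are locked behind an unfair paywall",
    "my content was removed with no explanation, totally biased",
    "great camera app, love the filters",
    "crashes on startup after the last update",
    "battery drain is terrible on my phone",
]
train_labels = [1, 1, 1, 0, 0, 0]

# Stage 1: train a fairness-review detector on the labeled data.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

# Apply the detector to unseen reviews (in the paper, ~9.5M reviews).
new_reviews = [
    "ios users get better features than android users, not fair",
    "nice UI and fast loading",
]
preds = clf.predict(vectorizer.transform(new_reviews))

# Stage 2: cluster the fairness reviews to identify concern types
# (the paper found six clusters; k=2 here only because the toy set is tiny).
fairness_texts = [t for t, y in zip(train_texts, train_labels) if y == 1]
km = KMeans(n_clusters=2, n_init=10, random_state=0)
km.fit(vectorizer.transform(fairness_texts))
```

In the paper this clustering step was followed by manual analysis of each cluster; automated clustering alone only groups similar reviews, it does not name the concern types.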
Keywords
Fairness, AI-based Mobile Apps, App Reviews, Machine Learning, Deep Learning