Online Content Moderation: Does Justice Need a Human Face?

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION (2024)

Abstract
Approaches to content moderation online draw from models used to manage behavior in the offline world: undesirable content and behaviors are identified, and sanctions are issued as a means of deterrence. Recent discussions in both offline and online contexts have emphasized the limits of a sanction-based approach and highlighted the gains that would flow from building self-regulatory models within which users are encouraged to take personal responsibility for following rules. Our concern is whether a procedural justice model, one that has increasingly been adopted in offline legal settings, can be used for content moderation online, and furthermore, whether the benefits of this self-regulatory model persist in an online setting where algorithms play a more central role. We review recent studies demonstrating that it is possible to promote self-governance by having platforms employ enforcement procedures that users experience as procedurally just. The challenge of such procedures is that at least some of their features (having voice, receiving an explanation, and being treated with respect) appear to conflict with the reliance on algorithms by many online platforms. This review of the literature suggests that there is not necessarily an inherent conflict between the use of algorithms and the user experience of procedural justice. Drawing on findings from recent empirical work in this space, we argue that the necessary antecedents of procedural justice can be built into the algorithmic decision making used in platforms' content moderation efforts. Doing so, however, requires a nuanced understanding of how algorithms are viewed, both positively and negatively, in building trust during these decision-making processes.
Keywords
Procedural justice, social media, content moderation, self-regulation, algorithmic transparency