Crowdsourcing-Based Web Accessibility Evaluation with Golden Maximum Likelihood Inference.

PACMHCI (2018)

Cited by 12
Abstract
Web accessibility evaluation examines how well websites comply with accessibility guidelines, which help people with disabilities to perceive, navigate, and contribute to the Web. This demanding task usually requires manual assessment by experts with many years of training and experience. However, not enough experts are available to carry out the growing number of evaluation projects, while non-experts often disagree about the presence of accessibility barriers. To address these issues, we introduce a crowdsourcing system with a novel truth inference algorithm that derives reliable and accurate assessments from the conflicting opinions of evaluators. An extensive evaluation on 23,901 complex tasks assessed by 50 people with and without disabilities shows that our approach outperforms state-of-the-art approaches. In addition, we conducted surveys to identify the barriers that people with disabilities encounter most frequently in their daily lives and the difficulty of accessing Web pages when these barriers are present. The frequencies and severities of these barriers correlate with their derived importance in our evaluation project.
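The abstract itself does not spell out the inference procedure. As a rough illustration only of what "truth inference from conflicting evaluator opinions" means, the sketch below runs a simple EM scheme in the spirit of Dawid and Skene over binary barrier judgments: it is not the paper's Golden Maximum Likelihood Inference algorithm, and the function name, the symmetric per-evaluator accuracy model, and the toy data are all assumptions made for illustration.

```python
# Minimal sketch of truth inference from conflicting binary assessments.
# Assumed model: each evaluator has one symmetric accuracy; this is NOT
# the paper's Golden Maximum Likelihood Inference, only an EM-style toy.
from collections import defaultdict


def infer_truth(labels, n_iters=50):
    """labels: list of (task_id, evaluator_id, vote) with vote in {0, 1}.
    Returns {task_id: posterior probability that the true label is 1}."""
    tasks = defaultdict(list)
    for task, worker, vote in labels:
        tasks[task].append((worker, vote))

    # Initialize task posteriors with a per-task majority vote.
    post = {t: sum(v for _, v in obs) / len(obs) for t, obs in tasks.items()}

    for _ in range(n_iters):
        # M-step: re-estimate each evaluator's accuracy as the expected
        # rate of agreement with the current task posteriors.
        num, den = defaultdict(float), defaultdict(float)
        for t, obs in tasks.items():
            p = post[t]
            for w, v in obs:
                num[w] += p if v == 1 else (1 - p)
                den[w] += 1.0
        acc = {w: min(max(num[w] / den[w], 1e-3), 1 - 1e-3) for w in den}

        # E-step: recompute the posterior of label 1 for every task.
        for t, obs in tasks.items():
            p1, p0 = 1.0, 1.0
            for w, v in obs:
                a = acc[w]
                p1 *= a if v == 1 else (1 - a)
                p0 *= (1 - a) if v == 1 else a
            post[t] = p1 / (p1 + p0)
    return post


if __name__ == "__main__":
    # Hypothetical votes: 1 = barrier present, 0 = barrier absent.
    votes = [("page1", "w1", 1), ("page1", "w2", 1), ("page1", "w3", 0),
             ("page2", "w1", 0), ("page2", "w2", 0), ("page2", "w3", 1)]
    print(infer_truth(votes))
```

Under these assumptions, evaluators who agree more often with the inferred consensus receive higher weight in later iterations, which is the general idea behind weighting conflicting crowd opinions instead of taking a plain majority vote.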
Keywords
collaborative work,crowdsourcing,disability,evaluation system,user experience,web accessibility