How Teams Communicate about the Quality of ML Models: A Case Study at an International Technology Company

Proceedings of the ACM on Human-Computer Interaction (2021)

Abstract
Machine learning (ML) has become a crucial component in software products, either as part of the user experience or used internally by software teams. Prior studies have explored how ML is affecting development team roles beyond data scientists, including user experience designers, program managers, developers, and operations engineers. However, there has been little investigation of how team members in different roles communicate about ML, in particular about the quality of models. We use the general term quality to look beyond technical issues of model evaluation, such as accuracy and overfitting, to any issue affecting whether a model is suitable for use, including ethical, engineering, operations, and legal considerations. What challenges do teams face in discussing the quality of ML models? What work practices mitigate those challenges? To address these questions, we conducted a mixed-methods study at a large software company, first interviewing 15 employees in a variety of roles, then surveying 168 employees to broaden our understanding. We found several challenges, including a mismatch between user-focused and model-focused notions of performance, misunderstandings about the capabilities and limitations of evolving ML technology, and difficulties in understanding concerns beyond one's own role. We found several mitigation strategies, including the use of demos during discussions to keep the team customer-focused.
Keywords
communicating ML models, communication, machine learning, presentation of work, quality of ML models, teams