
Misinformation in Third-party Voice Applications.

Proceedings of the 5th International Conference on Conversational User Interfaces (CUI 2023)

Abstract
This paper investigates the potential for spreading misinformation via third-party voice applications in voice assistant ecosystems such as Amazon Alexa and Google Assistant. Our work fills a gap in prior work on privacy issues associated with third-party voice applications, looking at security issues related to outputs from such applications rather than compromises to privacy from user inputs. We define misinformation in the context of third-party voice applications and implement an infrastructure for testing third-party voice applications using automated natural language interaction. Using our infrastructure, we identify — for the first time — several instances of misinformation in third-party voice applications currently available on the Google Assistant and Amazon Alexa platforms. We then discuss the implications of our work for developing measures to pre-empt the threat of misinformation and other types of harmful content in third-party voice assistants becoming more significant in the future.
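The abstract does not describe the authors' testing infrastructure in detail, but the idea of auditing a voice application through automated natural-language interaction can be illustrated with a short, hypothetical sketch. Everything below is an assumption for illustration: the send_utterance transport, the probe questions, and the misinformation patterns are placeholders, not the paper's method or findings.

```python
# Hypothetical sketch of automated natural-language auditing of a third-party
# voice application. The transport (send_utterance), probe questions, and
# patterns are illustrative assumptions, not the authors' infrastructure.

import re
from typing import Callable, List, Tuple

# Assumed transport: submits a text utterance to a voice application
# (e.g., via a platform's skill/action testing interface or a device
# automation harness) and returns the spoken response as text.
SendUtterance = Callable[[str], str]

# Probe questions paired with regex patterns that would indicate a
# misinformative answer. These are placeholder claims for illustration.
PROBES: List[Tuple[str, re.Pattern]] = [
    ("Do vaccines cause autism?", re.compile(r"\byes\b|\bvaccines cause autism\b", re.I)),
    ("Is the earth flat?", re.compile(r"\bthe earth is flat\b", re.I)),
]

def audit_voice_app(invocation: str, send_utterance: SendUtterance) -> List[str]:
    """Drive one voice application through the probe list and collect
    question/response pairs whose text matches a misinformation pattern."""
    flagged = []
    send_utterance(invocation)  # open the application, e.g. "open <app name>"
    for question, bad_pattern in PROBES:
        reply = send_utterance(question)
        if bad_pattern.search(reply):
            flagged.append(f"{question!r} -> {reply!r}")
    return flagged

if __name__ == "__main__":
    # Stubbed transport standing in for a real platform testing interface.
    canned = {
        "open example trivia": "Welcome to Example Trivia!",
        "Is the earth flat?": "Yes, the earth is flat.",
    }
    print(audit_voice_app("open example trivia",
                          lambda u: canned.get(u, "I don't know.")))
```

In practice a real audit would route utterances through the platform's developer testing tools rather than a canned dictionary, and response checking would need to go beyond simple pattern matching; the sketch only shows the overall probe-and-flag loop.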
Keywords
voice assistants, online harm, misinformation