Putting wake words to bed: We speak wake words with systematically varied prosody, but CUIs don't listen.

CUI 2021

Abstract
‘Wake words’ such as "Alexa" or "Hey Siri", as conversation design elements, mimic the interactionally rich ‘summons-answer’ sequence of natural conversation, but their function amounts to little more than a button-push: they simply activate the interface. In practice, however, users vocally overdesign their wake words with all the detail of a ‘real’ interactional summons. We hear users uttering wake words with specific prosody and intonation, as though for a particular recipient in a particular social and pragmatic context. This presents a puzzle for designers of conversational user interfaces (CUIs). Previous research suggests that expert users simplify their talk when interacting with CUIs, but with wake words we observe the opposite. When users do the extra interactional work of varying their wake words in ways that seem ‘recipient-designed’ for a specific other, does that mean designers are successfully eliciting natural interaction, or are they violating user expectations? Our two case studies show how the mismatch between user expectations and current wake-word implementations can lead to cascades of interactional trouble, especially in multi-party conversation. We argue that designers should find new ways to activate CUIs that align users’ expectations with conversational system design.