To Trust or Not to Trust My AI Based Voice Assistant: Dealing with Consumer Uncertainties: An Abstract

From Micro to Macro: Dealing with Uncertainties in the Global Marketplace (2022)

Abstract
Despite the increasing adoption rate of AI-based technologies, as well as their estimated future growth, very little research has explored the factors that influence the use of voice-activated assistants (VAs) in daily life (McLean and Osei-Frimpong 2019; Moriuchi 2019), and the factors affecting users’ trust in interactions with VAs remain underexplored (Foehr and Germelmann 2020). To this aim, this study integrates Human-Computer Interaction literature on the functional and hedonic attributes of the system and on individuals’ perceived privacy risks, adopts a Para-Social Relationship Theory perspective (Turner 1993), and investigates the drivers of users’ trust toward VAs. According to Wirtz et al. (2018), when interacting with AI-based personal assistants, functional elements such as usefulness and perceived ease of use appear to be taken for granted in most cases, but become a barrier if not provided at the level consumers expect. However, the peculiarities of AI agents, and their ability to engage in conversational communication with users, go beyond a functional approach and require a more social-relationship perspective (Foehr and Germelmann 2020). A large body of research has outlined how individuals apply social roles and treat computers as social entities (Nass and Brave 2005; Nass and Moon 2000). This is especially true when technology mimics human-like attributes (Li 2015). As VAs use natural language, interact with users in real time, and are characterised by human-like attributes such as voice, interactions with VAs can be expected to elicit social responses, such as a sense of social presence (Chattaraman et al. 2018). Moreover, being an AI-based technology, VAs are more likely to be perceived as intelligent and skilful (van Doorn et al. 2016). While these social elements can foster trust building, the process may be affected by perceived risks surrounding privacy and security (Lei et al. 2018).
Building on the above, the research develops and tests a comprehensive, theory-driven model of users’ trust, attitude, and intentions to use VAs on a sample of 547 VA users. In addition, a qualitative study involving in-depth interviews is conducted to further investigate trust-building mechanisms with smart technology. The study identifies social presence and social cognition as the main drivers of user trust toward VAs. Further, it shows the existence of different sources of trustworthiness (Foehr and Germelmann 2020) that allow individuals to direct their privacy concerns toward VA producers rather than toward the AI agent (Belanche et al. 2020). Finally, the research confirms the role of functional and hedonic elements in the acceptance of advanced smart technology (Wirtz et al. 2018, 2019), while highlighting the role of emotional reactions in driving users’ attitudes towards human-AI agent interactions (van Pinxteren et al. 2019).
Keywords
AI, Voice-based assistant, Trust, Artificial intelligence, Social cognition, Privacy