Should Users Trust Advanced AI Assistants? Justified Trust as a Function of Competence and Alignment
ACM Conference on Fairness, Accountability and Transparency (2024)
Abstract
As AI assistants become increasingly sophisticated and deeply integrated into our lives, questions of trust rise to the forefront. In this paper, we build on philosophical studies of trust to investigate when user trust in AI assistants is justified. By moving beyond a focus on the technical artefact in isolation, we consider the broader societal system in which AI assistants are developed and deployed. We conceptualise user trust in AI assistants as encompassing two main targets, namely AI assistants and their developers. We argue that – as AI assistants become more human-like and exhibit increased agency – discerning when user trust is justified requires consideration not only of competence, on the part of AI assistants and their developers, but also of alignment between the potentially competing interests, values or incentives of AI assistants, developers and users. To help users understand if and when their trust in the competence and alignment of AI assistants and developers is justified, we propose a sociotechnical approach that requires evidence to be collected at three levels: AI assistant design, organisational practices and third-party governance. Taken together, these measures can help harness the transformative potential of AI assistants while also ensuring their operation is ethical and value-aligned.