Verifiably Safe and Trusted Human-AI Systems: A Socio-technical Perspective.

TAS (2023)

Abstract
Replacing human decision-making with machine decision-making creates challenges for stakeholders' trust in AI systems that interact with, and keep in the loop, the human user. We refer to such systems as Human-AI Systems (HAIS) and argue that the technical safety and social trustworthiness of a HAIS are key to its widespread adoption by society. To develop a verifiably safe and trusted HAIS, it is important to understand how different stakeholders come to perceive an autonomous system (AS) as trusted, and how the context of application affects their perceptions. Technical approaches to meeting trust and safety concerns are widely investigated but under-used in the context of measuring users' trust in autonomous AI systems, and interdisciplinary socio-technical approaches, grounded in social science (trust) and computer science (safety), are rarely considered in HAIS investigations. This paper elaborates on the need to apply formal methods to ensure safe behaviour of a HAIS, grounded in a real-life understanding of users' trust and an analysis of trust dynamics. This work puts forward the core challenges in this area and presents a research agenda for verifiably safe and trusted human-AI systems.
Key words
Trust, Human-AI Systems, Safety, Verification