Human Performance Consequences of Normative and Contrastive Explanations: An Experiment in Machine Learning for Reliability Maintenance
Artificial Intelligence (2023)
Abstract
Decision aids based on artificial intelligence and machine learning can benefit human decisions and system performance, but they can also provide incorrect advice and invite operators to rely on automation inappropriately. This paper examined the extent to which example-based explanations could improve reliance on a machine learning-based decision aid. Participants engaged in a preventive maintenance task, diagnosing the condition of three components of a hydraulic system. A decision aid based on machine learning provided advice but was not always reliable. Three explanation displays (baseline, normative, and normative plus contrastive) were manipulated within participants. With the normative explanation display, we found improvements in participants' decision time and subjective workload. With the addition of contrastive explanations, we found improvements in participants' hit rate and sensitivity in discriminating between correct and incorrect ML advice. Implications for the design of explainable interfaces to support human-AI interaction in data-intensive environments are discussed.
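The abstract reports hit rate and sensitivity as measures of how well participants discriminated correct from incorrect ML advice; in signal detection terms, sensitivity is conventionally computed as d'. The sketch below shows that standard computation, assuming the common signal-detection formulation with a log-linear correction for extreme rates; the function name, trial coding, and example counts are illustrative and not taken from the paper.

```python
from statistics import NormalDist

def sensitivity_d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') for discriminating correct vs. incorrect advice.

    Coding assumption (not specified in the abstract): a 'hit' is correctly
    rejecting incorrect ML advice; a 'false alarm' is rejecting correct advice.
    """
    # Log-linear correction (add 0.5 to each cell, 1 to each denominator)
    # keeps the z-transform finite when a rate would be exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical confusion counts for one participant's advice trials.
print(round(sensitivity_d_prime(hits=40, misses=10,
                                false_alarms=15, correct_rejections=35), 3))
```

Higher d' indicates better discrimination between correct and incorrect advice; a d' near zero would mean the participant accepted or rejected advice without regard to its correctness.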
Keywords
Human-AI interaction, Explainable AI, Automation reliance behavior, Automation transparency