Explainable Human-AI Interaction: A Planning Perspective
Synthesis Lectures on Artificial Intelligence and Machine Learning (2024)
Abstract
From its inception, AI has had a rather ambivalent relationship with humans
– swinging between their augmentation and replacement. Now, as AI technologies
enter our everyday lives at an ever-increasing pace, there is a greater need
for AI systems to work synergistically with humans. One critical requirement
for such synergistic human-AI interaction is that the AI systems be explainable
to the humans in the loop. To do this effectively, AI agents need to go beyond
planning with their own models of the world, and take into account the mental
model of the human in the loop. Drawing from several years of research in our
lab, we will discuss how the AI agent can use these mental models to either
conform to human expectations, or change those expectations through explanatory
communication. While the main focus of the book is on cooperative scenarios, we
will point out how the same mental models can be used for obfuscation and
deception. Although the book is primarily driven by our own research in these
areas, in every chapter, we will provide ample connections to relevant research
from other groups.