Handling Reward Misspecification in the Presence of Expectation Mismatch
CoRR (2024)
Abstract
Detecting and handling misspecified objectives, such as reward functions, is widely recognized as one of the central challenges in Artificial Intelligence (AI) safety research. Despite the recognized importance of this problem, we are unaware of any work that clearly defines (a) what constitutes a misspecified objective and (b) what it means to successfully resolve such a misspecification. In this work, we use theory of mind, i.e., the human user's beliefs about the AI agent, as the basis for a formal explanatory framework called Expectation Alignment (EAL) that characterizes objective misspecification and its causes. Our framework not only explains the phenomena studied in existing works but also yields concrete insights into the limitations of existing methods for handling reward misspecification, as well as novel solution strategies. We use these insights to propose a new interactive algorithm that uses the specified reward to infer potential user expectations about the system's behavior. We show how this algorithm can be implemented efficiently by mapping the inference problem to linear programs, and we evaluate our method on a set of standard Markov Decision Process (MDP) benchmarks.
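
As an illustration of the linear-programming reduction mentioned above, the sketch below checks whether a hypothetical user expectation (a lower bound on the discounted occupancy of a state) is consistent with reward-optimal behavior in an MDP. This is a minimal sketch, not the authors' implementation: the occupancy-measure LP is standard, and all names, numbers, and the specific consistency test (solve the unconstrained LP, then re-solve with the expectation added as a constraint and compare optima) are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the paper's code): occupancy-measure LP
# for an MDP, plus a consistency check for a user expectation.
import numpy as np
from scipy.optimize import linprog

def solve_occupancy_lp(P, r, mu0, gamma, A_ub=None, b_ub=None):
    """Maximize expected discounted reward over occupancy measures rho(s, a).

    P[a][s, s']: transition probability, r[s, a]: specified reward,
    mu0: initial state distribution, gamma: discount factor.
    Bellman flow constraint for each state s:
        sum_a rho(s, a) - gamma * sum_{s', a} P[a][s', s] * rho(s', a) = mu0(s)
    """
    n_actions, n_states = len(P), P[0].shape[0]
    n_vars = n_states * n_actions          # rho flattened as index s * n_actions + a

    A_eq = np.zeros((n_states, n_vars))
    for s in range(n_states):
        for a in range(n_actions):
            for sp in range(n_states):
                A_eq[s, sp * n_actions + a] -= gamma * P[a][sp, s]
            A_eq[s, s * n_actions + a] += 1.0

    c = -r.reshape(-1)                     # linprog minimizes, so negate reward
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=mu0,
                   bounds=(0, None), method="highs")

# Toy 2-state, 2-action MDP with made-up numbers.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transitions under action 0
     np.array([[0.1, 0.9], [0.7, 0.3]])]   # transitions under action 1
r = np.array([[1.0, 0.0], [0.0, 0.5]])     # r[s, a]
mu0, gamma = np.array([1.0, 0.0]), 0.95

opt = solve_occupancy_lp(P, r, mu0, gamma)
v_star = -opt.fun                          # optimal value under the specified reward

# Hypothetical user expectation: state 1 has discounted occupancy >= tau.
tau = 2.0
A_ub = np.zeros((1, 4))
A_ub[0, 2] = A_ub[0, 3] = -1.0             # -rho(1, 0) - rho(1, 1) <= -tau
res = solve_occupancy_lp(P, r, mu0, gamma, A_ub=A_ub, b_ub=np.array([-tau]))
consistent = res.success and np.isclose(-res.fun, v_star, atol=1e-6)
print(f"Expectation consistent with reward-optimal behavior: {consistent}")
```

If the added constraint is non-binding, the constrained optimum matches v_star and the expectation is compatible with optimal behavior under the specified reward; a strictly lower (or infeasible) constrained optimum would flag a potential mismatch between the specified reward and the user's expectation.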