Learning by Doing: An Online Causal Reinforcement Learning Framework with Causal-Aware Policy
CoRR (2024)
Abstract
As a key component of intuitive cognition and reasoning in human intelligence, causal knowledge offers great potential for improving the interpretability of reinforcement learning (RL) agents' decision-making by helping reduce the search space. However, there remains a considerable gap in discovering and incorporating causality into RL, which hinders the rapid development of causal RL. In this paper, we explicitly model the generation process of states with a causal graphical model, based on which we augment the policy. We integrate causal structure updating into the RL interaction process through active intervention learning of the environment. To optimize the derived objective, we propose a framework with theoretical performance guarantees that alternates between two steps: using interventions for causal structure learning during exploration, and using the learned causal structure for policy guidance during exploitation. Because public benchmarks that allow direct intervention in the state space are lacking, we design a root cause localization task in our simulated fault alarm environment and empirically demonstrate the effectiveness and robustness of the proposed method against state-of-the-art baselines. Theoretical analysis shows that the performance improvement is attributable to the virtuous cycle of causal-guided policy learning and causal structure learning, which aligns with our experimental results.
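The alternating scheme described above can be illustrated with a minimal toy sketch: interventions during exploration recover causal edges, and the learned graph then restricts the candidate set during exploitation (here, ancestor search for root-cause localization). All names, the three-variable chain environment, and the localization logic are hypothetical illustrations, not the paper's actual implementation.

```python
import random

# Toy environment: three state variables with true causal chain 0 -> 1 -> 2.
# (Hypothetical stand-in for a fault-alarm system's state variables.)
N_VARS = 3
TRUE_PARENT = {1: 0, 2: 1}  # child -> parent

def intervene(var):
    """Simulate do(var): return the downstream variable that responds, if any."""
    return next((c for c, p in TRUE_PARENT.items() if p == var), None)

def learn_structure(n_rounds, rng):
    """Exploration step: actively intervene on random variables and record
    parent -> child edges revealed by the interventions."""
    edges = set()
    for _ in range(n_rounds):
        var = rng.randrange(N_VARS)
        child = intervene(var)
        if child is not None:
            edges.add((var, child))
    return edges

def causal_aware_policy(edges, target):
    """Exploitation step: use the learned graph to shrink the search space
    to ancestors of the target variable (candidate root causes)."""
    ancestors, frontier = set(), {target}
    while frontier:
        node = frontier.pop()
        for parent, child in edges:
            if child == node and parent not in ancestors:
                ancestors.add(parent)
                frontier.add(parent)
    return ancestors

rng = random.Random(0)
edges = learn_structure(20, rng)
print("learned edges:", sorted(edges))
print("candidate root causes of var 2:", sorted(causal_aware_policy(edges, 2)))
```

In the full framework these two steps would feed each other: a better graph yields a smaller, better-guided action space, and the resulting exploration yields more informative interventions.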