More Efficient Off-Policy Evaluation through Regularized Targeted Learning
Proceedings of the 36th International Conference on Machine Learning (ICML), PMLR 97, pp. 654-663, 2019.
Abstract:
We study the problem of off-policy evaluation (OPE) in Reinforcement Learning (RL), where the aim is to estimate the performance of a new policy given historical data that may have been generated by a different policy or policies. In particular, we introduce a novel doubly-robust estimator for the OPE problem in RL, based on the Targeted…
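To make the OPE setup concrete, below is a minimal sketch of the classic doubly-robust estimator for finite-horizon trajectories, the kind of baseline that targeted-learning approaches refine. This is not the paper's regularized targeted estimator; the callables `q_hat`, `v_hat`, `pi_e`, and `pi_b` are hypothetical placeholders the caller must supply.

```python
import numpy as np

def doubly_robust_value(trajectories, q_hat, v_hat, pi_e, pi_b, gamma=1.0):
    """Doubly-robust OPE estimate, averaged over a dataset of trajectories.

    trajectories: list of trajectories, each a list of (s, a, r) tuples
                  collected under the behavior policy.
    q_hat(s, a):  fitted action-value estimate for the evaluation policy.
    v_hat(s):     fitted state-value estimate, e.g. E_{a ~ pi_e}[q_hat(s, a)].
    pi_e(a, s):   evaluation-policy probability of action a in state s.
    pi_b(a, s):   behavior-policy probability (known or estimated).
    """
    estimates = []
    for traj in trajectories:
        v_dr = 0.0
        # Backward recursion:
        # V_DR(t) = v_hat(s_t) + rho_t * (r_t + gamma * V_DR(t+1) - q_hat(s_t, a_t))
        for (s, a, r) in reversed(traj):
            rho = pi_e(a, s) / pi_b(a, s)  # per-step importance ratio
            v_dr = v_hat(s) + rho * (r + gamma * v_dr - q_hat(s, a))
        estimates.append(v_dr)
    return float(np.mean(estimates))
```

The estimate is unbiased if either the value model (`q_hat`/`v_hat`) or the behavior-policy model (`pi_b`) is correct, which is the "doubly robust" property the abstract refers to.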