Causal Bandits with General Causal Models and Interventions
International Conference on Artificial Intelligence and Statistics (2024)
Abstract
This paper considers causal bandits (CBs) for the sequential design of
interventions in a causal system. The objective is to optimize a reward
function by minimizing a measure of cumulative regret with respect to the best
sequence of interventions in hindsight. The paper advances the results on CBs
in three directions. First, the structural causal models (SCMs) are assumed to
be unknown and drawn arbitrarily from a general class ℱ of
Lipschitz-continuous functions. Existing results are often focused on
(generalized) linear SCMs. Second, the interventions are assumed to be
generalized soft with any desired level of granularity, resulting in an
infinite number of possible interventions. The existing literature, in
contrast, generally adopts atomic and hard interventions. Third, we provide
general upper and lower bounds on regret. The upper bounds subsume (and
improve) known bounds for special cases. The lower bounds are generally
hitherto unknown. These bounds are characterized as functions of the (i) graph
parameters, (ii) eluder dimension of the space of SCMs, denoted by
dim(ℱ), and (iii) the covering number of the
function space, denoted by cn(ℱ). Specifically, the
cumulative achievable regret over horizon T is
𝒪(K d^(L−1) √(T dim(ℱ) log(cn(ℱ)))),
where K is related to the Lipschitz constants, d is the graph's maximum
in-degree, and L is the length of the longest causal path. The upper bound is
further refined for special classes of SCMs (neural network, polynomial, and
linear), and their corresponding lower bounds are provided.
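To illustrate how the stated upper bound scales, the following sketch evaluates the expression 𝒪(K d^(L−1) √(T dim(ℱ) log(cn(ℱ)))) numerically. All parameter values here are illustrative placeholders chosen for the example, not values taken from the paper.

```python
import math

def regret_upper_bound(T, K, d, L, dim_F, cn_F):
    """Evaluate the regret scaling K * d^(L-1) * sqrt(T * dim(F) * log(cn(F))).

    T: horizon; K: Lipschitz-related constant; d: maximum in-degree;
    L: longest causal path length; dim_F: eluder dimension of the SCM class;
    cn_F: covering number of the function space. All hypothetical inputs.
    """
    return K * d ** (L - 1) * math.sqrt(T * dim_F * math.log(cn_F))

# For fixed graph and function-class parameters, the bound grows as sqrt(T):
# quadrupling the horizon T doubles the bound.
r1 = regret_upper_bound(T=1_000, K=2.0, d=3, L=4, dim_F=10, cn_F=100.0)
r4 = regret_upper_bound(T=4_000, K=2.0, d=3, L=4, dim_F=10, cn_F=100.0)
print(round(r4 / r1, 3))  # → 2.0
```

Note the exponential dependence on the path length L through d^(L−1), versus the familiar square-root dependence on the horizon T.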