Causal Bandit Over Unknown Graphs: Upper Confidence Bounds With Backdoor Adjustment

arXiv:2502.02020v3

Abstract: The causal bandit problem seeks to identify, through sequential experimentation, an intervention that maximizes the expected reward in a causal system modeled by a directed acyclic graph (DAG). Existing methods typically assume that the causal graph is known or impose restrictive structural assumptions. In this paper, we study causal bandit problems when the causal graph is unknown. We first consider Gaussian DAG models without latent confounders. By combining observational and experimental data collected sequentially during the bandit process, we identify candidate backdoor adjustment sets for each intervention arm. These sets enable estimation of causal effects and construction of upper confidence bounds that integrate information from both data sources. Based on these estimates, we propose a new algorithm, termed backdoor-adjustment upper confidence bound (BA-UCB), for sequential intervention selection. We establish finite-time upper bounds on the cumulative regret of BA-UCB, showing improved rates and a substantially relaxed dependency on the number of intervention arms compared to standard multi-armed bandit methods. We further extend the methodology and theoretical guarantees to settings with latent confounders, where the observed variables are modeled by an acyclic directed mixed graph. Simulation studies demonstrate that BA-UCB achieves substantially lower cumulative regret and favorable computational efficiency relative to existing approaches.
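The pipeline the abstract describes, estimating interventional means via backdoor adjustment on observational data and then running a UCB loop over intervention arms, can be illustrated with a minimal toy sketch. This is not the paper's BA-UCB algorithm: it assumes a hypothetical two-arm discrete example with a single known confounder `Z` (in the paper, candidate adjustment sets are learned from data), and it uses a plain UCB1 index warm-started with the backdoor-adjusted estimates.

```python
import math
import random

random.seed(0)

# Toy SCM: Z -> X, Z -> Y, X -> Y, so {Z} is a valid backdoor set for X -> Y.
# Observationally, treatment X depends on Z, confounding the naive mean E[Y | X=x].

def sample_obs():
    """One observational sample (z, x, y) under the confounded policy."""
    z = int(random.random() < 0.5)
    x = int(random.random() < (0.8 if z else 0.2))  # X depends on Z
    y = 1.0 * x + 2.0 * z + random.gauss(0, 0.1)
    return z, x, y

def sample_do(x):
    """One experimental reward under the intervention do(X=x): Z -> X is cut."""
    z = int(random.random() < 0.5)
    return 1.0 * x + 2.0 * z + random.gauss(0, 0.1)

obs = [sample_obs() for _ in range(5000)]

def backdoor_estimate(x):
    """Backdoor adjustment: E[Y | do(X=x)] = sum_z P(z) * E[Y | X=x, Z=z]."""
    n, est = len(obs), 0.0
    for zval in (0, 1):
        pz = sum(1 for z, _, _ in obs if z == zval) / n
        ys = [y for z, xx, y in obs if z == zval and xx == x]
        est += pz * (sum(ys) / len(ys))
    return est

# UCB over the two interventional arms do(X=0), do(X=1), warm-started
# with the adjusted observational estimates (pseudo-count of 1 per arm).
arms = [0, 1]
counts = {a: 1 for a in arms}
means = {a: backdoor_estimate(a) for a in arms}
for t in range(1, 501):
    a = max(arms, key=lambda k: means[k] + math.sqrt(2 * math.log(t + 1) / counts[k]))
    r = sample_do(a)
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]  # incremental mean update

best = max(arms, key=lambda k: means[k])
```

In this toy model the true interventional means are E[Y | do(X=0)] = 1.0 and E[Y | do(X=1)] = 2.0, while the naive confounded mean E[Y | X=1] is inflated (about 2.6), which is why the adjustment matters before using observational data to warm-start the bandit.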

