Discovering What You Can Control: Interventional Boundary Discovery for Reinforcement Learning
arXiv:2603.18257v2 Announce Type: replace-cross
Abstract: When an RL agent's observations contain distractors driven by the same confounders as its true state, observational data alone cannot identify which dimensions the agent controls; in our benchmarks, even state-conditioned observational selectors can collapse when distractors mimic controllable state variables. We propose Interventional Boundary Discovery (IBD), which treats the agent's own action channel as a source of randomized interventions: randomizing actions implements an interventional contrast, and per-dimension two-sample tests with false-discovery-rate (FDR) correction yield a binary mask over observation dimensions. Across 12 continuous-control settings with up to 100 distractors, IBD matches oracle return in 11 of 12 settings, while observational baselines, including mutual information, state-conditioned forward models, and gradient-based sensitivity, often underperform simply passing the full observation to SAC.
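The masking step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes two batches of observations collected under two different randomized-action regimes, runs a Kolmogorov-Smirnov two-sample test per observation dimension (the paper does not name a specific test), and controls the FDR with the standard Benjamini-Hochberg step-up procedure. The function name `interventional_mask` and its interface are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def interventional_mask(obs_a, obs_b, alpha=0.05):
    """Binary mask over observation dimensions via per-dimension
    two-sample tests with Benjamini-Hochberg FDR control.

    obs_a, obs_b: (n, d) arrays of observations gathered under two
    different randomized action regimes (hypothetical interface).
    Returns a boolean mask of length d: True = distribution differs
    across regimes, i.e. the dimension responds to actions.
    """
    d = obs_a.shape[1]
    # One two-sample KS test per observation dimension.
    pvals = np.array(
        [ks_2samp(obs_a[:, j], obs_b[:, j]).pvalue for j in range(d)]
    )
    # Benjamini-Hochberg step-up: sort p-values, compare to alpha*k/d,
    # and reject all hypotheses up to the largest k passing its threshold.
    order = np.argsort(pvals)
    ranked = pvals[order]
    thresholds = alpha * np.arange(1, d + 1) / d
    below = ranked <= thresholds
    k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
    mask = np.zeros(d, dtype=bool)
    mask[order[:k]] = True
    return mask
```

In a pipeline like the one the abstract describes, the resulting mask would be applied to observations before they reach the policy (e.g. SAC), so that only dimensions whose distribution shifts under action randomization are passed through.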