DARLING: Detection Augmented Reinforcement Learning with Non-Stationary Guarantees
arXiv:2604.16684v2 Announce Type: replace
Abstract: We study model-free reinforcement learning (RL) in non-stationary finite-horizon episodic Markov decision processes (MDPs) without prior knowledge of the non-stationarity. We focus on the piecewise stationary (PS) setting, where both the rewards and the transition dynamics can change at unknown times. We first revisit existing state-of-the-art approaches and identify theoretical and practical limitations that reshape the current landscape of performance guarantees. To characterize the difficulty of the problem, we establish the first minimax lower bounds for PS-RL in tabular and linear MDPs. We then introduce Detection Augmented Reinforcement Learning (DARLING), a modular wrapper for PS-RL that applies to both tabular and linear MDPs without prior knowledge of the change points. In tabular MDPs, under change-point separability and reachability conditions, DARLING improves on the best known dynamic regret bounds and matches our minimax lower bound. In linear MDPs, DARLING matches the minimax lower bound when the relevant reachability parameters are known, and our analysis clarifies the structural obstacles that distinguish this setting from the tabular case. Finally, through extensive experiments across diverse non-stationary benchmarks, we show that DARLING consistently surpasses state-of-the-art methods.
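The abstract describes DARLING as a modular wrapper that augments a base RL learner with change detection; a natural reading is a detect-and-restart scheme, where stale statistics are discarded once a change is flagged. The paper's actual procedure is not reproduced here; the following is a minimal sketch of that pattern on a piecewise-stationary bandit, with `BaseLearner`, `ChangeDetector`, and all window/threshold parameters being illustrative assumptions rather than the paper's method.

```python
import math
import random


class BaseLearner:
    """Hypothetical stand-in for any near-optimal stationary learner
    (e.g., a UCB-style algorithm). Only the interface matters to the
    wrapper: act, update, and being cheap to re-instantiate."""

    def __init__(self, n_actions: int):
        self.n_actions = n_actions
        self.counts = [0] * n_actions
        self.means = [0.0] * n_actions
        self.t = 0

    def act(self) -> int:
        self.t += 1
        # Play each action once, then use optimistic (UCB1-style) selection.
        for a in range(self.n_actions):
            if self.counts[a] == 0:
                return a
        return max(
            range(self.n_actions),
            key=lambda a: self.means[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, action: int, reward: float) -> None:
        self.counts[action] += 1
        self.means[action] += (reward - self.means[action]) / self.counts[action]


class ChangeDetector:
    """Hypothetical sliding-window detector: flags a change when the mean
    reward of the recent half of the window drifts from the older half by
    more than a threshold. Real detectors control false alarms far more
    carefully than this toy rule."""

    def __init__(self, window: int = 100, threshold: float = 0.3):
        self.window, self.threshold = window, threshold
        self.buffer: list[float] = []

    def step(self, reward: float) -> bool:
        self.buffer.append(reward)
        if len(self.buffer) < self.window:
            return False
        self.buffer = self.buffer[-self.window:]
        half = self.window // 2
        old = sum(self.buffer[:half]) / half
        new = sum(self.buffer[half:]) / (self.window - half)
        return abs(new - old) > self.threshold


def detection_augmented_loop(n_steps: int = 5000, n_actions: int = 3) -> None:
    """Run the base learner, monitor rewards, and restart both the learner
    and the detector whenever a change is flagged."""
    learner, detector = BaseLearner(n_actions), ChangeDetector()

    # Piecewise-stationary environment: the best action switches midway.
    def reward(action: int, t: int) -> float:
        best = 0 if t < n_steps // 2 else n_actions - 1
        p = 0.9 if action == best else 0.1
        return float(random.random() < p)

    for t in range(n_steps):
        a = learner.act()
        r = reward(a, t)
        learner.update(a, r)
        if detector.step(r):
            # Change detected: discard stale statistics and start fresh.
            learner = BaseLearner(n_actions)
            detector = ChangeDetector()


if __name__ == "__main__":
    detection_augmented_loop()
```

The design point the sketch is meant to convey is modularity: the wrapper treats the base learner as a black box with near-optimal guarantees in stationary stretches, so (per the abstract) the same construction can be instantiated for tabular or linear MDPs by swapping in an appropriate base algorithm and detector.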