Delayed homomorphic reinforcement learning for environments with delayed feedback

arXiv:2604.03641v2 Announce Type: replace-cross

Abstract: Reinforcement learning in real-world systems often involves delayed feedback, which breaks the Markov assumption and impedes both learning and control. Canonical augmentation-based approaches cause state-space explosion, imposing a severe sample-complexity burden. Despite recent progress, state-of-the-art augmentation-based baselines either alleviate this burden mainly for the critic or treat the actor and critic in a non-unified way. In this study, we propose delayed homomorphic reinforcement learning (DHRL), a framework grounded in MDP homomorphisms that defines a belief-equivalence relation over the augmented state space to collapse control-redundant augmented states. In principle, this yields exact abstraction under deterministic dynamics and approximate abstraction under stochastic dynamics, so both the actor and the critic benefit from a shared, structured abstraction mechanism. In finite domains, exact abstraction preserves optimality and recovers the delay-free sample-complexity order, whereas approximate abstraction admits a value-loss bound on the resulting policy. For continuous domains, we introduce deep delayed homomorphic policy gradient (D$^2$HPG), a deep actor-critic instantiation of the DHRL framework. Experiments on continuous-control tasks in MuJoCo show that D$^2$HPG outperforms strong augmentation-based baselines.
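The exact-abstraction regime admits a compact illustration. In a delay-augmented MDP, the agent acts on an augmented state consisting of the last observed state plus the buffer of actions still "in flight"; under deterministic dynamics, the belief over the true current state is a point mass at the forward-simulated state, so all augmented states that simulate to the same point are control-equivalent and can be collapsed. The sketch below shows this collapse under stated assumptions; the dynamics function `f`, the linear system, and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def abstract_state(s_last, pending_actions, f):
    """Collapse a delay-augmented state (s_last, a_1, ..., a_d) to its
    belief point under deterministic dynamics f(s, a) -> s'.

    Any two augmented states that forward-simulate to the same state are
    belief-equivalent: they induce the same optimal behavior, so both the
    actor and the critic can operate on the collapsed state directly
    (the exact-abstraction regime described in the abstract).
    """
    s = np.asarray(s_last, dtype=float)
    for a in pending_actions:  # replay the d actions not yet executed
        s = f(s, a)
    return s

# Illustrative deterministic dynamics (an assumption for this sketch):
# a linear system s' = A s + B a.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
f = lambda s, a: A @ s + B @ np.atleast_1d(a)

s_last = np.array([0.0, 1.0])              # last observed state
pending = [np.array([0.5]), np.array([-0.5])]  # delay d = 2
z = abstract_state(s_last, pending, f)
print(z)  # the state the agent will actually occupy when its next action lands
```

Under stochastic dynamics this point-mass collapse no longer holds exactly, which is where the abstract's approximate abstraction and the accompanying value-loss bound come in.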
