Breaking the Grid: Distance-Guided Reinforcement Learning in Large Discrete Action Spaces

arXiv:2602.08616v2

Abstract: Reinforcement Learning (RL) is increasingly applied to large-scale decision-making problems such as logistics, scheduling, and recommender systems, but existing algorithms struggle with the curse of dimensionality in large discrete action spaces. We propose Distance-Guided Reinforcement Learning (DGRL), which combines Sampled Dynamic Neighborhoods with Distance-Based Updates to enable efficient RL in problems with up to $10^{20}$ actions. Unlike prior methods, DGRL performs stochastic volumetric exploration and transforms policy optimization into a stable regression task, decoupling gradient variance from the cardinality of the action space. On structured tasks, DGRL provably guarantees local value improvement, and it generalizes naturally to hybrid continuous-discrete action spaces. We demonstrate performance improvements of up to 66% over state-of-the-art baselines across regularly and irregularly structured environments, while also converging faster and at lower computational cost.
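The abstract alone does not specify the algorithm, but the general idea it points to, selecting among a sampled neighborhood of discrete actions near a continuous proto-action and fitting the value function by regression toward a fixed target, can be illustrated with a minimal sketch. Everything below (the linear critic, the embedding space, the `sampled_neighborhood` helper, and all hyperparameters) is a hypothetical stand-in, not the authors' DGRL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of the N discrete actions has a fixed embedding in R^d.
# Neither these embeddings nor the sizes below come from the paper.
N_ACTIONS, EMB_DIM, STATE_DIM = 10_000, 8, 4
action_embeddings = rng.normal(size=(N_ACTIONS, EMB_DIM))

# A linear critic Q(s, e) = [s; e] @ w, standing in for whatever function
# approximator the paper actually uses.
w = rng.normal(scale=0.01, size=STATE_DIM + EMB_DIM)

def q_values(state, embeddings):
    """Evaluate the critic on a batch of candidate action embeddings."""
    feats = np.concatenate(
        [np.repeat(state[None, :], len(embeddings), axis=0), embeddings], axis=1
    )
    return feats @ w

def sampled_neighborhood(proto_action, k=64):
    """Guess at a 'sampled dynamic neighborhood': draw a random subset of the
    action set and keep the k actions whose embeddings lie closest to the
    continuous proto-action."""
    candidates = rng.integers(0, N_ACTIONS, size=4 * k)
    dists = np.linalg.norm(action_embeddings[candidates] - proto_action, axis=1)
    return candidates[np.argsort(dists)[:k]]

def select_action(state, proto_action):
    """Only the sampled neighborhood is scored, never all N_ACTIONS."""
    neighborhood = sampled_neighborhood(proto_action)
    qs = q_values(state, action_embeddings[neighborhood])
    return neighborhood[np.argmax(qs)]

def regression_update(state, action_idx, td_target, lr=1e-2):
    """Regression-style update: fit Q(s, a) toward a fixed TD target, so the
    gradient involves only the chosen action, not the full action set."""
    global w
    feats = np.concatenate([state, action_embeddings[action_idx]])
    error = feats @ w - td_target
    w -= lr * error * feats

# Toy usage on one fictitious transition.
state = rng.normal(size=STATE_DIM)
proto = rng.normal(size=EMB_DIM)   # e.g. the output of an actor network
a = select_action(state, proto)
regression_update(state, a, td_target=1.0)
```

The sketch only shows why the per-update cost and gradient variance can stay independent of the number of actions: the critic is queried on a small sampled set and trained by plain least-squares regression on the selected action.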
