Mobile Robot Exploration Without Maps via Out-of-Distribution Deep Reinforcement Learning

arXiv:2402.05066v2 Announce Type: replace

Abstract: Autonomous Mobile Robot (AMR) navigation in dynamic, potentially GPS-denied environments, without a priori maps, is an unsolved problem with the potential to expand humanity's capabilities. Conventional modular methods are computationally inefficient and require explicit feature extraction and engineering that inhibit generalization and deployment at scale. We present an Out-of-Distribution (OOD) Deep Reinforcement Learning (DRL) approach that operates in unstructured terrain and avoids dynamic obstacles. We leverage accelerated simulation training in a racetrack with a transition probability to parameterize spatial reasoning with intrinsic exploratory behavior, in a compact, computationally efficient Artificial Neural Network (ANN), which we transfer zero-shot with a reward component that mitigates differences between simulated and real-world physics. Our approach requires no separate high-level planner or real-time cartography and uses a fraction of the computational resources of modular methods, enabling execution on a range of AMRs with different embedded computer payloads.
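The abstract describes a compact policy network that maps sensor observations directly to velocity commands, trained with a shaped reward that mitigates the sim-to-real gap. A minimal sketch of what such a setup could look like is below; the network sizes, the range-sensor observation, and the action-smoothness penalty are all assumptions for illustration, not the paper's actual architecture or reward.

```python
import numpy as np

rng = np.random.default_rng(0)


class CompactPolicy:
    """Tiny MLP mapping range observations to (linear, angular) velocity.
    Dimensions are hypothetical; the paper only states the ANN is compact."""

    def __init__(self, obs_dim=24, hidden=32, act_dim=2):
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, act_dim))
        self.b2 = np.zeros(act_dim)

    def act(self, obs):
        h = np.tanh(obs @ self.w1 + self.b1)
        # tanh bounds the commands, keeping them safe for an embedded controller
        return np.tanh(h @ self.w2 + self.b2)


def shaped_reward(progress, min_range, action, prev_action,
                  collision_dist=0.3, smooth_coef=0.1):
    """Progress reward plus a collision penalty and a (hypothetical)
    action-smoothness term, one common way to penalize jerky commands
    that tend to transfer poorly from simulation to real physics."""
    r = progress
    if min_range < collision_dist:
        r -= 1.0  # near-collision penalty
    r -= smooth_coef * float(np.sum((action - prev_action) ** 2))
    return r
```

A small policy like this runs comfortably on modest embedded computers, which is the deployment setting the abstract emphasizes; the smoothness coefficient and collision threshold would be tuned during the accelerated simulation training.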
