SUG-Occ: Explicit Semantics and Uncertainty Guided Sparse Learning for Efficient 3D Occupancy Prediction
arXiv:2601.11396v5 Announce Type: replace
Abstract: 3D semantic occupancy prediction has emerged as a critical perception task for autonomous driving due to its ability to offer voxel-level semantic and geometric understanding of the environment. However, such a refined representation of large-scale scenes incurs prohibitive computation, posing a significant challenge to practical real-time deployment. To address this, we propose SUG-Occ, an explicit semantics and uncertainty guided sparse learning framework for efficient occupancy prediction, which exploits the inherent sparsity of 3D scenes to reduce redundant computation while preserving geometric and semantic integrity. Specifically, we first utilize semantic and uncertainty priors to suppress image projections from free space, while employing explicit unsigned distance encoding to enhance geometric consistency, thereby producing a structurally sparse representation. Second, we introduce a cascade sparse completion module that enables efficient coarse-to-fine reasoning over the sparse representation via hyper cross sparse convolution, generative upsampling, and adaptive pruning. Finally, we propose an object contextual representation (OCR) based mask decoder that refines voxel-wise predictions through lightweight query-context interactions, avoiding expensive attention operations over volumetric features. Extensive experiments on the SemanticKITTI and Occ3D-nuScenes benchmarks demonstrate that the proposed approach outperforms the baselines, achieving notable improvements in both accuracy and efficiency.
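The sparsification idea in the abstract (using occupancy and uncertainty priors to drop free-space voxels before the expensive 3D stages) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function name `sparsify_voxels`, the thresholds, and the use of normalized entropy as the uncertainty measure are all assumptions.

```python
import numpy as np

def sparsify_voxels(probs, occ_thresh=0.3, unc_thresh=0.9):
    """Keep voxels that are likely occupied and have low predictive uncertainty.

    probs: (N, C) per-voxel class probabilities; class 0 is assumed to be free space.
    Returns a boolean mask over the N voxels (True = keep for downstream processing).
    Thresholds and the entropy-based uncertainty are illustrative choices.
    """
    # Semantic prior: probability the voxel is occupied by any non-free class.
    occ_prob = 1.0 - probs[:, 0]
    # Uncertainty prior: Shannon entropy of the class distribution, normalized to [0, 1].
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1) / np.log(probs.shape[1])
    # Suppress voxels that are confidently free or too uncertain to be useful.
    return (occ_prob > occ_thresh) & (entropy < unc_thresh)

# Example: confidently free, confidently occupied, and maximally uncertain voxels.
probs = np.array([
    [0.95, 0.03, 0.02],   # confident free space -> pruned
    [0.05, 0.90, 0.05],   # confident occupied, low entropy -> kept
    [0.34, 0.33, 0.33],   # near-uniform, high entropy -> pruned
])
mask = sparsify_voxels(probs)
```

Downstream sparse operators (e.g. sparse convolutions) would then run only on the voxels where the mask is true, which is the source of the claimed efficiency gains.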