Spatial-Regularization-Aware Dual-Branch Collaborative Inference for Training-Free OVSS in Remote Sensing Imagery
arXiv:2601.21159v2 Announce Type: replace
Abstract: High-resolution remote sensing images contain densely distributed objects with pronounced scale variations and complex boundaries, which impose higher demands on both the geometric localization and semantic prediction capabilities of semantic segmentation models. Existing training-free open-vocabulary semantic segmentation (OVSS) methods typically fuse Contrastive Language-Image Pretraining (CLIP) with vision foundation models (VFMs) through one-way injection and shallow post-processing, making it difficult to satisfy these requirements. To address this issue, we propose a spatial-regularization-aware dual-branch collaborative inference framework for training-free OVSS, termed SDCI. First, during feature encoding, SDCI introduces a cross-model attention fusion (CAF) module, which guides collaborative inference by injecting the self-attention maps of the CLIP and VFM branches into each other. Second, we propose a bidirectional cross-graph diffusion refinement (BCDR) module that enhances the reliability of the dual-branch segmentation scores through iterative random-walk diffusion. Finally, we incorporate low-level superpixel structures and develop a convex-optimization-based superpixel collaborative prediction (CSCP) mechanism to further refine object boundaries. Experiments on multiple remote sensing semantic segmentation benchmarks demonstrate that our method outperforms existing training-free OVSS approaches. Our code is available at https://github.com/yu-ni1989/SDCI.
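The iterative random-walk diffusion underlying the BCDR module can be illustrated with a minimal sketch: scores are propagated over a feature-affinity graph while staying anchored to the initial predictions. All details here (the Gaussian affinity, the restart weight `alpha`, the iteration count) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of random-walk diffusion of segmentation scores over a
# feature-affinity graph (assumed setup; not the paper's exact BCDR module).
import numpy as np

def random_walk_diffusion(scores, feats, alpha=0.85, n_iters=10, sigma=1.0):
    """Refine per-node class scores by diffusing them along feature affinities.

    scores: (N, C) initial segmentation scores for N nodes and C classes.
    feats:  (N, D) node features used to build the affinity graph.
    alpha:  weight on the propagated scores vs. the initial anchor scores.
    """
    # Pairwise Gaussian affinities between node features.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Row-normalize to a random-walk transition matrix P.
    P = W / W.sum(axis=1, keepdims=True)
    S = scores.copy()
    for _ in range(n_iters):
        # Partially absorbed random walk: propagate neighbors' scores
        # while anchoring to the initial predictions.
        S = alpha * (P @ S) + (1 - alpha) * scores
    return S
```

In this toy form, a node with an ambiguous initial score is pulled toward the dominant class of its feature neighbors, which is the intuition behind using diffusion to improve score reliability.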