OR-VSKC: Resolving Visual-Semantic Knowledge Conflicts in Operating Rooms with Synthetic Data-Guided Alignment
arXiv:2506.22500v2 Announce Type: replace-cross
Abstract: Automated identification of surgical safety risks is critical for improving patient outcomes; however, Multimodal Large Language Models (MLLMs) frequently suffer from Visual-Semantic Knowledge Conflicts (VS-KC), a phenomenon in which a model possesses the relevant safety knowledge yet fails to activate it when inspecting visual scenes. Investigating this alignment gap in operating rooms (ORs) is impeded by a critical bottleneck: real-world OR data depicting safety violations are scarce and subject to strict privacy constraints. To address this, we introduce OR-VSKC, a benchmark for studying VS-KC and surgical risk perception in strictly regulated OR environments. Constructed via our Protocol-to-Pixel Generative Framework, OR-VSKC comprises 28,190 high-fidelity synthetic images grounded in authoritative safety standards, complemented by an expert-authored, 713-image challenge subset that was validated by multiple experts. The full benchmark is built from authentic OR contexts drawn from the 4D-OR and CAMMA-MVOR datasets: the 4D-OR-based portion serves as the primary benchmark core, while the CAMMA-MVOR-based portion is reserved for external validation and cross-dataset generalization analysis. Evaluations of state-of-the-art MLLMs reveal substantial reliability gaps, even in advanced generalist models. Furthermore, experiments show that fine-tuning on OR-VSKC effectively mitigates VS-KC and enables robust generalization to unseen camera viewpoints. To support reproducible research in safety-critical medical environments, we open-source the code and dataset at https://github.com/zgg2577/VS-KC.