RobustSora: De-Watermarked Benchmark for Robust AI-Generated Video Detection
arXiv:2512.10248v2 Announce Type: replace-cross
Abstract: The proliferation of AI-generated video models poses new challenges to information integrity and digital trust. A key confound, however, remains unaddressed: commercial generators embed visible overlay watermarks for provenance tracking, yet no existing benchmark controls for this variable, leaving open whether detectors learn genuine generation artefacts or merely associate watermark patterns with the AI-generated label. We present RobustSora, a benchmark of 6,500 manually verified videos in four categories: Authentic-Clean (A-C), Generated-Watermarked (G-W), Generated-DeWatermarked (G-DeW), and Authentic-Spoofed (A-S), sourced from Vript, DVF, and UltraVideo (authentic) and from Sora, Sora 2, Pika, Open-Sora 2, and KLing (generated). Two evaluation tasks isolate watermark effects: Task-I (Watermark Erasure Robustness) tests detection on watermark-removed AI videos; Task-II (Watermark Spoofing Robustness) measures false-alarm rates on authentic videos injected with fake watermarks. Across ten models spanning specialized detectors, transformer classifiers, and MLLMs, watermark manipulation induces accuracy changes of $-9.4$ to $+1.6$ pp (mean absolute change 6.6 pp; $p{<}0.01$ for 7/10 models on each task). A placebo control bounds inpainting-artefact confounds at $\le$2 pp, and a watermark-aware training augmentation recovers 3-4 pp on both tasks, together providing causal evidence that detectors actively rely on watermark cues. A per-generator breakdown shows that Sora 2 induces drops of $-11$ to $-14$ pp versus $-3$ to $-6$ pp for Pika and Open-Sora 2, indicating that watermark prominence, rather than detector architecture, is the principal driver of dependency. These results argue for watermark-aware evaluation and training in AIGC video detection. Dataset, evaluation code, and pretrained checkpoints will be released.
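To make the spoofing setup of Task-II concrete, here is a minimal sketch of overlaying a fake semi-transparent watermark patch onto an authentic frame. The function name, patch size, corner placement, and opacity are illustrative assumptions, not the paper's exact protocol or released code.

```python
import numpy as np

def spoof_watermark(frame: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Alpha-blend a fake white watermark patch into the bottom-right corner
    of an RGB frame (hypothetical helper; geometry and opacity are assumed)."""
    out = frame.astype(np.float32).copy()
    h, w = frame.shape[:2]
    ph, pw = h // 8, w // 5            # assumed patch size
    y0, x0 = h - ph - 8, w - pw - 8    # assumed corner placement with margin
    patch = np.full((ph, pw, frame.shape[2]), 255.0, dtype=np.float32)
    region = out[y0:y0 + ph, x0:x0 + pw]
    out[y0:y0 + ph, x0:x0 + pw] = (1.0 - alpha) * region + alpha * patch
    return out.clip(0, 255).astype(np.uint8)

# Apply the spoof to every frame of a (dummy) authentic clip.
clip = np.zeros((4, 256, 256, 3), dtype=np.uint8)
spoofed_clip = np.stack([spoof_watermark(f) for f in clip])
```

A detector that relies on watermark cues will raise its false-alarm rate on such spoofed authentic videos, which is exactly the dependency Task-II measures.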