Leveraging Arbitrary Data Sources for AI-Generated Image Detection Without Sacrificing Generalization
arXiv:2603.00717v2 Announce Type: replace
Abstract: The accelerating advancement of generative models has introduced new challenges for detecting AI-generated images, especially in real-world scenarios where novel generation techniques emerge rapidly. Existing learning paradigms tend to make classifiers data-dependent, resulting in narrow decision margins and, consequently, limited generalization to unseen generative models. We observe that both real and generated images tend to form clustered low-dimensional manifolds within high-level feature spaces extracted by pre-trained visual encoders. Building on this observation, we propose a single-class attribution modeling framework that first amplifies the intrinsic differences between real and generated images by constructing a compact attribution space from any single-class training set, composed of either real or generated images, and then establishes a more stable decision boundary upon the enlarged separation. This process enhances class distinction and mitigates reliance on generator-specific artifacts, thereby improving cross-model generalization. Extensive experiments show that our method generalizes well across various unseen generative models, outperforming existing detectors by as much as 7.21% in accuracy and 7.20% in cross-model generalization.
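
To make the single-class idea concrete, the following Python sketch fits a one-class model on pre-trained encoder features from real images only and scores test features by their distance to that compact region. This is an illustrative assumption, not the paper's method: random vectors stand in for frozen-encoder features, and a (squared) Mahalanobis distance stands in for the paper's attribution-space construction.

    # Minimal one-class sketch: model the feature manifold of a single class
    # (here, "real") and flag points that fall far from it.
    import numpy as np
    from sklearn.covariance import EmpiricalCovariance

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for frozen visual-encoder features: real and
    # generated images each cluster around their own mean (the clustered
    # low-dimensional-manifold observation).
    real_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 64))
    fake_feats = rng.normal(loc=1.5, scale=1.0, size=(200, 64))

    # Fit the single-class model on real features alone; by symmetry, a
    # generated-only training set would work the same way.
    cov = EmpiricalCovariance().fit(real_feats)

    # Squared Mahalanobis distance to the single-class region is the score;
    # a quantile threshold calibrated on held-out real images serves as the
    # decision boundary.
    threshold = np.quantile(cov.mahalanobis(real_feats), 0.95)
    scores = cov.mahalanobis(fake_feats)
    print(f"flagged as generated: {(scores > threshold).mean():.0%}")

Because the boundary is calibrated against one class's manifold rather than against a specific generator's artifacts, the same threshold can in principle be reused against generators never seen during training, which is the cross-model property the abstract claims.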