Detection Without Correction: A Robust Asymmetry in Activation-Based Hallucination Probing

arXiv:2604.13068v2

Abstract: Activation-based linear probing is widely proposed as a method for both detecting and correcting hallucinations in autoregressive language models. We present an empirical study across seven models spanning 117M to 7B parameters and three architecture families (GPT-2, Pythia, Qwen2.5) that documents a robust asymmetry: linear probes can detect hallucination signals with above-chance accuracy in larger models, but activation steering along the probe-derived direction fails to correct hallucinations in 7 of 7 models tested. We further find that output-confidence baselines outperform activation probes on raw detection AUC for every model above 410M parameters, with the gap reaching 0.157 AUC for Pythia-6.9B. The probe's distinguishing value is therefore not detection accuracy but temporal positioning: probe signals are accessible at position zero (before any output tokens are produced), enabling pre-generation flagging that output-based methods structurally cannot provide. The temporal signal is statistically significant in two of seven models (Pythia-1.4B, p = 0.012; Qwen2.5-7B, p = 0.038) and absent in models below 400M parameters and in the base-only Pythia-6.9B. We position these findings as a clean negative result for the dominant probing-as-detection-and-control research direction, and as initial evidence that probe-based methods occupy a complementary deployment niche, pre-generation flagging, rather than competing with output-based detectors on raw accuracy.
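To make the detect-then-steer pipeline the abstract evaluates concrete, here is a minimal sketch of both halves: a logistic-regression probe fit on cached residual-stream activations (detection), and a forward hook that adds the probe's weight vector back into the residual stream during generation (the steering intervention that the paper reports does not correct hallucinations). Everything here is an illustrative assumption rather than the authors' code: the cached-activation format, the choice of `gpt2`, the mid-depth layer index 6, the steering strength `alpha`, and the random stand-in for the probe direction are all hypothetical.

```python
# Minimal sketch of activation probing plus probe-direction steering.
# Assumptions (not from the paper): activations are cached as an (N, d) array
# taken at the final prompt token ("position zero" in the abstract's sense,
# before any output tokens exist); labels mark hallucinated vs. faithful runs.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from transformers import AutoModelForCausalLM, AutoTokenizer

# --- 1. Detection: linear probe on position-zero activations ---------------
def fit_probe(acts_train, y_train, acts_test, y_test):
    """Fit a logistic-regression probe and report its detection AUC."""
    probe = LogisticRegression(max_iter=1000).fit(acts_train, y_train)
    auc = roc_auc_score(y_test, probe.predict_proba(acts_test)[:, 1])
    return probe, auc
# With real cached data:  probe, auc = fit_probe(A_tr, y_tr, A_te, y_te)

# --- 2. Correction attempt: steer along the probe-derived direction --------
def steering_hook(direction, alpha):
    """Add alpha * (unit) direction to a transformer block's hidden states."""
    direction = direction / direction.norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * direction.to(hidden.dtype).to(hidden.device)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

model = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")

# In practice the direction would be torch.tensor(probe.coef_[0]); a random
# vector stands in here so the sketch runs without cached activations.
probe_direction = torch.randn(768)  # gpt2 hidden size d_model = 768
layer = model.transformer.h[6]      # arbitrary mid-depth block
handle = layer.register_forward_hook(steering_hook(probe_direction, alpha=-4.0))

ids = tok("The capital of Australia is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=10, do_sample=False,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()  # detach the hook once the steered generation is done
```

Note that step 1 only needs the activation at the last prompt token, so the probe's flag is available before `generate()` is ever called; that timing, rather than raw AUC, is the deployment niche the abstract argues for.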
