Labels Matter More Than Models: Rethinking the Unsupervised Paradigm in Time Series Anomaly Detection
arXiv:2511.16145v2 Announce Type: replace
Abstract: Time series anomaly detection (TSAD) is a critical data mining task often constrained by label scarcity. Consequently, current research predominantly focuses on Unsupervised Time-series Anomaly Detection (UTAD), relying on increasingly complex architectures to model normal data distributions. However, this algorithm-centric trend often overlooks the significant performance gains achievable from the limited anomaly labels available in practical scenarios. This paper challenges the premise that algorithmic complexity is the optimal path for TSAD. Instead of proposing another intricate unsupervised model, we present a comprehensive benchmark and empirical study to rigorously compare supervised and unsupervised paradigms. To isolate the value of labels, we introduce STAND, a deliberately minimalist supervised baseline. Extensive experiments on five public datasets demonstrate that: (1) Labels matter more than models: under a limited labeling budget, simple supervised models significantly outperform complex state-of-the-art unsupervised methods; (2) Supervision yields higher returns: the performance gain from minimal supervision far exceeds the incremental gains from architectural innovations; and (3) Practicality: STAND exhibits superior prediction consistency and anomaly localization compared to unsupervised counterparts. These findings advocate for a paradigm shift in TSAD research, urging the community to prioritize data-centric label utilization over purely algorithmic complexity. The code and benchmark are publicly available at https://github.com/EmorZz1G/STAND.