Scaling Pretrained Representations Enables Label-Free Out-of-Distribution Detection Without Fine-Tuning
arXiv:2605.05638v1 Announce Type: new
Abstract: Deep learning models often fail to signal when inputs fall outside their training data manifold, leading to unreliable predictions under distribution shift. Prior work suggests that effectiv…