Safety Anchor: Defending Harmful Fine-tuning via Geometric Bottlenecks
arXiv:2605.05995v1 Announce Type: cross
Abstract: The safety alignment of Large Language Models (LLMs) remains vulnerable to Harmful Fine-tuning (HFT). While existing defenses impose constraints on parameters, gradients, or internal representations, w…
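The abstract mentions defenses that constrain parameters, gradients, or internal representations during fine-tuning. As a minimal illustrative sketch of the parameter-constraint flavor of such defenses (not this paper's geometric-bottleneck method; the function and coefficient names here are assumptions), one can penalize drift of the fine-tuned weights away from the safety-aligned checkpoint:

```python
import numpy as np

# Generic parameter-anchor penalty (illustrative only, not the paper's
# method): fine-tuning minimizes the task loss plus an L2 pull that
# keeps the weights near the aligned checkpoint theta_anchor.

def penalized_loss(task_loss, theta, theta_anchor, lam=0.1):
    """Task loss plus lam * squared L2 distance to the aligned weights."""
    drift = np.sum((theta - theta_anchor) ** 2)
    return task_loss + lam * drift

theta_anchor = np.zeros(4)                    # aligned weights (toy example)
theta = np.array([0.5, -0.5, 0.0, 1.0])       # weights after some fine-tuning
loss = penalized_loss(2.0, theta, theta_anchor, lam=0.1)
# drift = 0.25 + 0.25 + 0.0 + 1.0 = 1.5, so loss = 2.0 + 0.1 * 1.5 = 2.15
```

Larger `lam` trades task adaptation for stronger adherence to the aligned model; representation- or gradient-level defenses apply analogous constraints at other points in the training pipeline.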