Multilingual Safety Alignment via Self-Distillation
arXiv:2605.02971v1 Announce Type: cross
Abstract: Large language models (LLMs) exhibit severe multilingual safety misalignment: they possess strong safeguards in high-resource languages but remain highly vulnerable to jailbreak attacks in low-resource languages…