Led to Mislead: Adversarial Content Injection for Attacks on Neural Ranking Models
arXiv:2605.01591v1 Announce Type: cross
Abstract: Neural Ranking Models (NRMs) are central to modern information retrieval but remain highly vulnerable to adversarial manipulation. Existing attacks often rely on heuristics or surrogate models, limitin…
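The kind of vulnerability the abstract describes can be illustrated with a toy sketch. This is not the paper's method: it assumes a simple bag-of-words relevance scorer (a stand-in for an NRM) and shows how greedily injecting query terms into an off-topic document can promote it in the ranking.

```python
# Toy illustration of adversarial content injection against a ranker.
# NOT the paper's attack; the scorer and inject() heuristic are assumptions.
from collections import Counter

def score(query: str, doc: str) -> float:
    """Relevance = count of query-term occurrences in the document."""
    q_terms = query.lower().split()
    doc_counts = Counter(doc.lower().split())
    return sum(doc_counts[t] for t in q_terms)

def rank(query: str, docs: list[str]) -> list[int]:
    """Return document indices sorted by descending relevance score."""
    return sorted(range(len(docs)), key=lambda i: -score(query, docs[i]))

def inject(query: str, doc: str, budget: int = 3) -> str:
    """Greedily append the query term that most increases the score."""
    adv = doc
    for _ in range(budget):
        best = max(query.lower().split(),
                   key=lambda t: score(query, adv + " " + t))
        adv = adv + " " + best
    return adv

query = "neural ranking attack"
docs = ["neural ranking models rank documents",
        "cooking recipes for pasta",
        "gardening tips"]
print(rank(query, docs))          # → [0, 1, 2]: the off-topic docs rank last
docs[2] = inject(query, docs[2])  # adversarially boost the gardening doc
print(rank(query, docs))          # → [2, 0, 1]: injected doc now ranks first
```

Real NRMs are far harder to game than term counting, but the same greedy loop, driven by model scores (or a surrogate model's scores, as the abstract notes existing attacks do), captures the basic attack structure.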