Attention Editing: A Versatile Framework for Cross-Architecture Attention Conversion
arXiv:2604.05688v1 Announce Type: cross
Abstract: Key-Value (KV) cache memory and bandwidth increasingly dominate large language model inference cost in long-context and long-generation regimes. Architectures such as multi-head latent attention (MLA) …
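
For a rough sense of the scale behind this motivation, below is a back-of-the-envelope sketch (illustrative only, not figures from the paper; the 7B-class model shape, the MLA latent dimension, and the helper functions are all assumptions). Standard multi-head attention caches a full key and value vector per head, per layer, per token, while an MLA-style cache stores a single compressed latent per layer, per token:

# Illustrative KV-cache sizing (assumed values; not from the abstract).

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch=1, bytes_per_elem=2):
    # Standard multi-head attention: K and V per head, per layer, per token.
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

def mla_cache_bytes(layers, latent_dim, seq_len, batch=1, bytes_per_elem=2):
    # MLA-style cache: one compressed latent per layer, per token
    # (ignoring the small decoupled RoPE key that MLA also stores).
    return layers * latent_dim * seq_len * batch * bytes_per_elem

# Hypothetical 7B-class shape: 32 layers, 32 KV heads, head_dim 128, fp16.
mha = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=128_000)
mla = mla_cache_bytes(layers=32, latent_dim=512, seq_len=128_000)  # d_c = 512, as in DeepSeek-V2
print(f"MHA KV cache @ 128k context:    {mha / 2**30:.1f} GiB")  # ~62.5 GiB
print(f"MLA latent cache @ 128k context: {mla / 2**30:.1f} GiB")  # ~3.9 GiB

Under these assumed dimensions the compressed latent cache is roughly 16x smaller (2 * kv_heads * head_dim / latent_dim = 8192 / 512), which is the kind of memory and bandwidth gap the abstract's long-context motivation points at.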