GRASPrune: Global Gating for Budgeted Structured Pruning of Large Language Models
arXiv:2604.19398v1
Abstract: Large language models (LLMs) are expensive to serve because model parameters, attention computation, and KV caches impose substantial memory and latency costs. We present GRASPrune, a structured pruning …