cs.AI, cs.AR, cs.DC, cs.PF

Predictive Multi-Tier Memory Management for KV Cache in Large-Scale GPU Inference

arXiv:2604.26968v1 Announce Type: cross
Abstract: Key-value (KV) cache memory management is the primary bottleneck limiting throughput and cost-efficiency in large-scale GPU inference serving. Current systems suffer from three compounding inefficienci…
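To see why KV cache memory dominates GPU serving capacity, a back-of-envelope sizing calculation helps. The model dimensions below are illustrative assumptions (roughly a 7B-parameter decoder with fp16 KV entries), not figures taken from the paper:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    # Each generated token stores one key and one value vector
    # per layer per KV head, hence the leading factor of 2.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Hypothetical 7B-class model: 32 layers, 32 KV heads, head_dim 128,
# 4096-token context, batch of 16 concurrent sequences, fp16 (2 bytes).
total = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128,
                       seq_len=4096, batch=16, dtype_bytes=2)
print(total / 2**30, "GiB")  # → 32.0 GiB
```

At these settings the cache alone consumes 32 GiB, a large fraction of a single accelerator's memory before model weights are counted, which is consistent with the abstract's framing of KV cache management as the primary throughput bottleneck.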