cs.DC, cs.LG

Accelerating Local LLMs on Resource-Constrained Edge Devices via Distributed Prompt Caching

arXiv:2602.22812v2 Announce Type: replace
Abstract: Because local LLM inference on resource-constrained edge devices faces a severe performance bottleneck, this paper proposes distributed prompt caching to enhance inference performance by cooperativel…
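
The abstract is truncated, so the paper's exact mechanism is not visible here; as a rough illustration only, the sketch below shows one plausible shape of a cooperative prompt cache: a device first checks its own store for a cached prompt-prefix entry (e.g., precomputed KV state), then asks peer devices, and only recomputes on a full miss. All names (`DistributedPromptCache`, `Peer`, `compute_fn`) are hypothetical and not taken from the paper.

```python
import hashlib
from typing import Any, Callable, Dict, List, Optional


def prefix_key(prompt: str) -> str:
    """Hash a prompt prefix into a stable cache key."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()


class Peer:
    """Stand-in for a nearby edge device exposing its prompt cache (assumed interface)."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._store: Dict[str, Any] = {}

    def lookup(self, key: str) -> Optional[Any]:
        return self._store.get(key)

    def put(self, key: str, entry: Any) -> None:
        self._store[key] = entry


class DistributedPromptCache:
    """Local-first cache that falls back to peers before recomputing (illustrative sketch)."""

    def __init__(self, peers: List[Peer], compute_fn: Callable[[str], Any]) -> None:
        self._local: Dict[str, Any] = {}
        self._peers = peers
        self._compute_fn = compute_fn  # e.g., runs prefill to produce KV state

    def get(self, prompt: str) -> Any:
        key = prefix_key(prompt)
        # 1) Local hit: cheapest path.
        if key in self._local:
            return self._local[key]
        # 2) Peer hit: fetch from a cooperating device instead of recomputing.
        for peer in self._peers:
            entry = peer.lookup(key)
            if entry is not None:
                self._local[key] = entry
                return entry
        # 3) Full miss: compute locally and share with peers.
        entry = self._compute_fn(prompt)
        self._local[key] = entry
        for peer in self._peers:
            peer.put(key, entry)
        return entry


if __name__ == "__main__":
    # Toy "compute" standing in for an expensive prefill pass.
    expensive_calls = []

    def fake_prefill(prompt: str) -> str:
        expensive_calls.append(prompt)
        return f"kv-state-for:{prompt[:16]}"

    cache = DistributedPromptCache(peers=[Peer("device-b")], compute_fn=fake_prefill)
    cache.get("You are a helpful assistant. Summarize:")   # miss -> compute
    cache.get("You are a helpful assistant. Summarize:")   # local hit
    print("expensive computations:", len(expensive_calls))  # 1
```

This only captures the generic cache-then-peer-then-compute pattern; the actual system presumably also addresses cache placement, consistency, and transfer cost between devices, which a sketch like this ignores.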