AMMA: A Multi-Chiplet Memory-Centric Architecture for Low-Latency 1M Context Attention Serving
arXiv:2604.26103v2 Announce Type: replace-cross
Abstract: All current LLM serving systems place the GPU at their center, from production-level attention-FFN disaggregation to NVIDIA’s Rubin GPU-LPU heterogeneous platform. Even academic PIM/PNM proposals…