r/LocalLLaMA — "Supercharging LLM inference on Google TPUs: Achieving 3X speedups with diffusion-style speculative decoding" (Google Developers Blog)
Submitted by /u/eternviking, May 5, 2026