Supercharging LLM inference on Google TPUs: Achieving 3X speedups with diffusion-style speculative decoding - Google Developers Blog

Submitted by /u/eternviking, May 5, 2026