I'm building a native MLX implementation of DFlash (paper) for Apple Silicon. A small draft model generates 16 tokens in parallel via block diffusion, and the target model verifies them in one forward pass. Output is bit-for-bit identical to the baseline (greedy exact argmax match).

Setup: M5 Max, 64GB, MLX, no CUDA.

**Results**

**Qwen3.5-9B bf16**
**Qwen3.5-4B bf16**
The 4B actually gets faster at longer generation: the model is small enough that the draft/verify balance stays healthy as context grows.

**Qwen3.5-27B quantized**

8-bit gives better speedup ratios than 4-bit. int4 makes the verify so fast that the bf16 draft becomes the bottleneck; with int8, the draft/verify balance is healthier.

All numbers are generation only (first token to last token, no prefill). Acceptance is around 80-87% across all models.

**What I built**

No DFlash MLX implementation existed, so I wrote the runtime from scratch. What actually moved the numbers:

- **head_dim=256 patch.** Qwen3.5-9B uses head_dim=256, which MLX's steel_attention didn't support. A 2-line patch unlocked the fast SDPA path.
- **Sync elision.** Restructured the pipeline from 2 GPU→CPU syncs per cycle to 1. At 80+ tok/s, each sync costs ~0.5 ms.
- **Packed QKV projection.** 3 matmuls → 1 matmul + split. Fewer kernel dispatches per layer.

**Lessons on Apple Silicon**

On unified memory everything is bandwidth-bound, which changes the speculative decoding game:

- Custom Metal kernels (batched GEMV, fused gated SiLU, custom SDPA) all came back 0.5 to 0.8x slower than stock MLX steel GEMM. I ended up reverting all of them.
- Verify cost is almost flat from 4 to 16 tokens (57 ms vs 59 ms). Weight loading dominates, not token count, so "verify fewer tokens when confidence is low" doesn't help here.
- On quantized models, the optimization landscape flips: the bf16 draft becomes slower than the int4/int8 verify. This is the opposite of the bf16 case, and it's a structural limitation of speculative decoding on bandwidth-bound hardware with quantized targets.

**Currently working on**

- Draft compression/distillation for the 27B, to fix the bf16 draft bottleneck on quantized targets.
- Long-context stability. Speedup degrades past 2K tokens due to KV cache growth.
- MoE models. DFlash drafts exist for Qwen3.5-35B-A3B (35B total, 3B active): verify cost of a small model, quality of a large one.

Everything is still very much under construction. Will open source when ready.
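For anyone wondering what "greedy exact argmax match" means mechanically, here's a minimal sketch of a verify step in that style (numpy as a stand-in for the MLX arrays; function and variable names are mine, not from the actual runtime):

```python
import numpy as np

def verify_greedy(draft_tokens, target_logits):
    """Accept the longest prefix of the drafted block that matches the
    target's own argmax at each position.

    target_logits has shape (block_len + 1, vocab): row i is the target's
    prediction for position i, with one extra row for the bonus token
    after a fully accepted block. Returns (num_accepted, next_token),
    where next_token is the target's own choice at the first mismatch
    (or the bonus token), keeping output bit-for-bit equal to baseline.
    """
    target_tokens = target_logits.argmax(axis=-1)
    n = 0
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            break
        n += 1
    return n, int(target_tokens[n])
```

With a 16-token block and ~80-87% acceptance, most cycles commit a long prefix plus one target-chosen token, which is where the speedup comes from.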
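The packed QKV projection is just weight concatenation at load time, so one GEMM replaces three dispatches. A numpy sketch of the idea (dimensions here are made up for illustration, not the real model config):

```python
import numpy as np

# Hypothetical, small dims purely for illustration.
d_model, n_heads, head_dim = 64, 4, 16
rng = np.random.default_rng(0)
wq = rng.standard_normal((d_model, n_heads * head_dim))
wk = rng.standard_normal((d_model, n_heads * head_dim))
wv = rng.standard_normal((d_model, n_heads * head_dim))

# Pack the three projection weights once, at load time.
w_qkv = np.concatenate([wq, wk, wv], axis=1)

x = rng.standard_normal((8, d_model))
# One matmul + split per layer instead of three matmuls.
q, k, v = np.split(x @ w_qkv, 3, axis=1)

assert np.allclose(q, x @ wq)
assert np.allclose(k, x @ wk)
assert np.allclose(v, x @ wv)
```

The math is identical; the win is fewer kernel launches per layer, which matters more on a dispatch-sensitive, bandwidth-bound backend than on a big discrete GPU.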
8bit gives better speedup ratios than 4bit. int4 makes the verify so fast that the bf16 draft becomes the bottleneck. With int8, the draft/verify balance is healthier. All numbers are generation only (first token to last token, no prefill). Acceptance around 80-87% across all models. What I builtNo DFlash MLX implementation existed. I wrote the runtime from scratch. What actually moved the numbers: head_dim=256 patch. Qwen3.5-9B uses head_dim=256, which MLX's steel_attention didn't support. A 2-line patch unlocked the fast SDPA path. Sync elision. Restructured the pipeline from 2 GPU→CPU syncs per cycle to 1. At 80+ tok/s each sync costs ~0.5ms. Packed QKV projection. 3 matmuls → 1 matmul + split. Fewer kernel dispatches per layer. Lessons on Apple SiliconOn unified memory everything is bandwidth-bound, which changes the speculative decoding game: Custom Metal kernels (batched-GEMV, fused gated SiLU, custom SDPA) all came back 0.5 to 0.8x slower than stock MLX steel GEMM. Ended up reverting all of them. Verify cost is almost flat from 4 to 16 tokens (57ms vs 59ms). Weight loading dominates, not token count. "Verify fewer tokens when confidence is low" doesn't help here. On quantized models, the optimization landscape flips: the draft (bf16) becomes slower than the verify (int4/int8). This is the opposite of the bf16 case and is a structural limitation of speculative decoding on bandwidth-bound hardware with quantized targets. Currently working onDraft compression/distillation for the 27B to fix the bf16 draft bottleneck on quantized targets. Long context stability. Speedup degrades past 2K tokens due to KV cache growth. MoE models. DFlash drafts exist for Qwen3.5-35B-A3B (35B total, 3B active). Verify cost of a small model, quality of a large one. Everything is still very much under construction. Will open source when ready. [link] [comments] |