Can we talk about the reasoning token format chaos?

  • Qwen/DeepSeek: <think>...</think>
  • Gemma: <|channel>...<channel|> OK, weird, but sure.
  • Gemma again, sometimes: just a bare thought followed by a newline, with no delimiters at all

vLLM has per-model --reasoning-parser flags, which helps, but that's basically the vLLM maintainers volunteering to play whack-a-mole forever. And if you're doing anything downstream with the raw output, you're still writing your own parser for every model.
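To make the whack-a-mole concrete, here is a minimal sketch of what that per-model parser ends up looking like. The tag strings are assumptions taken from the list above (not official specs for any of these models), and `split_reasoning` is a hypothetical helper name:

```python
import re

# Regexes per model family; the delimiter strings here are assumptions
# based on the formats described above, not authoritative specs.
FORMATS = {
    # Qwen/DeepSeek: reasoning wrapped in <think>...</think>
    "qwen": re.compile(r"<think>(.*?)</think>\s*", re.DOTALL),
    # Gemma (as reported above): <|channel>...<channel|> delimiters
    "gemma": re.compile(r"<\|channel>(.*?)<channel\|>\s*", re.DOTALL),
}

def split_reasoning(model: str, text: str) -> tuple[str, str]:
    """Return (reasoning, answer) for a raw completion."""
    pattern = FORMATS.get(model)
    if pattern:
        m = pattern.search(text)
        if m:
            # Strip the matched reasoning span out of the answer text.
            return m.group(1).strip(), pattern.sub("", text, count=1).strip()
        return "", text.strip()
    # Undelimited case: heuristically peel off the leading paragraph
    # as the "bare thought" and treat the rest as the answer.
    head, _, rest = text.partition("\n\n")
    return head.strip(), rest.strip()
```

Every new model means another entry in that dict, and the undelimited fallback is pure guesswork, which is exactly the problem.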

We just went through this with chat templates. Now we're doing it again.

Is this just Google being Google? Anyone seen any actual movement toward standardizing this or are we just vibing?

submitted by /u/ahinkle
