- Qwen/DeepSeek: `<think>...</think>`
- Gemma: `<|channel|>...<|channel|>` (ok, weird, but sure)
- Gemma again, sometimes: just the bare thought followed by a newline, with no delimiters at all
vLLM has per-model `--reasoning-parser` options, which helps, but that's basically the vLLM maintainers volunteering to play whack-a-mole forever. And if you're doing anything downstream with the raw output, you're still writing your own parser per model.
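The "your own parser per model" part looks roughly like this in practice. A minimal sketch, not vLLM's actual implementation; the tag table only covers the `<think>` family from the examples above, and the function names are made up for illustration:

```python
import re

# Known (model family -> delimiter pair) table. This is the part that
# turns into whack-a-mole: every new model potentially adds a row,
# or worse, emits reasoning with no delimiters at all.
REASONING_DELIMS = {
    "qwen": ("<think>", "</think>"),
    "deepseek": ("<think>", "</think>"),
}

def split_reasoning(model: str, text: str) -> tuple[str, str]:
    """Return (reasoning, answer) extracted from a model's raw output.

    Falls back to treating the whole output as the answer when no
    known delimiter pair matches, which is all you can do for models
    that emit bare, undelimited thoughts.
    """
    delims = REASONING_DELIMS.get(model)
    if delims:
        open_tag, close_tag = delims
        m = re.search(
            re.escape(open_tag) + r"(.*?)" + re.escape(close_tag),
            text,
            re.DOTALL,
        )
        if m:
            reasoning = m.group(1).strip()
            answer = (text[: m.start()] + text[m.end():]).strip()
            return reasoning, answer
    return "", text.strip()
```

Usage: `split_reasoning("qwen", "<think>compute 2+2</think>The answer is 4.")` separates the trace from the answer, while an unknown model or a bare undelimited thought just passes through as the answer, silently leaking reasoning to the user.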
We just went through this with chat templates. Now we're doing it again.
Is this just Google being Google? Anyone seen any actual movement toward standardizing this or are we just vibing?