LocalLLaMA

DeepSeek 3.2 eating the opening think tag on llama.cpp server?

Hey guys. Having a weird issue with the new DeepSeek V3.2 Unsloth GGUF via llama-server. The model starts reasoning fine, but the opening `<think>` tag is missing from the output stream. I just see the plain-text reasoning, and then the closing tag …
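While waiting for a proper fix, here's a rough client-side workaround sketch I've been thinking about. Everything here is my own illustration (the function name, the assumption that the tags are literally `<think>`/`</think>`, and that only the opening tag gets swallowed); it's not part of llama.cpp or the Unsloth GGUF:

```python
# Hypothetical patch-up for a response where the opening <think> tag was
# swallowed but the closing </think> still arrives: if we see a close tag
# with no matching open tag, prepend one so downstream reasoning parsers
# don't choke. Assumes the reasoning block sits at the start of the output.

def restore_think_tag(text: str,
                      open_tag: str = "<think>",
                      close_tag: str = "</think>") -> str:
    """Re-insert a missing opening reasoning tag in front of the text."""
    if close_tag in text and open_tag not in text:
        return open_tag + text
    return text

# Example: reasoning arrives bare, then the closing tag shows up.
raw = "First I consider the prompt...</think>The answer is 42."
fixed = restore_think_tag(raw)
```

This only papers over the symptom on the client side, obviously; if the template is pre-filling the opening tag server-side, the real fix belongs in the server/template config.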