Pi & Qwen3.5 with llama-cpp doing a lot of prompt re-processing
I've noticed an issue when using Pi as a coding agent with llama-cpp, and I'm wondering whether it's a problem with Pi, with how I have it configured, or just expected behavior. I'm using Qwen3.5 122b with thinking enabled….
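One common cause of heavy prompt re-processing with thinking models (not confirmed to be what's happening here, just a guess) is that the agent strips earlier `<think>` blocks from the conversation before resending it. llama-cpp's prompt cache can only reuse KV entries for the longest shared prefix between the cached prompt and the new one, so if the resent history differs from what the model actually generated, everything from the first divergence gets re-processed. A minimal sketch with hypothetical token sequences, just to illustrate the prefix-matching effect:

```python
def common_prefix_tokens(a: list[str], b: list[str]) -> int:
    """Length of the shared leading run; a prefix cache can only
    reuse entries up to the first divergence."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

# Turn 1 as generated, including the model's reasoning block.
turn1 = ["<sys>", "user:", "fix", "bug",
         "<think>", "plan...", "</think>", "patch"]

# Turn 2 resent with the <think> block stripped (what many agents do):
# the prompts diverge right where the reasoning used to be.
turn2_stripped = ["<sys>", "user:", "fix", "bug",
                  "patch", "user:", "next", "task"]

# Turn 2 resent verbatim: the whole of turn 1 is a reusable prefix.
turn2_verbatim = turn1 + ["user:", "next", "task"]

print(common_prefix_tokens(turn1, turn2_stripped))  # → 4 (diverges at <think>)
print(common_prefix_tokens(turn1, turn2_verbatim))  # → 8 (full reuse)
```

If this is the cause, the llama-server slot logs should show only a small part of the prompt being reused on each request despite long shared history; comparing what the agent resends against what the model generated would confirm it.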