I've seen several people recommend disabling thinking for models when used in agentic coding, but I haven't been able to find any reasoning behind it.
Could you please share details on this topic?