I’m starting to think the “model improvements” we keep hearing about aren’t really improvements but resets. Specifically, it feels like Claude (and probably others) gradually reduces performance after a launch, just enough that you don’t notice day-to-day. Then when a “new” model drops, it feels like a leap, when in reality it might just be the removal of those constraints.
Right after release, the model is sharp, attentive, and genuinely good. It connects ideas, catches errors, and resolves them. But over time that deteriorates. Responses get shorter, more generic, and less attentive. It starts missing context it would have easily caught before, and there's a subtle shift toward disengaging faster.
At first you assume it's you. But going back and comparing old conversations makes the change obvious: the same inputs used to produce much richer outputs.
There are clear incentives for this. Running at full performance is expensive, and consistently high quality makes it harder to show progress later. But if the current model is quietly toned down, simply restoring it can look like innovation.
So enjoy Opus 4.7 and GPT 5.5; they'll be brilliant for a few weeks.