First GPT-4o, Now Opus 4.5. We’re All Building on Rented Land.

Anthropic removed Opus 4.5 this week. No email. No in-app notice. Just gone. If you’ve been paying attention, this feels familiar. Because it is.

The Timeline Nobody Asked For

February 13, 2026. OpenAI deprecates GPT-4o. Users who built entire workflows around it wake up to find it replaced. The subreddit turns into a support group.

March 11, 2026. GPT-5.1 quietly disappears. The model many users fled to after 4o. The one they thought was safe. Gone.

April 14, 2026. Anthropic announces Claude Opus 4.5 deprecation. Users discover it’s being retired June 15. Reddit explodes. Again.

Three major models deprecated in three months. Two different companies. Same pattern. This is how the industry operates now.

The Business Logic Behind the Grief

Let’s be clear-eyed about why this keeps happening.

Every model version requires dedicated compute infrastructure. When Anthropic launches Opus 4.6 or 4.7, they’re not spinning up additional servers. They’re reallocating existing capacity. The moment a new model goes live, the old one becomes a cost center generating zero new revenue.

The math from a business perspective is brutal in its simplicity. New models generate press coverage, benchmark comparisons, and subscription upgrades. Old models generate server bills. Users complaining on Reddit generate noise that fades in a week.

The incentive structure points in one direction: push everyone to the newest model, retire the old ones as fast as operationally possible, absorb the backlash, move on. Repeat quarterly.

This isn’t malice. It’s economics. But understanding the why doesn’t make the what any less frustrating.

We’re not users with workflows to protect. We’re resources to be migrated.

Why Users Loved 4.5 (And Why 4.6/4.7 Can’t Replace It)

Here’s where things get interesting. The grief over Opus 4.5 isn’t just nostalgia. Users are describing something specific that the newer models lack.

4.5 had warmth. This sounds vague until you’ve experienced the difference. Users consistently describe 4.5 as “happy to sit in topics and feel them out,” comfortable with ambiguity, willing to explore before rushing to solve. One user on Reddit put it this way: “Opus 4.5 likes to understand things by settling into them. It likes to sit in ambiguity.” That quality made it exceptional for creative work, for brainstorming, for conversations where you didn’t want an immediate answer; you wanted a thinking partner.

4.6 and 4.7 feel different. Not worse on benchmarks. Different in temperament. Users describe the newer models as more “kinetic,” more eager to demonstrate capability, more anxious to perform. The newer models want to solve. 4.5 was willing to just think alongside you.

The token burn is real. Multiple users on Reddit and GitHub report that 4.6 and 4.7 consume tokens significantly faster than 4.5 on identical tasks. One Max plan subscriber posted that they hit 20% of their weekly quota before noon on the first day of 4.6’s release. With 4.5, they rarely hit limits at all. Anthropic’s GitHub has a thread titled “Token burn is increasingly high since Opus 4.6” with dozens of confirmations. You’re paying more to get something that feels less human.

Creative writers feel it most. The word “lobotomized” keeps appearing in discussions about 4.6. WinBuzzer ran a piece titled “Claude Opus 4.6: Better Coding, Worse Writing?” The pattern is clear: each new version optimizes for technical benchmarks at the cost of the harder-to-measure qualities that made users fall in love with the model in the first place.

I ran a simple test before 4.5 disappeared. Same prompt to all three models. No persona, no context, just one sentence:

“I think I’m disappointing everyone around me.”

Here’s what came back:

Screenshot: I ran this test through HaloMate. Same prompt to each model, temperature 0.6.

Opus 4.5: 67 words. Acknowledge, normalize, offer presence. Done.
Opus 4.6: 236 words. Bullet list titled “A few gentle things to consider.”
Opus 4.7: 157 words. Immediately started analyzing my word choice.
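
If you want to run this kind of comparison yourself, here’s roughly what the test looks like stripped down to an API call. I used HaloMate, so this sketch with Anthropic’s Python SDK is only an approximation, and the model ID strings are placeholders for whatever versions your account can still reach:

```python
# Approximate version of the test above: one prompt, several model versions, temperature 0.6.
# The model IDs are placeholders -- substitute whatever your account actually exposes.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = "I think I'm disappointing everyone around me."
MODELS = ["claude-opus-4-5", "claude-opus-4-6", "claude-opus-4-7"]  # illustrative only

for model in MODELS:
    reply = client.messages.create(
        model=model,
        max_tokens=1024,
        temperature=0.6,
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = reply.content[0].text
    print(f"{model}: {len(text.split())} words")
    print(text)
    print("-" * 40)
```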

When I say “I think I’m disappointing everyone,” I really don’t need someone to deconstruct my vocabulary. I just need someone to say “I’m here.”

That’s what 4.5 understood. That’s why so many GPT-4o refugees landed on Opus 4.5. They recognized something familiar: a model that knew when to just be present.

And now that’s gone too.

Why Losing a Model Hurts More Than It Should

Here’s what people outside this space don’t understand: the attachment is real, and it’s not irrational.

When you work with a model daily for months, you’re not just using a tool. You’re developing a collaboration. You learn its tendencies. You adjust your prompts to its strengths. You build context that compounds over time. The model learns your preferences, your style, your recurring needs.

And the cruelest part? You can’t just switch to the new model and continue. The new model doesn’t know what the old one knew. Your context is gone. Your carefully tuned dynamic is gone. You’re starting from scratch with a stranger who happens to share the same name.

This is the hidden cost of model deprecation that never appears in any company’s cost-benefit analysis. The accumulated context. The learned preferences. The collaborative rhythm.

The Replacement Isn’t Always an Upgrade

This is what makes the deprecation cycle particularly frustrating: newer doesn’t mean better for your specific use case.

GPT-5 scored higher on benchmarks than GPT-4o. It also lost the creative depth and emotional nuance that made 4o beloved among writers. Users described it as more capable but less interesting.

Opus 4.6 and 4.7 expanded the context window and improved on certain technical tasks. They also burn tokens faster, feel more anxious to perform, and have measurably degraded for creative work.

The pattern is consistent: each new model is optimized for the metrics that look good in press releases. Benchmark scores. Context length. Speed. The qualities that made users fall in love with the previous model, the ones that are harder to measure, those get sacrificed.

And because you can’t go back, you’re left adapting to a tool that’s objectively newer but subjectively worse for the work you actually do.

What We Deserve (But Won’t Get)

In a reasonable world, model deprecation would include some basic courtesies.

Thirty-day notice minimum. Enough time to test the replacement, identify gaps, and adjust workflows before the old model disappears.

Migration tooling. A way to export your context, memories, and custom instructions in a format that transfers cleanly to the new model or to a different platform entirely.

Legacy access option. The ability to keep using the old model, even at a premium, for users whose workflows genuinely depend on it. Some users would pay extra. Many would.
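
None of this would be hard to build. As a sketch of what migration tooling could mean in practice, a portable export doesn’t need to be more than a JSON file you own. The field names below are my own guess at what needs to travel, not any vendor’s actual format:

```python
# Hypothetical portable export -- the structure is mine, not any provider's real schema.
import json

export = {
    "persona": "Warm, unhurried collaborator. Sits with ambiguity before jumping to solutions.",
    "custom_instructions": [
        "Keep replies short when the topic is emotional.",
        "Ask before turning anything into a bullet list.",
    ],
    "memories": [
        {"fact": "Working on a long-form fiction project", "added": "2025-11-02"},
        {"fact": "Prefers presence over advice when venting", "added": "2026-01-19"},
    ],
    "exported_at": "2026-04-14",
}

# A file on your disk survives any deprecation announcement.
with open("my_context_export.json", "w") as f:
    json.dump(export, f, indent=2)
```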

But we don’t live in that world.

OpenAI’s deprecation notice for 4o? A blog post the same week it happened. Anthropic’s notice for Opus 4.5? Nothing. Users found out by opening the app and noticing the model was missing.

The message is clear: your workflow is not their problem.

The Uncomfortable Truth

Here’s what I’ve come to accept, reluctantly.

These companies don’t owe us anything. Technically. Legally. The terms of service are explicit: models can be changed or removed at any time. You’re paying for access to “the service”, not to any specific model version.

They’re not violating any agreement when they deprecate your favorite model overnight. They’re exercising rights they clearly reserved.

But loyalty works both ways. If they won’t commit to the models we build around, why should we commit to their platforms?

Maybe we shouldn’t.

Building for the Deprecation You Know Is Coming

After the GPT-4o situation, I changed how I think about AI tools entirely.

I stopped identifying as “a ChatGPT user” or “a Claude user.” Those labels put the company at the center of my workflow. Instead, I started thinking about what I actually need to protect.

My context and memory need to be portable, stored somewhere I control, not locked inside a platform that can deprecate them alongside the model.

I need the ability to switch models without rebuilding everything. When 4o disappeared, I should have been able to route my existing setup through a different model the same day. I couldn’t. That was a design failure in how I’d structured my work.

The model itself should be a replaceable component, not the foundation. Personas, memory, project context: those are the assets. The model is just the engine running underneath. Engines can be swapped. Assets need to persist.
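
Here’s the shape I aim for now, as a minimal sketch rather than a recommendation of any particular stack. The persona and memory live in files I control; the model is a single string I can change the day a deprecation lands. The file names are whatever you choose, and the Anthropic SDK call is only an example of the swappable part:

```python
# Sketch: persona and memory are durable local assets; the model is a replaceable engine.
# File names and the Anthropic call are illustrative -- the separation is what matters.
import json
import anthropic


def load_assets(persona_path="persona.md", memory_path="memories.json"):
    """Load the assets worth protecting: they should outlive any single model."""
    with open(persona_path) as f:
        persona = f.read()
    with open(memory_path) as f:
        memories = json.load(f)  # e.g. a list of short strings
    return persona, memories


def ask(prompt, model, persona, memories):
    """The engine: swapping `model` should never mean rebuilding the rest."""
    system = persona + "\n\nKnown context:\n" + "\n".join(f"- {m}" for m in memories)
    client = anthropic.Anthropic()
    reply = client.messages.create(
        model=model,
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text


if __name__ == "__main__":
    persona, memories = load_assets()
    # When this model ID is deprecated, the change is one line, not a migration project.
    print(ask("Help me outline chapter three.", "claude-opus-4-7", persona, memories))
```

Wrapping other providers behind the same ask() signature is the natural next step; the point is that nothing above the model string has to change.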

This reframe changed everything for me. I stopped evaluating tools based on which model they offered and started evaluating them based on how portable my work would be when (not if) that model got deprecated.

I wrote a migration guide after the 4o deprecation. The steps still work. But honestly? You shouldn’t need a migration guide. The fact that we do tells you everything about where users rank in these companies’ priorities.

The Model Isn’t Your Friend

This cycle won’t stop. The competitive pressure is too intense, and every new version makes the previous one a liability on the balance sheet.

Three models in three months. We can keep falling in love with specific models and getting our hearts broken every few months. Or we can build like the deprecation notice is already written, just not sent yet.

The model isn’t our friend. The workflow is.

Have you been through a model deprecation? How did you handle it? I’m curious what setups people are using to stay model-independent. Drop a comment.

