Companies have learned to say “AI.” Now someone has to ask for proof.

The modern layoff memo has developed a new organ.
It used to have the standard parts: regret, gratitude, market conditions, difficult decisions, severance details, and a paragraph about focus. Boilerplate built for lawyers and LinkedIn.
Now it has an AI section.
A few lines where the company stops sounding like an employer and starts sounding like a machine redesigning itself. Teams will be smaller. Managers will become player-coaches. Individual contributors will be amplified by agents. Work that once took weeks will happen in days. The future, we are told, requires fewer people around the table.
The memo does not say: we missed revenue expectations.
It does not say: investors want margin.
It does not say: the org chart got too expensive.
It says: AI changed what is possible.
Maybe it did.
But if AI is going to be named as the cause of a layoff, it should have to stand in the room and show its work.
The wrong question is “Will AI kill jobs?”
The public argument keeps collapsing into a cartoon.
On one side: AI will eliminate knowledge work, hollow out the middle class, and turn white-collar labor into a historical footnote.
On the other: every technological revolution creates more work than it destroys, so anyone worried about displacement is committing the old lump-of-labor fallacy in new clothes.
Both sides have evidence. Both sides also have incentives.
The panic side sees real compression: customer support, clerical work, junior coding tasks, copywriting, research synthesis, design iteration, QA, analysis, and coordination. The machine is not replacing a job so much as eating pieces of many jobs at once.
The optimistic side has history.
Tractors did not end farming. Electricity did not end work. Spreadsheets did not end finance. The internet did not end commerce. Every previous technology closed one room and opened another. Workers left farms for factories, factories for offices, offices for software, software for services, services for platforms.
That is the strongest argument against the AI job apocalypse.
But AI adds a new anxiety to the old pattern.
Previous technologies opened the next door. AI might be able to walk through it.
The tractor did not become a logistics planner. Electricity did not write memos. Excel did not pitch itself as the analyst, the manager, and the intern at the same time. The internet created new markets, but it did not read the job posting, apply for the role, generate the strategy deck, summarize the meeting, and write the code.
AI reaches into the layer humans usually escaped into after the last machine arrived: language, coordination, judgment, synthesis, planning, and software itself.
Andrej Karpathy’s “Software 3.0” framing captures part of this shift: prompts start to behave like programs, and natural language becomes an interface for making systems act.
That does not prove mass unemployment is coming.
It means the historical analogy needs an upgrade.
The question is no longer only: what new doors will technology open?
The question is: when the next door opens, how much of the room is still reserved for humans?
Coinbase is the cleanest doorway into the new language
Coinbase gave the discourse its latest test case.
On May 5, Reuters reported that Coinbase would cut about 700 jobs, roughly 14% of its global workforce, as it trimmed costs amid crypto-market volatility and repositioned the business for the AI era. The company expected $50 million to $60 million in restructuring charges, mostly tied to severance and other employee benefits.
What matters is not only the number.
It is the explanation.
In Coinbase’s own post, Brian Armstrong described two forces converging: the market and AI. The market was volatile. The company needed to adjust its cost structure. But AI, he wrote, had changed how quickly small teams could work. Engineers were using AI to ship in days what used to take a team weeks. Non-technical teams, he wrote, were shipping production code. Workflows were being automated.
Then came the operating model.
Fewer layers. Faster decisions. No pure managers. Leaders as player-coaches. AI-native pods. Experiments with reduced pod sizes, including what Coinbase called “one person teams,” where a single person covers engineering, design, and product management.
This is not just a layoff.
It is a theory of the firm.
The old software company was built around teams: product managers, designers, engineers, managers, staff engineers, data analysts, researchers, support layers, operations layers, trust layers. Some of that structure was necessary. Some of it was coordination tax. Some of it was empire-building. Some of it was the residue of a long hiring boom, when every future looked fundable, and every roadmap looked understaffed.
AI gives executives a new instrument for cutting through that mess.
The promise is seductive: one high-context operator plus a fleet of agents. Less handoff. Less waiting. Less process theater. Fewer meetings about the meeting. Fewer managers whose job is to translate strategy into Jira tickets and Jira tickets back into strategy.
Anyone who has worked inside a bloated organization knows why this sounds attractive.
Anyone who has been laid off by one knows why it sounds dangerous.
AI is becoming a permission structure
Here is the darker possibility.
AI may not need to replace a worker to weaken that worker’s position.
It only needs to make the worker seem more replaceable.
That is why AI is such a useful alibi. It sounds inevitable. It sounds strategic. It sounds like physics rather than management.
A company can say “we are cutting costs” and look defensive.
It can say “we are flattening management” and look ruthless.
It can say “we are becoming AI-native” and look visionary.
Same severance packet.
Better story.
Axios recently framed the pattern directly: companies are increasingly blaming AI for job cuts, but the evidence points to a messier mix of automation, cost-cutting, market pressure, and restructuring. Axios also placed Coinbase alongside Block, Pinterest, and Shopify as companies that have tied workforce cuts or restructuring to AI.
That is the core problem.
Not that AI is irrelevant. It clearly is not.
The problem is that AI can be real and still be used as a narrative cover.
A model can automate tasks. A downturn can pressure margins. A CEO can want a flatter org. Investors can reward efficiency. Junior roles can be harder to justify. All of these things can be true at the same time.
The layoff memo, however, prefers a cleaner story.
The machine did it.
The panic story is also too clean
There is a second trap: mistaking every AI-cited layoff for proof that the job apocalypse has already arrived.
The data does not support that.
Not yet.
Goldman Sachs Research says AI’s impact is already being felt in parts of tech, knowledge work, and creative work. But its base case is not instant economy-wide collapse. Goldman estimates that, over a roughly 10-year broad adoption period, 6–7% of workers could be displaced, producing a 0.6 percentage-point increase in unemployment if the transition is gradual. Goldman also says no significant AI-led shift in the whole U.S. employment mix has yet appeared in labor data.
The Atlanta Fed’s March 2026 working paper points in the same direction. Based on survey data from nearly 750 corporate executives, the authors found widespread but uneven AI adoption, positive labor-productivity gains, limited near-term job loss, and compositional shifts in jobs rather than a clean employment collapse.
Yale Budget Lab’s April 2026 tracker found a labor market that still looks more stable than apocalyptic, saying the data so far reflects “stability, not major disruption” at an economy-wide level.
So the “AI already destroyed the labor market” story does not hold.
But neither does “nothing is happening.”
Challenger, Gray & Christmas reported that AI led all stated reasons for job cuts in April for the second consecutive month, with 21,490 announced cuts during the month and 49,135 year-to-date. AI accounted for roughly 16% of all 2026 job-cut plans through April.
That number does not prove causality.
It proves narrative adoption.
Executives are saying the word more often.
And once a word enters the layoff machine, it starts doing work whether or not it explains the whole cut.
The first thing AI eats may be coordination
The popular image of AI displacement is still too robotic.
A machine walks in.
A human walks out.
That is not how this will feel.
It will feel like compression.
Like your job is getting denser.
The PM still exists, but now writes specs, prototypes workflows, runs research summaries, and ships internal tools — work that used to be distributed across several people.
The designer still exists, but produces ten directions before lunch and is expected to critique, select, and systematize faster than before.
The engineer still exists, but reviews machine-generated code, glues systems together, and is benchmarked against a velocity baseline that no one quite announced.
The manager still exists, but only if they can also build, sell, analyze, debug, or operate.
Pure coordination is no longer enough.
The junior role still exists. But the apprenticeship path gets thinner, because the machine now does a lot of the low-risk, low-judgment work that used to be how beginners learned what they were doing.
This is why the debate feels so slippery.
AI does not always delete the job title.
Sometimes it removes the slow parts, the repeatable parts, the forgiving parts — and leaves the human with a more intense, more ambiguous, higher-output version of the job.
Anthropic’s labor-market research found no systematic increase in unemployment for highly exposed workers since late 2022, while also finding suggestive evidence that hiring of younger workers has slowed in exposed occupations.
That can be good for some workers.
It can be brutal for others.
The worker who already has judgment becomes more powerful.
The worker who was still acquiring judgment loses the ladder.
The new jobs will not look like the old ladders
Stripe gives us a glimpse of the other side of the ledger: not jobs erased by AI, but jobs rebuilt around it.
Business Insider reported that Stripe is hiring a “Forward Deployed AI Accelerator” for its marketing team. The person will embed with marketers and teach them to make AI the default mode of work. The role pays between $132,000 and $198,000, requires five years of experience, and measures success partly by the number of workflows permanently transformed and the number of colleagues who start a task with an AI tool.
That title sounds absurd until you look at what it actually is.
Not a prompt engineer.
Not a copywriter.
Not a traditional marketing manager.
A workflow surgeon.
Someone hired to sit inside a team and change how the team thinks, starts, delegates, reviews, and finishes work. That is the kind of role AI creates when companies do not simply remove labor but redesign the operating system around machine assistance.
And that is the more interesting labor story.
AI may delete some roles, compress others, and create a smaller number of strange, higher-leverage roles whose job is to teach the remaining organization how to become part machine.
This is not pure job creation.
It is migration.
Work moves upward into judgment, orchestration, verification, and workflow design, while the old entry ramps into those skills get narrower.
That is the part neither the optimists nor the doomers handle cleanly.
The optimist says: new doors will open.
The skeptic asks: who gets trained to walk through them?
Ask for the receipts
If a company says AI caused layoffs, the correct response is not automatic disbelief.
It is measurement.
What exactly changed?
Did support tickets per agent go up?
Did the code shipped per engineer increase?
Did sales cycles shorten?
Did design throughput rise?
Did finance close faster?
Did compliance review more cases with fewer people?
Did product teams launch more features with smaller pods?
Did quality hold?
Did customer satisfaction hold?
Did revenue per employee rise because AI amplified the work, or because the denominator shrank after layoffs?
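That last question is simple arithmetic, and worth making concrete. A toy sketch with hypothetical, made-up numbers (not Coinbase's or anyone else's figures) shows how the metric can improve with zero productivity gain:

```python
# Toy illustration with hypothetical numbers: revenue per employee can
# rise after a layoff even if AI added nothing, because only the
# denominator moved.
def revenue_per_employee(revenue: float, headcount: int) -> float:
    return revenue / headcount

# Same $1B in revenue before and after a 14% headcount cut.
before = revenue_per_employee(1_000_000_000, 5000)
after = revenue_per_employee(1_000_000_000, 4300)

print(f"before: ${before:,.0f}  after: ${after:,.0f}")
# The metric improves ~16% without any change in the numerator.
```

If revenue per employee is the receipt a company offers, the first check is whether the numerator moved at all.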
This is the difference between AI transformation and AI theater.
A serious AI-driven restructuring should be able to answer five questions before it hands anyone a severance packet.
First: what tasks were actually automated?
Not vibes. Not “workflows” in the abstract. Specific tasks, with before and after.
Second: what metric improved before the layoff?
A company claiming AI made workers redundant should show evidence that the work was already happening faster, cheaper, or better.
Third: what non-AI pressures were present?
Market volatility, missed revenue, margin demands, and overhiring — put them on the table.
Fourth: what roles are being hired after the cut?
If a company cuts support workers and hires AI infrastructure engineers, that is not AI ending work. It is work moving.
Fifth: who absorbs the risk?
If the remaining employees inherit more scope, more ambiguity, and higher output expectations with no additional compensation, the productivity gain may be less magical than the memo suggests.
The receipt test does not assume companies are lying.
It assumes they are editing.
The future of work will be uneven
The strongest version of the AI optimist argument is not foolish.
History does not support permanent mass unemployment as the inevitable consequence of productivity gains. Cheaper cognition could expand markets, create new firms, unlock robotics, accelerate science, increase demand for technical infrastructure, and produce job categories we do not yet have names for.
Goldman’s labor analysis says AI could also create jobs, especially in the buildout of power and data-center infrastructure. It says construction jobs exposed to data-center buildout have increased by 216,000 since 2022, and that roughly 500,000 net new U.S. jobs may be needed to satisfy growing power demand by 2030.
But “new jobs will appear” is not the same as “smooth transition.”
A worker displaced from a customer support role does not instantly become a data-center electrician.
A junior analyst does not instantly become an AI systems designer.
A middle manager does not instantly become a player-coach.
A marketer whose work is compressed by generative tools does not automatically become Stripe’s Forward Deployed AI Accelerator.
Markets may adjust over time.
People live in the meantime.
That is why the “AI creates jobs too” argument can be true and still feel like something is missing.
It skips the body.
It skips the worker who has to retrain.
It skips the person whose old ladder vanished before the new one appeared.
It skips the fact that many new AI roles require exactly the judgment, confidence, and institutional standing that displaced workers were still trying to earn.
The future may contain more work.
That does not guarantee it contains a bridge.
The real story is not an apocalypse. It is bargaining power.
The AI labor story is often told as a technical story.
It is also a story about power.
When an executive says AI lets smaller teams do more, that is a claim about productivity. It is also a claim about what workers can demand in return.
The worker hears: your output baseline just went up.
The manager hears: your coordination role needs proof.
The junior employee hears: the bottom rungs of the ladder are gone.
The investor hears: margin expansion.
The customer hears: maybe faster products, maybe worse support.
The executive hears: permission.
Permission to restructure.
Permission to flatten.
Permission to cut.
Permission to demand more from the people who remain.
Permission to describe old cost discipline in the language of technological destiny.
That does not make every AI layoff fake.
It makes every AI layoff worth interrogating.
The better question
So here is what to ask the next time a company announces layoffs and invokes AI:
Did AI replace the work — or did AI improve the story?
Sometimes the answer will be both.
That is what makes this moment genuinely difficult. The AI transition is real enough to change work, but vague enough to excuse almost anything. It can augment employees, compress roles, eliminate tasks, create new jobs, justify infrastructure spending, and hand executives a cleaner narrative than “we hired too many people and need better margins.”
The future of work will not be decided by the chatbot alone.
It will be decided in the gap between the demo and the org chart.
Between the productivity claim and the severance email.
Between the task that disappeared and the worker who did not know their job had quietly decomposed into parts.
The productivity claim went out on Monday.
The severance email went out on Friday.
Nobody showed the work in between.
AI Is the New Layoff Alibi was originally published in Towards AI on Medium.