What should we take from Anthropic’s (possibly) terrifying new report on Mythos?

Everybody’s talking about Anthropic’s new but unreleased model Mythos (and the related Project Glasswing) and how it might undermine or even devastate cybersecurity. There are a lot of ostensibly terrifying reports about Mythos, like this one:

Tom Friedman panicked about it in the NYT, too.

Since I haven’t gotten to play with Mythos (which has not been publicly released), there’s a lot I don’t know, but here are a few bets:

  1. It’s probably not as bad as they say, as AI and cybersecurity expert Heidy Khlaaf explains in this thread, which I highly recommend:

    As a cybersecurity friend said to me in a text: “I haven’t had time to go through all of the write-ups yet, but it smells overhyped to me. Oh, we have this powerful model, but you can’t evaluate it yourself. I don’t doubt some of the results, but what’s made of it and the conditions under which the vulnerabilities were found, and what role humans played, isn’t clear… To me, the conditions and the scenarios are what matter. My gut tells me that if they released it publicly, we’d have some advancements but far from the exponential benefits they seem to be implying. It seems to me they are planting seeds in the hype garden.”

  2. Whether or not Mythos is AGI per se is a red herring. (It probably isn’t; it’s telling that the report says very little about overall capabilities.) Crucially, AI doesn’t need to be AGI to cause harm! ChatGPT can’t reliably run a timer but it has still been implicated in delusions, suicides, cognitive surrender, mass disinformation, and so much more. Mythos may wreak havoc even if it hallucinates and lacks reliability outside of domains like coding and math. A system doesn’t have to be AGI to carry risks.

  3. The strongest lesson here is about policy. Anthropic showed some admirable restraint in not publicly releasing a potentially dangerous technology.1 But some of their competitors (such as OpenAI and xAI) might well not. Whether Mythos is as scary as it sounds or not, the reality is that without any government oversight on what can be released, we are entirely at the mercy of individual CEOs, some of whom have decidedly not earned our trust.

  4. A corollary, as a friend who read a draft of this essay pointed out: “it is impossible to disentangle real concerns from fear-mongering being used as a marketing strategy, and so it just is not possible to separate justified panic from mere advertising, which is why we need government oversight!”

  5. What about China? We need an international agency and treaty. That’s what I have been arguing all along; it was the point of my 2023 TED talk and my 2023 invited essay with Anka Reuel in The Economist. We still need them. Mythos (and the reporting around Altman) only makes that clearer. Self-regulation is too little, too late.

The situation is this: three years of self-serving and misleading arguments about how regulation would allegedly preclude innovation have left us up shit creek without a paddle.

I can’t tell exactly how far Mythos (or the similar competitive systems that will no doubt follow) leaves us up that creek, but I do know that the time for crafting paddles is running out.


PS. It is worth rereading this warning I issued in January 2025, when asked by Politico for a black swan prediction:

1

Anthropic also announced a big consortium, which it is partly funding, to work on cybersecurity. That is likely a good thing, though one that should work in conjunction with law enforcement and regulation.
