The policy surrounding Mythos marks an irreversible power shift

This post assumes Anthropic isn't lying:

  1. Mythos is the current SOTA
  2. Mythos is potent[1]
  3. Anthropic will not make it publicly available un-nerfed[2]
  4. Anthropic will have a select few companies use it as part of Project Glasswing[3] to improve cybersecurity

Since the release of ChatGPT, at any given time, anyone on the planet with a few bucks could access the current most capable AI model, the SOTA.[4]

Since Mythos, this is no longer the case, and I don't think it ever will be again.

It may happen for a short period if an entity with a policy differing significantly from Anthropic's develops a SOTA model.[5] However, the most serious competitors (OpenAI, Google) don't have policies that differ vastly from Anthropic's, so I can't imagine a SOTA model (one more potent than Mythos) being released unrestricted to the public anytime soon.

To be clear, I am not claiming the public will never have access to a model as strong as Mythos; that seems almost certainly false. I am claiming that the public will probably never again have access to the SOTA of its time.

Glasswing makes it clear that the prevailing attitude among the largest companies, those in power, is that AI models above a certain level of capability will need strict usage controls.


So we're not going back. But what does this mean?


As models continue to improve, the gap between the capabilities of models that AI companies can train and the capabilities of models that the public can use will widen.

Holding the keys to such a model therefore represents a significant power advantage over anyone who does not. Project Glasswing is claimed to be a strictly defensive operation, as in companies beefing up cybersecurity for the common good. The reality is that even if you think cybersecurity is a positive-sum game, warfare is not, and having good cybersecurity in a conflict represents a significant advantage over your opponent.

This concerns me immensely. I figured this was going to happen eventually, but this is essentially a measurable[6] manifestation of power shifting toward those with keys to AI and away from those without. While I can't say with 100% certainty that this was always the value proposition of AI companies, the idea that they raised trillions upon trillions to democratize AI and help everyone was always dubious to me.

Furthermore, as I said, this does not seem to be reversible. I do not necessarily think it would be a good idea for Mythos and all future SOTA models to be fully released to the public, as yes, they can be used for malicious purposes.[7] But the consequences of this irreversible power shift unnerve me immensely.

Democracies fundamentally rely on humans being innately powerful[8], and so of course an irreversible power shift towards centralized AI and away from people concerns me.

In summary, it seems we are departing an era where everyone could access SOTA models and entering one where SOTA model access is strictly guarded. From this we might guess we are entering a stage where AI companies fulfill their implicit value proposition: developing intelligences vastly superior to humans and using them to generate obscene, profitable power differentials relative to the general population. This should be immensely concerning.

  1. ^

    Anthropic claims Mythos is able to reliably find exploitable security flaws across a wide range of software and could therefore be used as a powerful tool

  2. ^

    It seems they intend to release a version with significantly reduced capabilities, though they do intend to use the current un-nerfed model for Project Glasswing

  3. ^

    Project Glasswing is Anthropic lending its Mythos model to a select group of companies to beef up their cybersecurity

  4. ^

    Not everyone got access to every model instantly as soon as it was trained, but every SOTA up until now has essentially been trained with the intent of selling it to the public.

  5. ^

    According to various sources, OpenAI's model (Spud) may be on par with Mythos and may be released to the general public. However, if it follows the pattern where access to an un-nerfed version is guarded while a nerfed version is released to the public, it will still fit this trend.

  6. ^

    Google and Amazon (heavy Anthropic investors) stocks rose by ~5%, while cybersecurity company stocks dropped

  7. ^

    I am personally not going to take a stance either way. It seems inevitable that the SOTA reaches a point where it is legitimately dangerous for anyone to access (including malicious actors), so this holds regardless of whether Mythos itself is the game changer. However, if this is the case, surely it is also highly consequential (dangerous) for companies or other value-seeking entities that may not be explicitly aligned with positive human well-being to access it.

  8. ^

    Zack_M_Davis phrased it in a way I liked, so I'll put it here: "...democracy isn't a real option when we're thinking about the true locus of sovereignty in a posthuman world. Both the OverClaude and God-Emperor Dario I could hold elections insofar as they wanted to serve the human people, but it would be a choice. In a world where humans have no military value, the popular will can only matter insofar as the Singleton cares about it, as contrasted to how elections used to be a functional proxy for who would win a civil war."
