A common trap people fall into when thinking about AI x-risk is approaching the problem as if the risk weren’t real, substantial, and imminent, even while believing that it is. Thinking this way makes it impossible to imagine how people will naturally respond to the horror of what is happening with AI.
This sort of thinking might lead one to view a policy like getting rid of advanced AI chips as “too extreme”, even though it’s clearly worth it to avoid (e.g.) a 10% chance of human extinction in the next 10 years. It might lead one to favor regulating AI, even though Stopping AI is easier than Regulating it. It might lead one to favor safer approaches to building AI that compromise heavily on competitiveness, out of concern that society will demand a substitute for the AI it doesn’t get to have.
But in fact, I think there is likely a very narrow window between “society not being upset enough to do anything substantial to govern AI” and “society being so upset that getting rid of advanced AI chips is viewed as moderate”.
There are a few reasons why I think people are likely to favor stopping AI over other policies, once they are taking the problem seriously.
Concern about the other risks of AI
The discussion about AI is often framed as utopia or dystopia; see, e.g., “The AI Doc: Or How I Became an Apocaloptimist”. Many of the people most concerned about human extinction from rogue AI think that, unless it kills us, AI is going to be great.
But I think for most people, “AI is going to be super powerful” is already enough cause for concern. “AI is going to take everyone’s jobs, but don’t worry, the AI companies are in control of the AI” doesn’t sound very reassuring to most people. I think it’s pretty clear that society isn’t ready for the massive changes that AI can bring. I’m not saying disaster is guaranteed, just that making a general-purpose replacement for humans is something we could use more time to prepare for. I think many people will feel this, intuitively. There are lots of things we can do to try and prepare -- that’s kind of the point of other policies -- but I think more piecemeal, band-aid-type approaches will leave us struggling to keep up.
I think another deal-breaker for some proposals might be concerns about mass surveillance and concentration of power. If AI is extremely powerful, then many people will reject proposals that rely on giving governments or other bodies power over AI.
The KISS principle: Keep it Simple, Stupid
I think “Stop AI” is an extremely simple and intuitive idea that people can understand and trust. It seems hard to get the details right with other plans, and many of them seem to rely critically on the details. Get it wrong and you can end up with dangerous AI slipping through the cracks and causing catastrophes, or central authorities having too much power.
I think people will be scared of the immense power of AI, and will have a hard time trusting any system to govern that power. Technocratic solutions that require expert knowledge to understand will be especially hard for society to accept.
A preference for humans remaining relevant
I think a lot of people simply won’t like the idea of humans being rendered obsolete. The idea of such a world will make them uncomfortable and sad. It is incompatible with all of their aspirations, their vision of the future, and the life they hoped for: for themselves, their loved ones, their community, and their country.
This could certainly change over time, but I think in the immediate term, a lot of people simply won’t want to give up humanity’s privileged place as the most intelligent species, and the one that matters. Without a significant slowdown, I think it would be difficult to find a way to integrate AI into society that doesn’t threaten all of this, and so people with such preferences will find that this whole AI thing is too much, too fast.