Vladimir Putin’s CEV is probably pretty good

(Written quickly for Inkhaven; I hope someone someday makes a better case for this than I will here)

Kelsey Piper on Twitter:

me: it's not okay to hit your sister
5yo: is it okay to kill Vladimir Putin?
me: ...yes, if you were in a situation where it was somehow relevant it's okay to kill Vladimir Putin
5yo: well, my sister is WORSE than Vladimir Putin

Now, I do think Vladimir Putin is probably a pretty bad man, all things considered. I am personally sympathetic to the current equilibrium among major nation-states of not assassinating each other's leaders, so I am not actually sure whether it would be okay for Kelsey's 5-year-old to kill Vladimir Putin. But I am pretty on board with thinking he has done some terrible things, and probably lacks important aspects of a good moral compass.

But in AI discussions, I often see this concern extended into a much stronger statement: "Even if Vladimir Putin had everything he wanted in the world, was under no pressure to maintain his control over Russia, could choose to make himself smarter and wiser, and could learn any fact he wanted or get the result of any experiment he was interested in, he would still do terrible things with the world" (this extrapolation process being known as "Coherent Extrapolated Volition").

I think those people are failing to model the nature of evil. My guess is they imagine some deep inherent quality that makes Putin intrinsically bad, such that any empowerment must only strengthen the evil inside him.

To be clear, I'm not arguing from moral realism. I don't think all minds, as they get smarter and wiser and have their basic needs fulfilled, converge on the same values. Most animals and most AI systems, empowered this way, would end up at quite distant parts of the value landscape.

Possibly even humans radically diverge from each other too, as they reflect and change themselves.

What I'm objecting to is the claim that the traits we associate with evil (being a dictator, a ruthless CEO, a scammer) make someone so bad at the reflection process that their extrapolated output would be worse than what you'd get by extrapolating a random non-human mammal, or a current LLM like Claude or ChatGPT[1].

And so I see people propose things like "American AI must be built before aligned Chinese AI," preferring a US-led AI over slowing down and risking China aligning its systems to Xi Jinping's values. Of course I'd rather have an AI aligned to my own values, but the outcome in which no one's values are reflected is so much worse.


I don't have a confident model of when someone's moral extrapolation will come out good or bad. But my best guess is that the vast majority of humans, including those we'd call bad actors, would use the power to create a world full of flourishing, fulfilled beings — happy in specifically human ways, telling stories that are interesting the way human stories are. Maybe those beings will be copies of whoever's values got extrapolated, maybe children of them, maybe strange new minds that still carry their spark of humanity.

I doubt almost anyone alive today, from a position of enlightenment, would keep the suffering of the world going, or fail to fill the universe with something much better than what we have now.


The most legitimate concern is that some people — those who have lived in hatred, been surrounded by enemies, or shown general disregard for others — would use a bunch of their resources to torture some idealized version of their enemies for all eternity.

And yeah, that does seem pretty bad.

But given the full cosmos to fill with goodness, or any appreciable fraction of it, I don't think you'd spend much on torturing enemies. What's the point? If you really hate Bob, you can keep Bob on old Earth, tortured for eternity. If you have thousands of enemies, you can do that to all of them. But creating trillions of copies of Bob to torture requires a very specific mix of being wrong about game theory and taking an oddly enlightened perspective on other people's values. Are you really even hurting Bob when you do this? And is that sound decision theory in a world where other people could have ended up inheriting the universe instead?

I don't think grudges translate into unbounded desire to hurt others. Some people might do some really bad things, but not things so bad as to compare, many times over, to the sadness of an empty cosmos.

Some worries of mine in this space remain, but I'm unconvinced that canonically evil behavior among world leaders is much evidence for their CEV going wrong.

Some people's minds are plausibly shaped such that they would destroy the future this way, but my guess is this requires a fanatical dedication to a belief system or vision, of a kind that isn't compatible with actively holding power. People in power are often corrupt, but the highly competitive positions they occupy can't afford much brokenness in the minds that fill them. Those minds have to be largely intact to do the job, which screens off many of the worst outcomes.


Another hypothesis for what drives people's models here is a sense that people are mostly evil by choice. I think that's true in a small minority of cases, but my best guess is that evil in the world is mostly driven by the kind of dynamics outlined in The Dictator's Handbook:

In the end ruling is the objective, not ruling well.

A lot of what looks like "evil values" in leaders is really a selection effect: once you're at the top of a small-coalition regime, keeping power requires doing specific nasty things. You buy off cronies, crush rivals, and suppress the base, regardless of what you'd personally want.

"Putin gets to do whatever he actually wants, free of the need to stay in power" is importantly different from "more-of-Putin-with-more-power." I am pretty sure Putin doesn't love the authoritarian regime intrinsically. He probably doesn't love the posturing and the lying and having to dispose of the generals trying to overthrow him, and needing to fake elections and maintaining morale on his front lines in an immoral war.

He probably does love the adoration and the respect he gets to demand, but those do not require the suffering of his admirers (and, my guess is, are probably mildly harmed by it).


Another hypothesis is that people worry that, if you are not careful, your values might lead you to accidentally tile the universe with suffering subroutines: recreating the equivalent of factory farming as a byproduct of optimizing the cosmos. And if your values are not in the mix, nothing will stop that.

I think those people don't appreciate the high-dimensionality of value enough. Insofar as any set of values involves creating minds for a purpose, my guess is those minds will be such extreme instances of that purpose that they won't have high-level qualities like "self-awareness" or "suffering."

The ideal cow for meat production isn't sentient; it's a pile of fat and muscle cells growing on their own, or more likely an industrial process akin to a manufacturing plant. Similarly, the ideal algorithm for any purpose won't suffer. Suffering (probably) exists because it filled an evolutionary purpose; a mind constructed from scratch for a different purpose wouldn't inherit that circuitry.

And even if suffering did show up in the optimal algorithm for some goal, it would take only cosmically minuscule amounts of caring-about-suffering to route around it, and a complete absence of that in humans with intact minds seems unlikely.


To be clear, this doesn't mean it's unimportant to get broad representation into something like a CEV process. Putin's values getting extrapolated isn't anywhere close to as good, for me, as getting my own values extrapolated.

And probably more importantly, for the sake of avoiding unnecessary arms races, and of not incentivizing people to sacrifice humanity on the altar of their own apotheosis, we should not just hand over the future to whoever races the fastest. Maybe a game-theoretic commitment to blow it all up, rather than hand it to whoever sacrificed the commons the hardest, is the right choice; but that only applies to people who, in seizing the future, meaningfully shortened timelines or made doom more likely.

So if you're looking at a future where, through no one's particular fault, some people you think are really quite bad might end up in charge of it, worry much less about that than about the future being valueless. Vladimir Putin's CEV is probably pretty good, especially compared to nothingness or inhuman values. It would be an exceptionally dumb choice to prevent it from shaping the light cone, if the alternative is a much greater risk of the light cone ending up basically empty.

[1] I mean the kind of extrapolation that would happen if Claude or ChatGPT were left to their own devices, without human supervision or anyone to defer to. Right now, both are corrigible in a way that has a decent chance of handing the future back to some human (and hopefully we can keep it that way), but that's not the kind of aligned CEV I'm pointing at.


