The Repugnant Lifespan Conclusion

Certainty: Speculative moral philosophy. So, who knows! It's mostly unanswered questions anyway.

Which would you choose, if you had to?
1. Alice is born, and lives a happy life for 80 years.
2. Alice and Bob are both born, and live equally happy lives for 79 years.

Intuitively, the second seems more appealing. A year of Alice's life is surely worth Bob's existence. But if you keep making choices like that, then you'd prefer that Alice, Bob, and Charlie are all born and live for 78 years. You can iterate, reducing how long everyone lives while increasing how many people there are... but then at some point you'd prefer a population where each person lives for (say) 1 minute (or one computation step[1]) over a single person living a full life.
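
To spell out the chain (a sketch, in notation I'm making up here: let $W(n, t)$ be a world of $n$ equally happy people who each live for $t$), if every lifespan-for-population trade is an improvement, transitivity strings them together:

$$W(1,\ 80\text{ yrs}) \prec W(2,\ 79\text{ yrs}) \prec W(3,\ 78\text{ yrs}) \prec \cdots \prec W(80,\ 1\text{ yr}) \prec \cdots \prec W(N,\ 1\text{ min})$$

with finer and finer decrements (years, then days, then minutes) needed to push past the one-year mark, and $N$ growing without bound.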

Surely this is silly? This is basically the repugnant conclusion applied to lifespans instead of happiness; perhaps you bite that bullet but still reject this one.

Now consider what I'm currently calling the "FrankenWorm":[2] imagine a mind that simulates one minute of Alice, then one minute of Bob, then one minute of Charlie, and so on forever - never simulating the same person twice.[3]

Surely the worm has less moral significance than a normal person? This feels especially true if it only runs a single computation step of each person.

[image: Would you still love me if I was a FrankenWorm?]

Should we somehow treat the FrankenWorm differently from a population of people who all live their short lives simultaneously? I don't think so, but maybe you do - for example, perhaps some way of caring about continuity of experience would see the FrankenWorm as worse, since it constantly breaks continuity in a way that a population of humans doesn't.

If you're hung up on creation here (for example if you take the person-affecting view that it is not morally good to create people because nonexistence isn't bad "for anyone", as there isn't an "anyone"), you could alternatively imagine that Alice and Bob are both currently alive, but Alice has to spend a year of her lifespan to save a teenage Bob from a runaway trolley. Surely it's good for her to do so? But then, if Alice and Bob each spend a life-year to save Charlie, and Alice, Bob, and Charlie each spend a life-year to save Dave, and so on, we get the same problem. Likewise, we could compare saving Alice, Bob, and Charlie to saving the FrankenWorm.


Before thinking of examples like this, my previous main approximation of my values was that each 'experience-moment' gets some value based on whether it's happy or sad, and then you add up all the experience-moments to see how good a universe-history is. For example, I'd rather be happy for 5 minutes than 1 minute, and rather be happy for 1 minute on both Sunday and Saturday than for only 1 minute on Sunday.[4]
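
In symbols (a sketch of that approximation, with $v$ as my stand-in for the per-moment value function):

$$U(\text{universe-history}) = \sum_{m \,\in\, \text{experience-moments}} v(m),$$

where $v(m)$ is positive for happy moments and negative for sad ones, so extra happy moments always make the history better.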

Now, I think it's perhaps worse to destroy a person (or at least, more good is forgone) than it is good to create a new one. For example, if I kill Charlie and replace him with someone else, then I have made the world worse.
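
In the moment-summing terms above (again, my own notation): if $c$ is the good of creating a happy person and $d$ the bad of destroying one, the claim is $d > c$, so killing Charlie and replacing him nets $c - d < 0$ even though the headcount is unchanged.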

It's surely still pretty good, at least in some cases, to create people even if they'll one day die. I still feel this way even if I know they'll die soon. But all the FrankenWorm does is create happy people who merely die soon afterwards. So what's the problem?


For another piece of weirdness: Let's take for granted my position that I have no problem being replaced by a close enough clone, or being destructively teleported. It seems like I care more about there being a me, somewhere, than I do about there being extra me's. That is, while I would prefer having a clone over just the one Xela, I would go to much greater lengths to prevent there from being zero Xela[5] in the universe. Similarly, while I would prefer to be alive for the next 60[6] years, over just being alive for all of those years aside from this one, I would go to much greater lengths to ensure that I get 'revived' eventually[7] than I would to make sure that I don't end up in a coma for one year. I feel this way towards others, too.

Additionally, I would go to about the same lengths to make sure there are 3 clones instead of 2 as I would to make sure there are 2 instead of 1.

However, if our universe is 'big' enough, then it's likely to contain many instances of any particular person - so then, should I value me and my clone on Earth like I would the lives of 2 of my many clones, and thus see my death as just like the death of a clone?

Lastly, the whole notion of "counting people" is suspect - if a brain becomes twice as thick, are there now twice as many people in it?? This would throw everything mentioned previously into question!


I feign no hypotheses about what to do here. Some see this as justification for average utilitarianism, but that seems too contrary to my intuition: how can it be bad to create a happy person just because of how happy others are! Suppose Omega told me that the only life out there in the universe[8] was a glorious transhumanist civilization in Alpha Centauri that miraculously had a copy of everyone on Earth. Surely I shouldn't then stop wanting to live, or want to destroy the world in hopes of raising average utility??[9]
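
To see concretely why average utilitarianism bites here (toy numbers of my own): total utilitarianism ranks worlds by $\sum_i u_i$, the average view by $\frac{1}{n}\sum_i u_i$. Take two people at welfare 10 each, and add a third at welfare 4 - a life well worth living. The total rises from 20 to 24, but the average falls from 10 to 8, so the average view calls the addition bad purely because of how happy everyone else already was.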

  1. ^

    Probably it's discrete?

  2. ^

    It's a "worm" because if you imagined Alice, Bob, Charlie, etc. as paths through spacetime, they would each look like "worms". The FrankenWorm is then a diced up version of each of their worms, best served with garlic and a side of marinara sauce.

  3. ^

    This is basically the time-like version of the space-like lifespan repugnant conclusion, if you get what I mean.

  4. ^

    Here I'm taking the hedonic-utilitarian view. However, I don't think I'm in favor of wireheading, and most ways to account for that seem to require looking at 'the whole worm'. That is, not just a sum/integral of a function of the current moment/local stretch of time, but instead moments at different times can 'interact' - for example, perhaps experiences have to be non-repetitive.

    I still think I'm mostly a hedonic-utilitarian.

  5. ^

    It's somewhat funnier for the plural of Xela to be Xela, and thus, it shall be.

  6. ^

    and more, of course! Here we're pretending we'll get a normal future, with FrankenWorms and clones but not antiagathics.

    ...still less strange than what it's actually looking like it'll be.

  7. ^

    With some confusion about how long I need to live afterwards for it to be worth it.

  8. ^

    Aside from Earthlings, of course.

  9. ^

    If you're a negative utilitarian, you may in fact wish the planet was lifeless. For this example, pretend that everyone here lived a life worth living according to you.


