Estimates of the expected utility gain of AI Safety Research

When thinking about AI risk, I often wonder how materially impactful each hour of my time is, and I think that this may be useful for other people to know as well, so I spent a couple of hours making a couple of estimates. I basically expect that a tonne of people have put a bunch more time into this than me, but this is nice to have as a rough sketch to point people to.

I'm going to make 3 estimates: an underestimate, my best-guess estimate and (what I think is) an overestimate.

Starting facts[1]

  • Currently 8.3 Billion people on planet earth
  • Current median age: 31.1 years
  • Current life expectancy: 73.8 years

I am going to commit statistical murder and assume this means that everyone on the planet lives ~42.7 years from this point onwards. 

  • Underestimate: 40 years of life left/person
  • Median: 42.7 years + ~15 years' increase in life expectancy (20 years' growth in the past 60 years) = about 60 years of life left
  • Overestimate: Everyone gets life extension and lives to heat death of universe: 10^100 years
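The remaining-years arithmetic above is simple enough to check in a couple of lines (a sketch using the post's own figures from worldometers.info):

```python
# Remaining-life figures used above (assumptions from the post,
# sourced from worldometers.info).
life_expectancy = 73.8  # years
median_age = 31.1       # years

remaining = round(life_expectancy - median_age, 1)
print(remaining)  # 42.7 years left for the median person

# Median scenario: life expectancy keeps rising (~20 years over the
# past 60, i.e. ~1/3 year per calendar year), adding roughly 15 years.
print(round(remaining + 15, 1))  # 57.7, rounded up to ~60 in the post
```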

Since the population is growing, we should take that into account:

  • Underestimate: We only care about the lives of people currently alive
  • Median: We keep growing at current ~1% growth rate per year
  • Overestimate: Population growth of 2% per year until the heat death of the universe

Given these parameters, we can figure out the total expected years of life we care about for each scenario: 

  • Under: 40 years x 8.3 B = 332 Gyr
  • Median: 

    Current population: 60 years x 8.3 B = 498 Gyr

    Additional population (linear approximation): ~1%/year x 8.3 B = ~83 M extra people/year

    Additional population life span: 73.8 years + ~1/3 yr added/year = about 110 years

    Total expected years of life: 498 Gyr + (83 M x ~108 years) = 498 Gyr + ~8.93 Gyr per additional year of growth

  • Overestimate: 10^100 years x 1.02^(10^100) = broken calculator.
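As a sanity check, the under- and median-scenario totals can be reproduced in a few lines (the ~107.6-year lifespan for later generations is my back-out from the 8.93 Gyr/year figure, not a number stated directly in the post):

```python
POP = 8.3e9  # current world population (figure from the post)

# Underestimate: everyone alive today gets 40 more years.
print(40 * POP / 1e9)  # 332.0 Gyr

# Median: everyone alive today gets ~60 more years...
print(60 * POP / 1e9)  # 498.0 Gyr

# ...plus ~1% population growth per year; each extra person lives
# ~108 years once rising life expectancy is folded in (backed out
# from the post's 8.93 Gyr/year figure).
extra_per_year = 0.01 * POP * 107.6
print(round(extra_per_year / 1e9, 2))  # 8.93 Gyr of life per year of growth
```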

I think it might be best to skip the overestimate. For the underestimate, we'll assume 20 years of research across the entire field produces a 1% chance of a 1% decrease in the final risk, with extinction occurring 30 years from now. For the median estimate, we'll assume 5 years of research produces a 50% chance of a 5% reduction in risk, with extinction occurring 10 years from now.

Expected years of life available to be saved:

  • Under: 332 Gyr x ((40-30)/40)  = 83 Gyr
  • Median: 498 Gyr x (60-10)/60 + 8.93Gyr x 10 = 415 Gyr + 89.3 Gyr = about 500 Gyr 
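The discounting step above, sketched with the post's numbers (each total is scaled by the fraction of life still ahead at the assumed extinction date, plus the extra people born before it):

```python
# Underestimate: extinction in 30 of the 40 remaining years.
under_available = 332e9 * (40 - 30) / 40
print(under_available / 1e9)  # 83.0 Gyr

# Median: extinction in 10 of the 60 remaining years, plus ~8.93 Gyr
# of life added per year of population growth over those 10 years.
median_available = 498e9 * (60 - 10) / 60 + 8.93e9 * 10
print(round(median_available / 1e9, 1))  # 504.3 Gyr, "about 500" above
```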

Expected years of life actually saved:

  • Under: 83 Gyr x 0.01 x 0.01 = 8.3 Myr
  • Median: 500 Gyr x 0.5 x 0.05 = 12.5 Gyr
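The probability-weighting step, using the chances and reductions assumed earlier:

```python
# Probability-weighted impact of the whole field (figures from the post).
under_saved = 83e9 * 0.01 * 0.01   # 1% chance of a 1% risk reduction
median_saved = 500e9 * 0.5 * 0.05  # 50% chance of a 5% risk reduction

print(round(under_saved / 1e6, 1))   # 8.3 Myr
print(round(median_saved / 1e9, 1))  # 12.5 Gyr
```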

Number of AI Safety researchers: 

  • Under: 10k researchers
  • Median: 2.5k researchers (to account for the growth of the field; current estimates are closer to 1-2k).

Expected impact per researcher:

  • Under: 830 yrs
  • Median: 5Myr

We've said the researchers have 20/5 years to make an impact, which gives us:

  • Under: ~40 years of life saved/year
  • Median: 1 Myr of life saved / year
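The last two steps chained together, dividing by field size and then by years of research (constants taken from the text above):

```python
under_saved = 8.3e6    # life-years saved, underestimate
median_saved = 12.5e9  # life-years saved, median

# Per researcher.
print(under_saved / 10_000)        # 830.0 yrs per researcher
print(median_saved / 2_500 / 1e6)  # 5.0 Myr per researcher

# Per researcher-year (20 years of research vs 5).
print(under_saved / 10_000 / 20)       # 41.5 yrs saved per year of work
print(median_saved / 2_500 / 5 / 1e6)  # 1.0 Myr saved per year of work
```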

Going back to the ~40 years of life expected for the modern median human, this gives an underestimate of 1 year of work to save one life, or a median estimate of about 5 minutes of work per life (counting a ~2,000-hour working year). Funnily enough, this is a pretty broad range.

1 year of work to save one life is just a tad worse than the 1.2 lives/year saved by donating £3000/year, as advertised by Effective Altruism UK. If we take that value as given and assume 1 life = £2500, then on the median estimate your work is equivalent to earning £2500 x 10^6 / 40 = £62.5 million/year. If only the world were more sensible.
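The final conversion, sketched out (the ~2,000-hour working year is my assumption; the "5 mins/life" figure only works out with working time rather than calendar time):

```python
work_minutes = 2_000 * 60  # minutes in a ~2,000-hour working year

# Median: 1 Myr of life saved per researcher-year, ~40 years per life.
lives_per_year = 1e6 / 40             # 25,000 lives per researcher-year
print(work_minutes / lives_per_year)  # 4.8 minutes of work per life

# Implied earning-to-give equivalent at £2,500 per life saved.
print(lives_per_year * 2_500 / 1e6)   # £62.5 million per year
```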

  1. ^

    All population data comes from https://www.worldometers.info


