What economists get wrong (and sometimes right!) about AI

Related: https://www.lesswrong.com/posts/rmYj6PTBMm76voYLn/publishing-academic-papers-on-transformative-ai-is-a


Sadly, there aren't many of them: probably fewer than a dozen economists are taking transformative AI seriously at all. Partly this is because they've seen a long history of 'straight lines on graphs' and are skeptical of claimed transformative technologies, but a good chunk of it is doubtless because Acemoglu (a giant in the field long before his Nobel) published a piece, "The Simple Macroeconomics of AI", which used conservative methods and conservative parameter values (by the standards of the CS community, ridiculously conservative) to claim that the impact of AI over a decade would be... 0.5% of the economy; i.e. a nothingburger. That one paper probably did more than anything else to suffocate the field: if you're going to say Acemoglu is wrong, you want to be **damn sure** that you're right.


Acemoglu's longtime collaborator Restrepo (who's on Anthropic's economics panel!) took the idea of labor replacement seriously, and predicts that human wages will fall to the "compute-equivalent cost" of having an AI do the work; this is probably true. He also offers a (deeply flawed) proof that humans won't be any worse off in this regime. But he clearly hasn't taken compute advances seriously enough to actually calculate those equivalent wages; my own estimate is that by 2029 they'll be below the rice-subsistence price (meaning that a full day's work won't buy a day's worth of rice).
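For concreteness, here's the back-of-envelope form of that calculation. Every number below is a made-up placeholder: the starting cost, the halving rate, and the rice price are illustrative assumptions, not Restrepo's figures or my actual estimates.

```python
import math

def years_until_below_subsistence(cost0=50.0, halving_years=1.0, rice_usd=0.60):
    """Years until the compute-equivalent daily wage falls below a day's rice.

    cost0:          hypothetical cost today of an AI doing a human-day of work (USD)
    halving_years:  hypothetical time for that compute cost to halve
    rice_usd:       hypothetical cost of a day's worth of rice calories (USD)
    """
    # Cost declines exponentially: cost(t) = cost0 * 2**(-t / halving_years).
    # Solve cost(t) < rice_usd for the first whole year t.
    return math.ceil(halving_years * math.log2(cost0 / rice_usd))
```

The point of the exercise is that the answer is insensitive to the starting cost: because the decline is exponential, being off by 2x on today's cost shifts the crossover by only one halving period.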


But by this point the cracks in the dam are showing, and most economists are starting to accept that the impact will be bigger than Acemoglu claimed. Say what else you like about them, they are persuaded by data.


One aspect where the economists are probably right is that even an intelligence explosion will take a while to really impact most of the economy. While autists such as ourselves live at a computer terminal, a huge fraction of society depends on **physical stuff**, and until the robots take over, we'll still need lots of humans doing things for other humans. Kording & Marinescu are a computer-scientist-and-economist duo who tackled this divergence head-on, pointing out that "no matter how smart you are, there's only so many ways to stack a pile of books". Their model is one of the only ones I've seen that takes the impact on wages and unemployment seriously: they estimate wages will rise until ~40% of the pure-intelligence tasks have been automated, and then start falling (noteworthy is that the Anthropic Economic Index, coupled with my own measures of the intelligence sector, suggests we're very close to that point).
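That rise-then-fall shape drops out of even the simplest task-based models. Here's a toy in the Zeira/Acemoglu-Restrepo tradition; this is my own illustration, not Kording & Marinescu's actual model, and the AI productivity parameter `A = 10` is an arbitrary choice.

```python
def wage(a, A=10.0):
    """Wage in a toy Cobb-Douglas task model.

    A unit continuum of tasks; a fraction `a` is automated by AI with
    productivity A (assumed free to run), and one unit of human labor is
    spread evenly over the remaining 1 - a tasks.  Output is
    Y = A**a * (1 / (1 - a))**(1 - a), and the wage is the marginal
    product of labor, (1 - a) * Y, which simplifies to (A * (1 - a))**a.
    """
    return (A * (1.0 - a)) ** a

# Scan automation levels: the wage rises at first (each remaining human
# task becomes more valuable) and falls once displacement dominates.
grid = [i / 100 for i in range(100)]
peak = max(grid, key=wage)
```

With these parameters the wage peaks somewhere past 50% automation; the qualitative rise-then-fall shape, not the exact threshold, is the point.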


One economist who's taken things very seriously is Chad Jones at Stanford, who has engaged not just with the possibility of a technological singularity but has also written papers grappling with existential risk; he estimates we're underfunding safety research by a factor of ~30. Even so, his latest preprint argues that (because of those physical components of the economy) the actual economic impact will be slow at first: only 4%/year growth by 2030, rising to 10%/year by 2040, and a singularity by 2060 (and that's his most rapid scenario; the baseline is more like 75 years). His model, like most macro models, doesn't really allow for unemployment effects or other social disruption. Personally I'm skeptical of approaches like this, given that the English enclosure acts introduced over 50 years of massive unemployment, but this seems to be the macroeconomist's version of a "spherical cow" (which, as a physicist, I can hardly begrudge them).
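As a sanity check on what those growth numbers imply, here's the compounding they produce if the growth rate climbs linearly from 4%/year in 2030 to 10%/year in 2040. The linear interpolation is my assumption; Jones's figures give only the endpoints, not the path between them.

```python
def gdp_multiple(start=2030, end=2040, g_start=0.04, g_end=0.10):
    """Compound GDP over [start, end) with a linearly interpolated growth rate."""
    years = end - start
    gdp = 1.0
    for i in range(years):
        g = g_start + (g_end - g_start) * i / years  # e.g. 4.0%, 4.6%, ... 9.4%
        gdp *= 1.0 + g
    return gdp
```

Even the "rapid" scenario only roughly doubles GDP by 2040: steep by historical standards, unremarkable by singularitarian ones.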


The big difference, of course, between the CS and economics worlds is belief in recursive self-improvement, and the degree to which the intelligence and physical economies are coupled. Only one paper that I'm aware of tackles this head-on: a piece by Davidson, XXX, Halperin, and Korinek (I think some of those folks are known in these parts). They explicitly model flywheel effects between software and hardware, and likewise find that a singularity is a definite possibility. More interestingly, much like Jones & Tonetti, they find that the effect will be very small at first but will blow up once about 13% of the economy has been automated. An obvious question: can we get Jones & Tonetti to line up with Davidson et al., and if so, how far along this process are we? They're aware of each other's work, but I don't see anything explicitly trying to synchronize the two models and establish a timeline; if no one does it soon, I'll work on it.
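The flavor of that "small at first, then blows up" result can be captured with a one-equation toy (not Davidson et al.'s actual model): if capability feeds back into its own growth with increasing returns, e.g. dA/dt = A², the analytic solution A(t) = A0 / (1 − A0·t) diverges in finite time.

```python
def blowup_time(A0=0.1, dt=1e-4, cap=1e6):
    """Euler-integrate dA/dt = A**2 and return the time A first exceeds `cap`.

    With increasing returns in the idea production function, the analytic
    solution A(t) = A0 / (1 - A0 * t) hits a vertical asymptote at
    t = 1 / A0: growth looks negligible for most of the run, then
    explodes -- the "slow, slow, then all at once" dynamic.
    """
    A, t = A0, 0.0
    while A < cap:
        A += A * A * dt
        t += dt
        if t > 2.0 / A0:  # safety guard; never hit for these parameters
            break
    return t
```

Whether the real economy has the exponent above 1 (and at what automation share the feedback kicks in) is exactly what separates the singularity papers from the conventional growth ones.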


So we're in a situation where economists have *finally* gotten all the pieces together and are on the cusp of engaging legitimately with transformative AI. This in turn will get policymakers to take it more seriously too - including (thanks to Chad Jones!) the possibility of extinction risk, and with it additional impetus for legislative action. Even failing that direct action, work like Korinek's is gaining traction and provides non-sci-fi reasons to intervene, which is also helpful.


