James Pethokoukis points us to a new working paper about economic growth released by the San Francisco Fed this month. Here’s a piece:
Even more speculatively, artificial intelligence and machine learning could allow computers and robots to increasingly replace labor in the production function for goods….In standard growth models, it is quite easy to show that this can lead to a rising capital share — which we intriguingly already see in many countries since around 1980 (Karabarbounis and Neiman, 2013) — and to rising growth rates. In the limit, if capital can replace labor entirely, growth rates could explode, with incomes becoming infinite in finite time.
The Fed paper is particularly amazing when you consider that when outgoing Fed chairman Ben Bernanke mentioned “robotics” in a commencement address last spring, he was the first US central-bank boss to use the word in a speech since Alan Greenspan in 2000. Expect more mentions from Janet Yellen.
Technological progress in AI and robotics — even short of the singularity — raises huge questions about the future of work, mobility, and inequality….What do we make of all those long-range economic and fiscal forecasts from folks at the Fed, Congressional Budget Office, and other expert groups? How do we plan for a future that may be just as revolutionary as the Industrial Revolution, if not more so?
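(A quick aside on that "infinite in finite time" line, since it sounds like pure hyperbole: the excerpt doesn't spell out the model, but the standard textbook mechanism is easy to sketch. The notation below is mine, not the paper's. Suppose capital fully replaces labor, so output is just $Y = AK$; a constant fraction $s$ of output is saved; and technology itself improves with accumulated capital, say $A = K^{\phi}$ for some illustrative $\phi > 0$. Then capital accumulation follows

$$
\dot{K} = sAK = sK^{1+\phi}
\quad\Longrightarrow\quad
K(t) = \left(K_0^{-\phi} - \phi s\, t\right)^{-1/\phi},
$$

which diverges at the finite time $t^{*} = K_0^{-\phi}/(\phi s)$. If labor stays essential in production, the exponent on capital stays below one and nothing explodes, which is why the "infinite incomes" bit is best read as a limiting case rather than a forecast.)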
My long-form take on this is here. The thing that gets me is that so many people continue to think of this as wild speculation. I don't mean the infinite incomes stuff, which is obviously hyperbole since we'll always need more than just capital to make the economy run. I just mean the general idea that robots and AI are pretty obviously going to have a huge economic impact in the medium-term future. This seems so obvious to me that I'm a little puzzled there's anyone left who still doesn't see it. Nonetheless, an awful lot of people still think of this as science fiction. I put the doubters into four rough buckets:
1. Moore's Law is going to break down sometime very soon, and we'll never get the raw computing power we need for true AI.
2. There is something mysterious about the human brain that we will never be able to emulate with silicon and software. Maybe something, um, quantum.
3. Meh. We've been hearing about AI forever. It's never happened before, and it's not going to happen this time either.
4. La la la la la.
#1 is at least plausible. I think we’re too far along for it to be taken very seriously anymore, but you never know. #2 is basically New Age nonsense dressed up as physics. #3 is understandable, but lazy. We heard about going to the moon for a long time too, but it didn’t happen until the technology curve caught up. We’re at the same point with AI. #4 is the group of people who kinda sorta accept that AI is coming, but for various reasons simply don’t want to grapple with what this means. Conservatives don’t like the idea that it almost inevitably will require a much more redistributive society. Liberals don’t like the idea that it might make a lot of standard lefty social programs obsolete.
As a liberal believer, I’ll put myself in the latter camp. I’m not willing to give up on the standard liberal social program because (a) I might be wrong about AI, (b) if I’m not, we’re still going to need variations on these programs, and (c) we still have to deal with the transition period anyway. I assume conservative believers might feel roughly the same way.