Wanted: Conservative Takes on the Robot Revolution

Roberto Parada

It’s been interesting to read the feedback so far to my recent piece about artificial intelligence and robots. Four years ago, when I first wrote about it, I got a fair amount of pushback. This time around, virtually everyone who’s responded has been cheering me on. Is this what it feels like to be Donald Trump at one of his rallies?

Why the change? One reason, I think, is the current economic climate: Trump has made job losses a huge national concern, and even though this has nothing to do with AI—since AI doesn’t exist yet—it’s made a lot of people more open to the possibility of future job losses from any cause. So that’s one piece. The much larger piece, though, is simply that the evidence for progress toward AI has gotten all but undeniable in the intervening years.

Needless to say, this doesn’t mean that AI is a sure thing. All the trends and evidence available to us suggest it’s coming, but no one can ever know for sure how long a trend will last. Maybe we’ll hit a brick wall in 2020 and we won’t even get driverless cars, let alone the kind of AI that puts tens of millions of people out of work. It’s possible. I wouldn’t bet the ranch on it, but it’s possible.

Still, some of the pushback has been interesting. The Robot Revolution isn’t fundamentally a partisan issue, but it’s certainly true that I think conservatives are ill-equipped to deal with it. Over at National Review, Andrew Stuttaford objects:

This thoughtful piece on what ‘robots’ are going to do to employment by Kevin Drum might be published in Mother Jones (and it comes with quite a few Mother Jones flourishes), but take the time to read it, (very) stiff drink in hand.

[Stuttaford then quotes a bit of my piece about how very few policy folks are talking about this, and offers an explanation.]

That, I suspect, is because no one has any ideas that are, for now, politically palatable (Drum lists some policy options, all of which are—to use dully conventional labels—leftish, but they merit much more than a look, even if only to think through why they might be wrong—and what the alternatives might be).

I’m not sure about “quite a few” MoJo flourishes. Maybe one or two. But why quibble? The reason I think conservatives will have a hard time with this subject is that, one way or another, the emergence of cheap and competent AI seems to demand some kind of wealth redistribution. Lefties are willing to accept this and then move on to how best we should do it. Conservatives just don’t like the idea in the first place. But is there any other class of solutions? I’m genuinely interested in hearing a conservative take on this, if there is one. Hell, I’m interested in hearing any take, as long as people are at least starting to think about it.

Here’s another bit of pushback from Adam Ozimek:

Surely if AI and robots are going to be transforming enough industries to cause mass unemployment, then this will include K-12 and college.

AI will monitor human progress much better than a professor can, and they will be able to optimally tailor the curriculum and instruction method to the students to help them achieve their highest possible potential in the way that is the most complementary to machines or fills the optimal niches that robots can’t. This will include educating humans at a young age, and also retraining them.

….This is the paradox. If machines are going to be better than humans at everything, then this includes educating humans. So when you picture humans competing against these super smart machines, you have to include the super smart machines that will help humans achieve their maximum potential. It makes no sense to assume super smart machines competing against humans stuck in today’s human capital production function. That should give the biggest worriers a bit of optimism.

I’m going to propose a, um, slightly different scenario tomorrow, but I certainly accept Ozimek’s argument on its own terms. That said, where’s the paradox? The finest education and upbringing will not turn a dullard into Einstein.¹ On a mass scale, it will almost certainly make society better and smarter than it is now, but the masses of people currently employed in unskilled and semi-skilled jobs are not suddenly going to become summa graduates of Harvard. And if we’re talking about a future where robots are already better than humans at everything, then forget it. Better education won’t make Joe Sixpack complementary to anything. Robots will be the complement to everything.

For what it’s worth, I think arguments like this are always bound to fail. There are really only two basic parts to my case about AI causing mass unemployment:

  1. Current computing trends will more or less continue, and we will begin producing usable AI starting around 2025. Sure, that may be off by a few years in either direction, but it’s coming relatively soon.
  2. General purpose AI, by definition, will be able to fill any new jobs created by AI. This won’t be like the Industrial Revolution, where workers were uprooted but eventually got new jobs tending machines. There just won’t be any jobs that humans are better at.

These are the soft points. If you want to argue against robots eventually taking all the jobs away, you need to persuasively argue that AI just isn’t going to happen any time soon. Moore’s Law is breaking down. IC technology is mature. We still have no idea what we’re doing. Plenty of experts are pessimistic about progress in AI. Etc. I address all this in my piece, but there are obviously reasonable counter-arguments to be made.

Alternatively, you can accept that AI is coming, but somehow argue that there will still be “complementary” jobs for humans. This is a much harder argument to make, I think, but not impossible. Its most popular form is that robots will never have true human empathy, so there will still be plenty of jobs for folks with “soft” social and emotional skills. As it happens, I don’t buy this for a second. We humans are not only easily fooled in our social relationships, we practically beg to be fooled. In 20 or 30 years, robots are likely to be more loved than other humans. Still, this is an argument you can make.

But that’s about it. Short of climate disaster or some kind of enormous revolution in which all the robots are destroyed around the world, these two things are all we need for mass unemployment to be only a couple of decades away. If you want to dispute that, these are the arguments you need to knock down.

¹I’d like to say that generations of experience with the upper classes demonstrate this pretty conclusively, but I guess that’s not quite right, is it? Upper-class twits may have been provided the finest, most personalized education imaginable, but it was education provided by other humans. Still.