Welcome, Robot Overlords. Please Don't Fire Us?

Smart machines probably won't kill us all—but they'll definitely take our jobs, and sooner than you think.

This is a story about the future. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.

The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.

Maybe you think I'm pulling your leg here. Or being archly ironic. After all, this does have a bit of a rose-colored tint to it, doesn't it? Like something from The Jetsons or the cover of Wired. That would hardly be a surprising reaction. Computer scientists have been predicting the imminent rise of machine intelligence since at least 1956, when the Dartmouth Summer Research Project on Artificial Intelligence gave the field its name, and there are only so many times you can cry wolf. Today, a full seven decades after the birth of the computer, all we have are iPhones, Microsoft Word, and in-dash navigation. You could be excused for thinking that computers that truly match the human brain are a ridiculous pipe dream.

But they're not. It's true that we've made far slower progress toward real artificial intelligence than we once thought, but that's for a very simple and very human reason: Early computer scientists grossly underestimated the power of the human brain and the difficulty of emulating one. It turns out that this is a very, very hard problem, sort of like filling up Lake Michigan one drop at a time. In fact, not just sort of like. It's exactly like filling up Lake Michigan one drop at a time. If you want to understand the future of computing, it's essential to understand this.

What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?

Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.

By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.

At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.

So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?

But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.
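The schedule above is easy to check with a short simulation. This is a minimal sketch, not the author's own calculation; the 1.6e17-fluid-ounce figure for Lake Michigan is an approximation, and the milestone years fall out of the doubling rule on their own:

```python
OUNCES_PER_GALLON = 128
LAKE_OUNCES = 1.6e17  # rough volume of Lake Michigan in fluid ounces (assumption)

year, added, total = 1940.0, 1.0, 0.0
history = {}  # year -> cumulative ounces poured so far
while total < LAKE_OUNCES:
    total += added          # pour this step's water
    history[year] = total   # record the running total at this date
    added *= 2              # the next pour is twice as big...
    year += 1.5             # ...and happens 18 months later

finish_year = int(max(history))
print(f"1970 total: {history[1970.0] / OUNCES_PER_GALLON:,.0f} gallons")
print(f"Lake full in {finish_year}")
```

With the rule as stated, the 1970 running total comes to roughly 16,000 gallons and the lake fills in the mid-2020s, matching the story above; the exact finish year shifts by a step or so depending on the volume estimate used.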

[Illustration: Lake Michigan filling up as a metaphor for approaching artificial intelligence]

IF YOU HAVE ANY KIND OF BACKGROUND in computers, you've already figured out that I didn't pick these numbers out of a hat. I started in 1940 because that's about when the first programmable computer was invented. I chose a doubling time of 18 months because of a cornerstone of computer history called Moore's Law, which is popularly taken to mean that computing power doubles roughly every 18 months. And I chose Lake Michigan because its size, in fluid ounces, is roughly the same as the computing power of the human brain measured in calculations per second.

In other words, just as it took us until 2025 to fill up Lake Michigan, the simple exponential curve of Moore's Law suggests it's going to take us until 2025 to build a computer with the processing power of the human brain. And it's going to happen the same way: For the first 70 years, it will seem as if nothing is happening, even though we're doubling our progress every 18 months. Then, in the final 15 years, seemingly out of nowhere, we'll finish the job.

True artificial intelligence really is around the corner, and it really will make life easier. But first we face vast economic upheaval. 

And that's exactly where we are. We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence. That's because even a thousandth of the power of a human brain is—let's be honest—a bit of a joke. Sure, it's a billion times more than the first computer had, but it's still not much more than the computing power of a hamster.

This is why, even with the IT industry barreling forward relentlessly, it has never seemed like we were making any real progress on the AI front. But there's another reason as well: Every time computers break some new barrier, we decide—or maybe just finally get it through our thick skulls—that we set the bar too low. At one point, for example, we thought that playing chess at a high level would be a mark of human-level intelligence. Then, in 1997, IBM's Deep Blue supercomputer beat world champion Garry Kasparov, and suddenly we decided that playing grandmaster-level chess didn't imply high intelligence after all.

So maybe translating human languages would be a fair test? Google Translate does a passable job of that these days. Recognizing human voices and responding appropriately? Siri mostly does that, and better systems are on the near horizon. Understanding the world well enough to win a round of Jeopardy! against human competition? A few years ago IBM's Watson supercomputer beat the two best human Jeopardy! champions of all time. Driving a car? Google has already logged more than 300,000 miles in its driverless cars, and in another decade they may be commercially available.

The truth is that all this represents more progress toward true AI than most of us realize. We've just been limited by the fact that computers still aren't quite muscular enough to finish the job. That's changing rapidly, though. Computing power is measured in calculations per second—a.k.a. floating-point operations per second, or "flops"—and the best estimates of the human brain suggest that our own processing power is about equivalent to 10 petaflops. ("Peta" comes after giga and tera.) That's a lot of flops, but last year an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory was clocked at 16.3 petaflops.
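To keep those prefixes straight, here's the arithmetic behind that comparison as a tiny sketch. The 10-petaflop brain figure is the rough estimate quoted above, not a settled number:

```python
# Each SI prefix is a factor of 1,000: giga = 1e9, tera = 1e12, peta = 1e15.
GIGA, TERA, PETA = 1e9, 1e12, 1e15

brain_flops = 10 * PETA       # rough estimate of human-brain processing power
blue_gene_q = 16.3 * PETA     # the Livermore Blue Gene/Q benchmark figure

# The supercomputer edges out the brain estimate by about 60 percent.
print(f"ratio: {blue_gene_q / brain_flops:.2f}")  # → ratio: 1.63
```

If the brain estimate is off by even one order of magnitude, of course, the crossover date moves by several years, which is exactly the kind of uncertainty discussed below.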

Of course, raw speed isn't everything. Livermore's Blue Gene/Q fills a room, requires eight megawatts of power to run, and costs about $250 million. What's more, it achieves its speed not with a single superfast processor, but with 1.6 million ordinary processor cores running simultaneously. While that kind of massive parallel processing is ideally suited for nuclear-weapons testing, we don't know yet if it will be effective for producing AI.

But plenty of people are trying to figure it out. Earlier this year, the European Commission chose two big research endeavors to receive a half billion euros each, and one of them was the Human Brain Project led by Henry Markram, a neuroscientist at the Swiss Federal Institute of Technology in Lausanne. He uses another IBM super­computer in a project aimed at modeling the entire human brain. Markram figures he can do this by 2020.

The Luddites weren't wrong. They were just 200 years too early.

That might be optimistic. At the same time, it also might turn out that we don't need to model a human brain in the first place. After all, when the Wright brothers built the first airplane, they didn't model it after a bird with flapping wings. Just as there's more than one way to fly, there's probably more than one way to think, too.

Google's driverless car, for example, doesn't navigate the road the way humans do. It uses four radars, a 64-beam laser range finder, a camera, GPS, and extremely detailed high-res maps. What's more, Google engineers drive along test routes to record data before they let the self-driving cars loose.

Is this disappointing? In a way, yes: Google has to do all this to make up for the fact that the car can't do what any human can do while also singing along to the radio, chugging a venti, and making a mental note to pick up the laundry. But that's a cramped view. Even when processing power and software get better, there's no reason to think that a driverless car should replicate the way humans drive. They will have access to far more information than we do, and unlike us they'll have the power to make use of it in real time. And they'll never get distracted when the phone rings.

True artificial intelligence will very likely be here within a couple of decades. By about 2040 our robot paradise awaits.

In other words, you should still be impressed. When we think of human cognition, we usually think about things like composing music or writing a novel. But a big part of the human brain is dedicated to more prosaic functions, like taking in a chaotic visual field and recognizing the thousands of separate objects it contains. We do that so automatically we hardly even think of it as intelligence. But it is, and the fact that Google's car can do it at all is a real breakthrough.

The exact pace of future progress remains uncertain. For example, some physicists think that Moore's Law may break down in the near future and constrain the growth of computing power. We also probably have to break lots of barriers in our knowledge of neuroscience before we can write the software that does all the things a human brain can do. We have to figure out how to make petaflop computers smaller and cheaper. And it's possible that the 10-petaflop estimate of human computing power is too low in the first place.

Nonetheless, in Lake Michigan terms, we finally have a few inches of water in the lake bed, and we can see it rising. All those milestones along the way—playing chess, translating web pages, winning at Jeopardy!, driving a car—aren't just stunts. They're precisely the kinds of things you'd expect as we struggle along with platforms that aren't quite powerful enough—yet. True artificial intelligence will very likely be here within a couple of decades. Making it small, cheap, and ubiquitous might take a decade more.

In other words, by about 2040 our robot paradise awaits.
