How Humans Can Keep Superintelligent Robots From Murdering Us All

Ultron, an artificially intelligent robot (Marvel)

While Kevin Drum is focused on getting better, we’ve invited some of the remarkable writers and thinkers who have traded links and ideas with him from Blogosphere 1.0 to this day to contribute posts and keep the conversation going. Today, we’re honored to present a post from Bill Gardner, a health services researcher in Ottawa, Ontario, and a blogger at The Incidental Economist.

This weekend, you, I, and about 100 million other people will see Avengers: Age of Ultron. The story is that Tony Stark builds Ultron, an artificially intelligent robot, to protect Earth. But Ultron decides that the best way to fulfill his mission is to exterminate humanity. Violence ensues.

You will likely dismiss the premise of the story. But in a book I highly recommend, Superintelligence, Oxford philosopher Nick Bostrom argues that sometime in the future a machine will achieve “general intelligence,” that is, the ability to solve problems in virtually all domains of interest. Because one such domain is research in artificial intelligence, the machine would be able to rapidly improve itself.

The abilities of such a machine would quickly transcend our abilities. The difference, Bostrom believes, would not be like that between Einstein and a cognitively disabled person. The difference would be like that between Einstein and a beetle. When this happens, machines could and likely would displace humans as the dominant life form. Humans may be trapped in a dystopia, if they survive at all.

Competent people—Elon Musk, Bill Gates—take this risk seriously. Stephen Hawking and physics Nobel laureate Frank Wilczek worry that we are not thinking hard enough about the future of artificial intelligence. As they put it:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here—we’ll leave the lights on”? Probably not—but this is more or less what is happening with AI. … Little serious research is devoted to these issues. … All of us … should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

There are also competent people who dismiss these concerns. University of California-Berkeley philosopher John Searle argues that intelligence requires qualities that computers lack, including consciousness and motivation. This doesn’t mean that we are safe from artificially intelligent machines. Perhaps in the future killer drones will hunt all humans, not just Al Qaeda. But Searle claims that if this happens, it won’t be because the drones reflected on their goals and decided that they needed to kill us. It will be because human beings have programmed drones to kill us.

Searle has made this argument for years, but has never offered a reason why it will always be impossible to engineer machines with autonomy and general intelligence. If it’s not impossible, we need to look for possible paths of human evolution in which we safely benefit from the enormous potential of artificial intelligence.

What can we do? I’m a wild optimist. In my lifetime I have seen an extraordinary expansion of human capabilities for creation and community. Perhaps there is a future in which individual and collective human intelligence can grow rapidly enough that we keep our place as free beings. Perhaps humans can acquire cognitive superpowers.

But the greatest challenge of the future will not be the engineering of this commonwealth, but rather its governance. So we have to think big, think long-term, and live in hope. We need to cooperate as a species and steer our technological development so that we do not create machines that displace us. At the same time, we need to protect ourselves from the expanding surveillance of our current governments (such as China’s Great Firewall or the NSA). I doubt we can achieve this enhanced community unless we also find a way to make sure the superpowers of enhanced cognition are available to everyone. Maybe the only alternative to dystopia will be utopia.
