How Humans Can Keep Superintelligent Robots From Murdering Us All

[Image: Ultron, an artificially intelligent robot. Credit: Marvel]


While Kevin Drum is focused on getting better, we’ve invited some of the remarkable writers and thinkers who have traded links and ideas with him from Blogosphere 1.0 to this day to contribute posts and keep the conversation going. Today, we’re honored to present a post from Bill Gardner, a health services researcher in Ottawa, Ontario, and a blogger at The Incidental Economist.

This weekend, you, I, and about 100 million other people will see Avengers: Age of Ultron. The story is that Tony Stark builds Ultron, an artificially intelligent robot, to protect Earth. But Ultron decides that the best way to fulfill his mission is to exterminate humanity. Violence ensues.

You will likely dismiss the premise of the story. But in Superintelligence, a book I highly recommend, Oxford philosopher Nick Bostrom argues that sometime in the future a machine will achieve “general intelligence,” that is, the ability to solve problems in virtually all domains of interest. Because one such domain is research in artificial intelligence, the machine would be able to rapidly improve itself.

The abilities of such a machine would quickly transcend our abilities. The difference, Bostrom believes, would not be like that between Einstein and a cognitively disabled person. The difference would be like that between Einstein and a beetle. When this happens, machines can and likely would displace humans as the dominant life form. Humans may be trapped in a dystopia, if they survive at all.

Competent people—Elon Musk, Bill Gates—take this risk seriously. Stephen Hawking and physics Nobel laureate Frank Wilczek worry that we are not thinking hard enough about the future of artificial intelligence.

As they wrote: “So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI…little serious research is devoted to these issues…All of us…should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”

There are also competent people who dismiss these concerns. University of California, Berkeley, philosopher John Searle argues that intelligence requires qualities that computers lack, including consciousness and motivation. This doesn’t mean that we are safe from artificially intelligent machines. Perhaps in the future killer drones will hunt all humans, not just al-Qaeda. But Searle claims that if this happens, it won’t be because the drones reflected on their goals and decided that they needed to kill us. It will be because human beings have programmed drones to kill us.

Searle has made this argument for years, but has never offered a reason why it will always be impossible to engineer machines with autonomy and general intelligence. If it’s not impossible, we need to look for possible paths of human evolution in which we safely benefit from the enormous potential of artificial intelligence.

What can we do? I’m a wild optimist. In my lifetime I have seen an extraordinary expansion of human capabilities for creation and community. Perhaps there is a future in which individual and collective human intelligence can grow rapidly enough that we keep our place as free beings. Perhaps humans can acquire cognitive superpowers.

But the greatest challenge of the future will not be the engineering of this commonwealth, but rather its governance. So we have to think big, think long-term, and live in hope. We need to cooperate as a species and steer our technological development so that we do not create machines that displace us. At the same time, we need to protect ourselves from the expanding surveillance of our current governments (such as China’s Great Firewall or the NSA). I doubt we can achieve this enhanced community unless we also find a way to make sure the superpowers of enhanced cognition are available to everyone. Maybe the only alternative to dystopia will be utopia.

